MIT 6.774 Physics of Microfabrication: Front End Processing (Fall 2004)
9. Dopant Diffusion: Numerical Techniques in Diffusion, E-Field Effects

JUDY HOYT: All right. We're beginning the lecture on handout number 15. This is the second lecture on dopant diffusion and profile measurement. And we're working on chapter 7 of your text. So last time, we introduced some of the basics of dopant diffusion. We talked about the basic definition of sheet resistance, and we pointed out that scaling creates a conflict between the need for smaller series resistance and the need to decrease junction depths to improve control of the channel charge. We also talked about short channel effects and how the junction depth has to decrease as the channel length L goes down. We mentioned the use of predeposition, either in the gas phase or by ion implantation, and then the use of a subsequent drive-in of dopants to create doped regions. We spoke about macroscopic diffusion from the point of view of Fick's first and second laws, and gave a few cases. There are only about three or four cases where there are simple analytic solutions to the diffusion equation. Today, I want to briefly review those analytic solution examples, and then give a brief introduction to numerical solutions of the diffusion equation. Then we'll talk about design of diffused layers based upon certain device requirements, such as the sheet resistance. And for our first deviation from Fickian diffusion, we'll discuss electric field effects on dopant diffusion. So let's go on to slide number 2 and just review from last time this pictorial view of Fick's second law. What I'm showing here is a volume element that is shown by the blue rectangular element. And there is a flux f sub in going in the left-hand face, and a flux f sub out going out the right-hand face. Remember again, our basic definition of flux: it's a number of atoms or particles per unit time per unit area. So looking at this cube, we can calculate the net flow of atoms into this volume, into that small volume, as shown in the upper left corner. It's equal to the area A of one of those faces times the difference between f in and f out. So that's the net flow of atoms. And just looking at the geometry, we can also write that as A times delta x times the change in concentration, delta c, divided by the change in time, delta t. So we can rearrange this a little bit. The A's drop out, and what we see is that minus the first derivative of the flux, delta f over delta x, is equal to the time change, delta c over delta t. So mathematically, if we want to write this not in terms of deltas or differences but in terms of differentials, what we see is that the time rate of change of the concentration in the box, partial c partial t, is equal to minus the first derivative of the flux, partial f partial x. And then we can substitute into the equation in the center of the page for the flux. We know from Fick's first law that f is equal to minus d times partial c, partial x. So that bookkeeping is, indeed, a way of deriving Fick's second law. Now, the equation in the center of the page can be simplified somewhat if we are in the special case where the diffusivity is a constant, where it doesn't vary in space. I can pull the d out of the partial derivative with respect to x, and the time rate of change of the concentration is then equal to a constant diffusivity, just a number, times the second derivative of the concentration with respect to x.
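For reference, here is a compact restatement of what was just derived, in standard notation rather than verbatim from the slide:

```latex
% Fick's first law (flux) and second law (continuity):
F = -D\,\frac{\partial C}{\partial x}, \qquad
\frac{\partial C}{\partial t} = -\frac{\partial F}{\partial x}
  = \frac{\partial}{\partial x}\!\left(D\,\frac{\partial C}{\partial x}\right),
% which, for a spatially constant diffusivity D, reduces to:
\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}}.
```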
So that, at the bottom of the page, is the special case of Fick's second law when the diffusivity is a constant. So let's go on to slide number 3. These are a couple of cases we solved last time, where we wrote down the solutions for two situations that are commonly encountered and for which analytic solutions are available. On the left-hand side, I'm reviewing the case of a predeposition. And again, by definition in this case, we are assuming that there is a supply of dopant, or of atoms that are diffusing, that holds the surface concentration at a constant value c sub s. That number is fixed. And we said in this case that the shape of the profile, c, as a function of depth x and time is given by the surface concentration c sub s times the complementary error function of x over 2 square root of Dt. So that is the actual analytic solution to Fick's second law for a constant diffusivity. And we also know that the integral of that function over our space, or throughout the silicon, has a value Q. That's the dose, the area under the curve. And that can be solved for, and it turns out to be 2 c sub s over the square root of pi, times the square root of Dt. So this is a particular case, and in fact the plot shows, on the vertical axis, c over cs as a function of depth for several different times. In the three curves shown, 2 times the square root of Dt is being varied from 0.1 micron all the way up to 1 micron. And you get a feel for how the shape of the complementary error function evolves with time. In contrast, on the right-hand side, we're showing the case of a drive-in, which would typically be done after a pre-dep. A drive-in takes place usually under the assumption of a constant dose Q, so the area under the curve is constant. What's happening as a function of time is that the shape of the curve is changing, and we know the solution to Fick's second law in this case is a Gaussian. In fact, if you look at this equation as a function of time, you'll see that the concentration at the surface is dropping according to 1 over the square root of Dt, and the profile is broadening: its width, or its standard deviation, is increasing according to the square root of Dt. And there are a couple of different cases shown on the right. So these solutions are valid in the particular case where the concentration of the diffusing dopant is less than n sub i. And that implies that the diffusivity is a constant; it doesn't vary in space. We'll talk in subsequent lectures about the case when the concentration is quite high, higher than n sub i. So let's go on to slide number 4. This review is from last time. What we showed is an Arrhenius plot of the intrinsic diffusion coefficients of some common dopants in silicon. What we're showing on the left axis, on a log scale, is the diffusivity in units of centimeters squared per second. The lower x-axis is 1,000 over temperature in Kelvin, and the upper x-axis has the temperatures themselves indicated. These are intrinsic diffusivities, so they apply when the concentration of the dopant is less than ni at the diffusion temperature. One thing you can notice right off the bat is that there are some fast diffusers in silicon, such as boron, phosphorus, and indium. They're the upper three curves. And the slower diffusers are arsenic and antimony.
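To collect the two analytic solutions just reviewed in one place (standard notation; the drive-in form assumes the dose Q starts as a near-delta-function at the surface):

```latex
% Pre-dep: constant surface concentration C_s
C(x,t) = C_s\,\operatorname{erfc}\!\left(\frac{x}{2\sqrt{Dt}}\right),
\qquad Q(t) = \frac{2\,C_s}{\sqrt{\pi}}\,\sqrt{Dt};
% Drive-in: constant dose Q
C(x,t) = \frac{Q}{\sqrt{\pi D t}}\,\exp\!\left(-\frac{x^{2}}{4Dt}\right).
```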
It turns out, just for practical reasons and because of its high electrically active solubility, that boron is pretty much always the p-type dopant of choice. For the n-type dopant, you can choose arsenic or phosphorus, but arsenic is usually chosen because of its higher electrically active solubility, and also sometimes its lower diffusion coefficient: it's easier to control profiles. So typically we're going to have issues with controlling diffusion in p-type regions, because at any given temperature boron is diffusing with almost an order of magnitude higher diffusivity than arsenic. All right, let's go on to slide number 5. Here what we're trying to talk about is the effect of successive diffusions in, say, a MOS process flow on the doping profiles. You remember from the first lecture of this course, we walked through an example of an integrated circuit fabrication process. And you know that it involves a large number of different steps at various temperatures. We don't simply do a pre-dep and then a drive-in and then it's done. There could be oxidations; there could be other high-temperature steps in which the dopants are going to move. So if you have a whole series of steps, the question is, what is the actual final profile, and how can we approximate it? Well, it turns out in the case of Gaussian diffusion that we can write the total effective Dt product as a measure of the thermal budget used in that process. For Gaussian diffusion, we can take the product of the diffusivity in any given step times the length of that step, D times t, sum up those Dt products, and end up with an effective Dt for the entire process. Again, this assumes that in each case the diffusion satisfies the Gaussian assumptions. What we'll typically find as you go through a whole process is that some of the processing steps may be negligible. That's because the diffusion coefficient in this formula is exponentially activated; it varies exponentially with temperature. So the highest temperature steps in the process typically dominate. That's not always the case; for example, when we talk about transient enhanced diffusion, we'll see it isn't. But for normal diffusion, you expect the highest temperature steps to dominate this equation. And so you can usually zoom in on those as being the ones that primarily determine the Dt product and, therefore, the shape of the profile. One thing you have to be aware of is that this Gaussian solution we've been talking about only holds if the delta function approximation for the initial profile is reasonable. We talked about this last time. So it holds in the case where the initial width of the profile is small compared to the final width, or compared to the total Dt. If that assumption is not valid, then we don't necessarily have a simple Gaussian solution. OK, let's go on to slide number 6. Here, we're going to go through some principles and examples of the design of diffused layers. As a process engineer, or working with device engineers, you'll typically want to design a diffusion process, a pre-dep and a drive-in, so that you end up with a certain sheet resistance that's required electrically by the device. So that's the goal of this section.
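As a minimal sketch of that Dt bookkeeping, here is one way to accumulate a thermal budget. The boron Arrhenius parameters (D0 = 0.76 cm^2/s, Ea = 3.46 eV) are assumed illustrative values, chosen to be consistent with the diffusivity quoted later in this lecture, and the thermal cycle itself is made up:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def diffusivity(d0, ea_ev, temp_c):
    """Arrhenius intrinsic diffusivity D = D0 * exp(-Ea / kT), cm^2/s."""
    return d0 * math.exp(-ea_ev / (K_B * (temp_c + 273.15)))

def effective_dt(steps, d0, ea_ev):
    """Sum D_i * t_i over (temp_C, time_s) steps -- valid when every
    step satisfies the Gaussian (intrinsic diffusion) assumptions."""
    return sum(diffusivity(d0, ea_ev, t_c) * t_s for t_c, t_s in steps)

# Assumed illustrative boron parameters: D0 = 0.76 cm^2/s, Ea = 3.46 eV.
D0_B, EA_B = 0.76, 3.46

# A made-up thermal cycle; the 1100 C step dominates the budget.
steps = [(900.0, 1800.0), (1000.0, 600.0), (1100.0, 3600.0)]
for t_c, t_s in steps:
    d_t = diffusivity(D0_B, EA_B, t_c) * t_s
    print(f"{t_c:6.0f} C for {t_s:5.0f} s: Dt = {d_t:.2e} cm^2")
print(f"effective Dt = {effective_dt(steps, D0_B, EA_B):.2e} cm^2")
```

Run it and you'll see the 1100 degree step contributes essentially all of the effective Dt, which is the point of the slide.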
And I'm showing on the left-hand side the example of a cube of silicon where the top region is colored in orange; that is the little sheet through which we're going to be passing a current and measuring the voltage. And this little sheet has a particular geometry. What we're going to consider is the case where the length of this resistor is actually equal to its width. So it's a square, and it has dimension w by w. And so we're going to define the sheet resistance of this junction. Last time we talked about it as being the resistivity, which is an intrinsic property of the material, depending on how it's doped, divided by xj, the junction depth. We can see xj marked on the left-hand side. And the units of that sheet resistance are ohms per square. So the sheet resistance is, indeed, the resistance R that you would get in a resistor made of one square of this material. And we can write it either as Roman R sub s or as the Greek letter rho sub s, the s indicating it's for a sheet. The resistivity rho, remember, is given by 1 over the quantity q, the electronic charge, times the carrier concentration n, times the mobility, which is in general a function of n. The resistivity has units of ohm centimeters, xj has units of centimeters, they cancel out, and we end up with a sheet resistance in ohms per square. That's all reasonably intuitive, assuming you have a uniformly doped layer. If a layer is non-uniformly doped, you need to integrate the doping profile times the mobility in order to calculate the sheet resistance. So the equation shown near the bottom of the slide says the sheet resistance can be obtained as 1 over the quantity q times the integral, from 0 to the junction depth, of the carrier concentration profile n of x minus nb, where nb is the background concentration that tells you where to cut off the electron concentration, times the mobility, which is in general a function of n of x. So that's what you would do in the case of a non-uniformly doped layer, where the doping concentration is varying with depth. Now, it turns out you can solve this equation yourself, but it has already been numerically integrated by Irvin for a couple of different analytical profiles: in particular, the Gaussian and the complementary error function profiles. And the solutions to those integrals are given in your text, either in chapter 7 or in an appendix. So let's go on to slide number 7. At the top of the slide we're showing an example of Irvin's curves for the particular case of p-type Gaussian profiles in an n-type background. What you see in these curves on the y-axis is the surface concentration; that would be c sub s, in atoms per cubic centimeter. The x-axis is the effective conductivity, and it has units of (ohm centimeter) to the minus 1. Thinking about it, as noted at the bottom of the slide, that is the effective average conductivity, the inverse of the average resistivity. We can write it as sigma bar, and it's equal to 1 over the product of rho sub s times xj; so it's 1 over the sheet resistance times the junction depth. So there are three key parameters we need to identify in using these curves. There's the surface concentration, which is the y-axis; again, this is for a particular profile type, in this case Gaussian. There's the sheet resistance, which comes into the effective conductivity. And there's the junction depth. So the x-axis has the sheet resistance and the junction depth built into it.
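Returning to the integral above, here is a rough numerical sketch of how one might evaluate it for an arbitrary profile. The hole-mobility function is a crude placeholder of my own, not the tabulated values behind Irvin's curves, so expect only order-of-magnitude agreement with the chart:

```python
import numpy as np

Q_E = 1.602e-19  # electronic charge, C

def hole_mobility(n_cm3):
    """Crude placeholder hole-mobility model, cm^2/V/s.
    (An assumed Caughey-Thomas-style fit, NOT the data behind
    Irvin's curves -- use real tables for design work.)"""
    return 49.7 + 418.3 / (1.0 + (n_cm3 / 1.6e17) ** 0.7)

def sheet_resistance(x_cm, c_cm3, n_background):
    """Rs = 1 / (q * integral_0^xj of (C(x) - Nb) * mu(C(x)) dx),
    evaluated with the trapezoid rule."""
    net = np.clip(c_cm3 - n_background, 0.0, None)
    integrand = net * hole_mobility(c_cm3)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x_cm))
    return 1.0 / (Q_E * integral)

# Example: a Gaussian p-well profile like the one designed next.
xj, cs, nb = 3.0e-4, 4.0e17, 1.0e15     # cm, cm^-3, cm^-3
dt = xj**2 / (4.0 * np.log(cs / nb))    # choose Dt so C(xj) = Nb
x = np.linspace(0.0, xj, 2000)
c = cs * np.exp(-x**2 / (4.0 * dt))
print(f"Rs ~ {sheet_resistance(x, c, nb):.0f} ohm/sq")
```

For real design work you'd use Irvin's curves or measured mobility data; the point here is only the structure of the calculation.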
Now, the metallurgical junction is the place, the point in depth, where the chemical concentration of the diffused dopant equals the background concentration cb. And we'll do an example shortly where you can see how these curves are used. Each curve, in a different color, is for a different background concentration. So this basically tells you that you can take these three key parameters, and they have a unique relationship among themselves for the particular case where the shape of the profile is known; in this case, it's Gaussian. So in fact, let's go on to slide number 8. The best way to understand these Irvin's curves is to go through an example. What we're asked to do in this example is to calculate the drive-in conditions, meaning the temperature and the time, to produce a CMOS p-well. So it's a p-type region, called a p-well, of a certain thickness. And we're told that the constraint is that the sheet resistance of that p-well should be 900 ohms per square. And it has to have a certain junction depth, shown schematically in the figure, with xj equal to 3 microns. That xj, again, is the point where the boron concentration reaches the background concentration, which is given as 10 to the 15th per cubic centimeter. So we're forming a p-well by ion implanting boron into a region of the wafer. The region on the left is blocked, so we're just putting it in one region. And typically, you would implant boron at a certain dose, say 10 to the 13th, and a certain energy. Then we want to drive that in. And we're interested in the drive-in conditions: what temperature and time we need to get it to go that far and to have a particular sheet resistance. We're going to use Irvin's curves to do this. So let's look at slide number 9. These are the constraints again, repeated. The sheet resistance is 900 ohms per square, the junction depth has to be 3 microns, and the background or substrate concentration, nb, is 10 to the 15th. So given the requirements we've been told, I can immediately find the x-axis value on the Irvin curve, the average conductivity of the layer. It's just 1 over the sheet resistance times xj. So the average conductivity calculates out to be 3.7 (ohm centimeter) to the minus 1. From Irvin's curve we can then obtain the surface concentration, assuming this is a Gaussian, which is an assumption we need to make at this point and which we'll show is reasonable later. Knowing the shape of the Gaussian, Irvin's integrals tell us that the surface concentration has to be about 4 times 10 to the 17th per cubic centimeter in order to achieve the average conductivity that has been specified as 3.7, assuming the junction depth is 3 microns. And if you want to see how we actually do that, we can go on to slide number 10, which shows how the curves were used. Again, this is for a p-type Gaussian profile. And all we've done on the x-axis is pick out the average conductivity of 3.7; it's a little hard because this is a log-log plot.
And that's a little tricky to find, but you can. Then we read off the y-axis, as shown by that horizontal line, a surface concentration of about 4 times 10 to the 17th. And again, we have multiple curves we can choose from. I chose the orange curve, which is for the case of a background concentration, in this example, of 10 to the 15th. If the background concentration had been a little higher, say 10 to the 16th or 10 to the 17th, the surface concentration would have moved down a little on the y-axis; it would intersect a slightly different curve. So that's how we come up with the surface concentration. So let's go on to slide 11. Thinking about that surface concentration, c sub s, we know it's well below the solid solubility of boron in silicon; it's only 4 times 10 to the 17th. And it's likely below the value of n sub i at the diffusion temperature. At 1,000 degrees, which is probably near the minimum temperature you'd use to drive this in, n sub i is about mid 10 to the 18th, and 4 times 10 to the 17th is well below that. So it's probably valid to say that the profile is Gaussian, and that there is a constant diffusivity d. Therefore we can assume this Gaussian type of solution, because it's a drive-in. So we can write down by inspection the equation in the center of the slide, which is the Gaussian equation. And we know that at the point where x equals xj, the doping concentration c of the boron reaches that of the background. That's the definition of xj. So if you look in the middle of the slide, we're saying the background concentration cb is equal to the surface concentration c sub s, which we just found, times the exponential of minus xj squared over 4 Dt. That's the definition of xj. So we know xj, we just found c sub s, and we know the background doping concentration. The only thing we don't know is Dt. So we can invert that equation and solve for Dt, plug in all the numbers, and you get a Dt product of 3.7 times 10 to the minus 9 centimeters squared. That's the Dt product, the thermal budget, if you'd like to use that term, for the drive-in process. So that gives us an idea. We don't know what D is, or what t is, but we're going to design a certain temperature and time, constrained by the fact that the Dt product has to be this number. So let's go on to slide number 12. Again, I'm repeating the Dt product from the top, 3.7 times 10 to the minus 9. Let's just assume a temperature and see what time is associated with it, and whether it's a reasonable amount of time. We know we're diffusing pretty deep, down to 3 microns. So let's just assume we drive in at 1100 degrees centigrade. You can look up from your plots, or calculate, the diffusivity of boron at 1100; it's about 1.5 times 10 to the minus 13 centimeters squared per second. And so then you can solve for the drive-in time. Calculate that out, and you get about 6.7 hours. That's plenty long. It tells you that, from a practical point of view, you would not want to do this drive-in at any lower temperature, because it would just take too darn long. OK. So now, one more thing we'd like to calculate: we have figured out the surface concentration from Irvin's curves, and we have the Dt product.
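Pulling those steps together, here is a minimal sketch of the whole design calculation. The surface concentration is the value read off Irvin's chart (not computed), the boron Arrhenius parameters are the same assumed illustrative values as before, and small differences from the lecture's numbers are just rounding:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Design constraints from the example
RS = 900.0      # sheet resistance, ohms per square
XJ = 3.0e-4     # junction depth, cm (3 microns)
CB = 1.0e15     # background concentration, cm^-3

# Step 1: average conductivity, the x-axis of Irvin's curves
sigma_bar = 1.0 / (RS * XJ)
print(f"average conductivity = {sigma_bar:.2f} (ohm cm)^-1")  # ~3.7

# Step 2: surface concentration read off the Gaussian Irvin curve
CS = 4.0e17  # cm^-3, from the chart, not computed here

# Step 3: invert CB = CS * exp(-XJ^2 / (4 Dt)) for the Dt product
dt_product = XJ**2 / (4.0 * math.log(CS / CB))
print(f"Dt product = {dt_product:.2e} cm^2")  # ~3.7e-9

# Step 4: pick a temperature and back out the drive-in time.
# Assumed illustrative boron values: D0 = 0.76 cm^2/s, Ea = 3.46 eV.
d_boron = 0.76 * math.exp(-3.46 / (K_B * (1100.0 + 273.15)))
print(f"D(1100 C) = {d_boron:.2e} cm^2/s")                    # ~1.5e-13
print(f"drive-in time = {dt_product / d_boron / 3600:.1f} h")  # ~6.8

# Dose for this Gaussian (discussed on the next slide):
q_dose = CS * math.sqrt(math.pi * dt_product)
print(f"implant dose = {q_dose:.1e} cm^-2")  # ~4.3e13
```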
It will be interesting to calculate the initial dose, the area under the curve, for the Gaussian profile. We have everything we need. We know the dose Q is the surface concentration c sub s times the square root of pi Dt, and we have all those quantities. We can plug them in, and we get Q equal to 4.3 times 10 to the 13th per square centimeter; that's the integral under the Gaussian. It's a dose well within the usual implant range, so it can be implanted. And this dose can easily be implanted into a very narrow layer close to the surface; that narrow layer might only be 1,000 angstroms deep. That justifies our implicit assumption, in using a Gaussian profile, that the initial distribution is essentially a delta function: maybe a few thousand angstroms deep, going to a final junction depth of 3 microns. So using the Gaussian profile certainly seems justified. OK, let's go on to slide number 13. That was the case where we implanted the dose. Now let's take the case where we use a gas-phase pre-dep step, as an example, to get the boron initially into the wafer. Say that's done at 950 degrees. You can then use the charts in your book, or in the handouts, to look up the solid solubility of boron at 950, and you'll see it's pretty high: 2.5 times 10 to the 20th. At that temperature, we can also look up the diffusivity; it's about 4 times 10 to the minus 15th. So you can solve for the time from the complementary error function dose Q; you have everything you need in that equation, 2 c sub s over the square root of pi, times the square root of Dt. The time required to deposit that dose into the wafer, the pre-dep time, comes out to about 5.5 seconds. You can now calculate the Dt product of the pre-dep: it's 5.5 seconds times the diffusivity at 950, which we saw is 4 times 10 to the minus 15th. So the Dt of the pre-dep is about 2 times 10 to the minus 14th. Comparing that to the Dt of the drive-in, which we solved for earlier as 3.7 times 10 to the minus 9, the pre-dep Dt is many orders of magnitude smaller. Again, this tells us that the assumption of Gaussian diffusion for the drive-in is justified, even if we've done a gas-phase pre-dep. OK, so that's the end of slide 13. It certainly gives you an idea of how these Irvin curves can be used to design a process. Alternatively, if you go back momentarily to slide 10, the problem might be specified differently: you might be given some surface concentration and background concentration, and have to back out what the effective conductivity of the layer would be. So there are different ways of using those curves. So let's go on now to slide number 14. I want to shift gears a little, because we have more or less covered the few cases for which analytic solutions can be obtained. Those are important cases, but they apply primarily when the dopant concentration is low or when certain special conditions are satisfied during pre-deps. It turns out most cases cannot be solved analytically. Those solutions have to be obtained on computers using numerical methods. So I want to introduce you, in the next few slides, to some of the simplest types of numerical methods, so you have an idea, or a feel, for how you yourself could implement these numerical techniques if you needed to.
There are textbooks on this topic if you want more detailed information on numerical solutions of the diffusion equation. In particular, there's a book by J. Crank called The Mathematics of Diffusion that has a chapter on numerical methods for this equation. The nice thing about numerical methods is that they can handle arbitrary initial profiles, arbitrary boundary conditions, and the case where the diffusivity varies in space, so you don't have to make any detailed assumptions. So if you have a concentration-dependent diffusivity, which will be the case we talk about in later lectures when the dopant concentration is quite high, you can use numerical methods. What I'm picturing here on slide number 14 is a very schematic picture of atoms moving around between planes in a lattice. Let's use this picture to get some physical insight into the diffusion process. Each plane here is indicated by a vertical bar, labeled 0, 1, 2, 3, 4. The distance between these planes is delta x, which has units of length. And n sub i here, so n0, n1, n2, is the planar atomic density of the i-th plane; it's the number of atoms per square centimeter in a given plane. c sub i is the average volume concentration at any point. You can convert between the two: c sub i is just n sub i over delta x, or n sub i is c sub i times delta x. So let's go on to slide 15. At the top, I've reproduced that picture, a smaller version of it, just reminding ourselves of the relationship between ci and ni. Now, as we look at this lattice, we know the atoms are relatively fixed in it. But they do vibrate about their average position in a plane, at some frequency, nu sub d, which is called the Debye frequency. That number is typically on the order of 10 to the 13th per second, so it's a pretty high frequency. They're vibrating about their average lattice positions at a finite temperature. Now sometimes, during the vibration, an atom will actually surmount the energy barrier that exists in going from one lattice position to the next, or one plane to the next. So it will hop to an adjacent plane. There's an energy barrier, which we'll call e sub b, to get from one plane to the next. And this hopping frequency we call nu sub b; it's just the Debye frequency nu sub d, the vibration frequency, times the Boltzmann factor, the exponential of minus eb over kT, where eb is the energy barrier. So this nu sub b gives us an idea of the frequency of atoms hopping from one plane to the next. What we assume in this derivation is that there's equal probability of jumping right or left, because this is a random process. So if we look at any given plane, the number of atoms jumping to the right per unit time is just nu b over 2 times the number of atoms in that plane, n: half go right. The number jumping left per unit time is likewise nu b over 2 times n. Those two are equal. Now, let's do the bookkeeping and look at the number of atoms crossing a particular plane, the plane at i equals 2 in the above diagram, per unit time. Well, we can write that as a flux, and f is the equation at the bottom.
That's just going to be the number of atoms per unit time jumping to the right minus those jumping to the left. So you write that as minus nu b over 2, times n2 minus n1; again, assuming equal probability of going either way. Then you can convert between atomic density and atoms per cubic centimeter just by using the definition that n is equal to delta x times the concentration. You then have an expression for the flux crossing plane i equals 2. And if you multiply and divide by delta x, you end up with an equation saying this flux is equal to, looking at the right-hand side, minus nu b delta x squared over 2, times delta c over delta x: something that's starting to look like a slope. So you immediately see some kind of Fick's law here, where a flux is related to a slope times a constant out in front. So that's the bottom of slide 15. Let's go on to slide 16. I've just repeated the equation we had, but added one more equality on the right. We set that flux equal to a diffusivity, a number d, times the slope delta c over delta x, where we define the diffusivity in this particular equation to be nu b delta x squared over 2. So there is Fick's first law: the flux is equal to minus a diffusivity, a constant number, times delta c over delta x. And the diffusivity has some atomic-scale meaning to it. In fact, it's proportional to the hop frequency, nu sub b, and therefore related to the energy barrier to hop between adjacent positions, or adjacent planes. So that gives you some idea, at the atomic scale, of what goes into determining the diffusivity d. OK. In addition to giving us an atomic-scale view, we can use this type of formalism to derive a numerical solution to Fick's equation. To do that, we'll go on to slide number 17, which is a slightly different sketch, but the exact same idea. What I'm showing at the bottom left is a plot of concentration as a function of x, so distance. Let's say it has some shape, this dark decreasing black line, decreasing from left to right. What we'll do is discretize this and look at discrete positions in space. We'll look at concentrations c0, c1, and c2, and the spacing between those planes is a constant, just like before, delta x. So what we're going to look at is: in a certain time interval, delta t, what is the number of atoms crossing a certain plane per unit area? For the plane labeled r, we call that q sub r. It's just the flux f sub r times delta t: the number of atoms crossing unit area in the given time interval. That comes basically from the definition of what flux is. Then we can write in the flux. The flux crossing plane r is just minus the diffusivity d times the concentration difference across that plane, which is c1 minus c0, divided by delta x. So we can approximate it that way. What's the number crossing plane s? We can do the exact same thing: it's just minus the diffusivity times delta t times, this time, c2 minus c1 over delta x.
So we can then do some bookkeeping and calculate the net number of atoms accumulated in that shaded region between plane r and plane s in the time interval delta t. It's just qr minus qs. Subtract those two expressions, and what you end up with is equal to the diffusivity d, times the time interval delta t, times, in parentheses, c0 minus 2 c1 plus c2, that whole quantity, over delta x. Now, if we divide that net gain in the number of atoms by delta x, as shown below, that is the net gain in concentration in the shaded region: delta c. It's just qr minus qs, that whole quantity, divided by delta x. So substituting in qr minus qs, you end up with the equation near the bottom, which says the net gain in concentration, delta c, in this shaded region is just the diffusivity d times delta t, times the quantity in parentheses, which is built from the concentrations in those three regions, divided by delta x squared. What it says is: at a given point c1 on this curve, if I know the concentrations of the neighboring points, c0 and c2, I can calculate a new concentration at c1 after a time step delta t, just by knowing the concentrations of its neighbors at this time step. So if we go on to slide number 18, we can use that to write down something called an explicit finite difference formula. That's the equation shown near the top of slide 18. It says that the concentration at any given point i, in one of these slabs, at a new time, t plus delta t, on the left-hand side, is equal to the concentration at time t, one interval before, plus the term on the right that gives the change in concentration in the i-th slab after a time step delta t. That term is just d, the diffusivity, times delta t, divided by delta x squared, times the quantity in parentheses. Again, to evaluate the quantity in parentheses at any point ci, all you need to know is the concentration at that point and at its two nearest neighbors, however we've discretized this continuous function into little intervals delta x. So that is an explicit formula that you can use to evolve a concentration profile over time. If you look at it for a few minutes, you can notice the similarity of that difference formula to the differential equation for diffusion: in the equation shown in the center, partial c partial t is equal to a constant d times the second derivative of concentration with respect to x. And in fact, that second partial derivative with respect to x can be numerically evaluated as the term on the right-hand side of the first equation on slide 18. That is the numerical solution to the differential equation. I won't derive it here, but if you go to Crank's book, or any of the books on numerical methods, you'll find that the method is numerically stable, works from a numerical point of view, and gives reasonable answers under a particular condition. The condition is on the quantity r, where r is the multiplier, d delta t over delta x squared, in front of the concentration change in a slab at each time step delta t.
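To make this concrete, here is a minimal sketch of that explicit scheme in plain Python. It enforces the stability condition on r = D delta t / delta x squared that's discussed next, and the grid, profile, and diffusivity values are made up for illustration:

```python
import numpy as np

def diffuse_explicit(c, d, dx, total_time, r=0.4):
    """Evolve a 1D concentration profile by Fick's second law with
    constant diffusivity d (cm^2/s), using the explicit update
        c_i(t + dt) = c_i(t) + r * (c_{i-1} - 2 c_i + c_{i+1}),
    where r = d * dt / dx**2 must stay at or below one-half."""
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    dt = r * dx**2 / d                 # time step implied by chosen r
    c = np.asarray(c, dtype=float).copy()
    for _ in range(int(total_time / dt)):
        c[1:-1] += r * (c[:-2] - 2.0 * c[1:-1] + c[2:])
        c[0], c[-1] = c[1], c[-2]      # zero-flux boundaries
    return c

# Illustrative drive-in: a shallow box profile spreading into the bulk.
dx = 1.0e-6                  # grid spacing, cm (10 nm)
profile = np.zeros(500)      # 5 micron deep simulation region
profile[:20] = 1.0e19        # 0.2 micron initial doped layer, cm^-3
final = diffuse_explicit(profile, d=1.5e-13, dx=dx, total_time=600.0)
print(f"dose before: {profile.sum() * dx:.3e} cm^-2, "
      f"after: {final.sum() * dx:.3e} cm^-2")  # roughly conserved
```

The dose printout is just a sanity check that the zero-flux boundaries approximately conserve atoms.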
That multiplier, d delta t over delta x squared, indeed has to be less than or equal to one-half. If it gets too large, the whole method breaks down, becomes unstable, and you get nonsense: totally meaningless solutions. So if you're doing this numerically, make sure you adjust your time step delta t to be small enough that this holds, and your delta x to be of an appropriate size with respect to delta t, so you stay within the numerically stable regime. It's a very powerful technique, and it's very simple; that's one of the nice things about it. And some numerical method is certainly going to be required when the diffusivity is not a constant. If you were to sit down, you could write your own simple simulator using this relatively easy-to-use finite difference formula. It is not the particular type of numerical method used in SUPREM IV; SUPREM IV has a more sophisticated numerical solution method for the diffusion equation. This one is much more simplified, and this particular method can be slow; it has its own problems. But the beauty of it is its simplicity, and it gives you an idea of how one might use numerical methods to obtain solutions of the diffusion equation. OK. Let's go on now to slide number 19. What I want to talk about from now on, for the rest of this lecture and the next couple of lectures, are the so-called modifications of Fick's laws. Fick's laws are introduced in a lot of basic courses in materials science or physics. But when we are diffusing dopants in semiconductors in practical cases, there are a number of cases where Fick's laws need to be modified to take into account the things that are really going on in semiconductor processing. The first thing we'll talk about, here on slide 19, is the so-called electric field effect. This occurs when the doping concentration is higher than n sub i, the intrinsic carrier concentration at the diffusion temperature; again, at 1,000 degrees, n sub i is close to mid 10 to the 18th or so. If the doping concentration gets higher than that, you can start to have a non-uniformity in the charge distribution in the semiconductor, and electric field effects can become important. This slide is a schematic demonstration of how electric fields can arise when dopants are diffusing in silicon. It's a plot of concentration, schematically, versus depth, and the red line represents the profile at a given point in time for, say, arsenic that's diffusing. Arsenic is a donor: it has one extra electron floating around it, but the atom itself, in the absence of that extra valence electron, is positively charged; it's an ion. As this arsenic diffuses, the electrons associated with the heavily doped arsenic region can diffuse ahead of the dopant. The blue line is meant to represent the distribution, the diffusion profile, of the electrons. So the electrons, and holes in the acceptor case, diffuse more rapidly than their associated donors and acceptors.
Because of that more rapid diffusion, the profile of electrons will be a little bit deeper, diffusing ahead of the dopant it's associated with, until it reaches a steady state condition where the drift flux from the internal electric field balances the diffusion flux. If you just look at this schematic picture, you'll see, from your elementary electronics or solid state physics, that there is indeed an electric field induced by the fact that there's a net positive charge in the left region. That is, if you take all the arsenic donors and subtract the electron concentration there, there's a net positive charge on the left and a little net negative charge on the right, and that produces an electric field pointing from left to right. That electric field tends to cause electrons to drift from right to left, right? Because they drift against the field. So they're drifting from right to left while they're diffusing from left to right, because of the concentration gradient. The electric field will build up to the point where there's a steady state, where the drift flux just balances the diffusion flux for the electrons. And again, the physical origin of this electric field is that electrons and holes are charged and have somewhat higher mobility; they can move around the lattice much more readily than the dopant ions themselves. And again, this only happens in the particular case where the concentration of the dopant is higher than n sub i. Otherwise, the background electron concentration is set by the temperature; it's just a constant throughout space, and the electric field would not develop at the edge of this diffusion profile. Let's go on to slide number 20. What we do is modify Fick's first law, in a way that's very similar, pretty much identical, to what we do in our electronics or solid state physics classes. We say that the total impurity atom flux in the presence of both a concentration gradient and an electric field can be written as a sum of two fluxes. One is f, the normal Fickian diffusion flux, which is given by minus d, partial c, partial x. The other is f prime, the drift flux due to the electric field. For the drift flux, I wrote down z, the charge number, which would be 1 for an electron and in general depends on the number of electronic charges associated with whatever is drifting, times the mobility mu, times the concentration c, times the electric field. That is the flux due to drift. It's not a diffusion process; drift is a different physical process. You add these two together to get the total flux. So we have a non-Fickian term on the right. But because of the Einstein relation, when we're talking about carriers like electrons and holes, we know that the mobility is directly proportional to the diffusivity, with proportionality constant q over kT. So mu is equal to d times q over kT, and we can substitute q over kT times d for the mobility in the upper right-hand equation. We can also substitute in the fact that the electric field e is minus the gradient of the potential, where the potential psi can be related to the carrier concentration n as minus kT over q times the ln of n over ni.
So basically, by substituting the expression proportional to diffusivity for mu, and substituting for the electric field, how it relates to the potential and therefore to n over ni, we get the equation near the bottom for the flux f. It's the usual Fickian term, minus d partial c partial x, minus d times the concentration c times this new term, partial partial x of the ln of n over ni. So this is the new flux equation. Now we need an expression for the carrier concentration n. And you have to be careful: it's not just equal to nd, because ni can be of order nd here. So we need to use the full expression for the carrier concentration. The bottom equation here is n equals one-half times the quantity square root of nd squared plus 4 ni squared, plus nd. That comes from charge neutrality. Given that, there's a relationship between n, the carrier concentration, and the concentration of the dopant c, because again, the concentration c is just nd minus na, the net concentration. So we have a relationship between n and c. We're going to use that to write this flux equation f only in terms of the concentration c. So we go on to the next page, slide 21. What we find when we carry this through is that the total flux f is written as minus something called h, times the diffusivity of the dopant d sub a, times partial c, partial x. So we've been able to write down an equation that looks just like Fick's first law, but with an h factor multiplying it. And this h factor turns out to be equal to 1 plus c divided by the quantity square root of c squared plus 4 ni squared. So basically, in the presence of an electric field, it looks just like we have an enhancement in the diffusivity of the dopant by a factor called h. So interestingly, we see that the electric field effect on the dopant that is creating the field enhances that dopant's own diffusivity, and it turns out h has an upper bound of 2. h is not going to be any larger than 2, which means the electric field term can enhance the diffusivity of the dopant that causes the field by at most a factor of 2. That happens when the doping concentration c is much, much greater than ni, as you can see by substituting in and calculating h under those conditions. In the special case where c is much less than ni, h goes back to being 1. So how do we get to this equation with the simplified h factor? Well, if you go on to slide number 22: I'm not going to go through all the algebra, but basically we used a particular fact, shown along the top of slide 22, that partial partial x of the logarithm of x is equal to 1 over x. In that earlier drift term, there was a term that went partial partial x of ln of n over ni. So by going through the algebra shown on slide 22, we can substitute for the carrier concentration n and find an expression for partial partial x of ln of n over ni. You see that in the middle of the slide. And then we substitute in that the dopant concentration nd is equal to c.
We're then able to see that the total flux is the usual Fickian term, minus d times partial c partial x, minus this other term, which itself ends up being written in terms of partial c partial x, so the whole thing collapses to minus d partial c partial x times the h factor, where h is equal to, as shown at the bottom, 1 plus c over the square root of c squared plus 4 ni squared. So that justifies the derivation of the h factor on slide 22. OK, just going back to slide 21 for one second: I want to remind ourselves that the total flux f in the presence of a high carrier concentration can be written just like Fickian diffusion, but with a multiplier h on the diffusivity of the dopant atom a. Let's go on to slide 23 now. So far, that's the amount of enhancement you might get in the high-concentration species. But you may have two different species diffusing. And in the case where the species are at different concentrations, the electric field term can actually cause an even larger change in the diffusivity of the low-concentration species. So let's look at the particular example shown on slide 23. What we're showing here is concentration as a function of depth, and the red curve is the high-concentration arsenic profile. The initial arsenic profile, which was implanted or something like that, is shown by the solid red curve. The background doping is initially constant, and it's p-type; that boron is shown by the solid blue line. Now, we're going to simulate, including electric field effects, the diffusion of the arsenic and the boron at 1,000 degrees. Well, it turns out the arsenic does indeed diffuse, as shown by the dashed line. But the electric field it produces, right near the steep edge of its profile, can cause a flux of boron ions; it can cause the boron itself to move. The final boron profile, shown by the blue dashed line, has this little hump in it and then a dip. That little dip occurs right near the junction, right where the arsenic and the boron cross. There's no way, by Fickian diffusion, you would ever predict that a constant profile of boron would end up looking the way it does in this plot. Again, that's because of the second term, the electric field term, the non-Fickian term. To think about how that occurred, look at the electric field arrow drawn from left to right; again, it's drawn from net positive to net negative charge. That direction of e pulls the boron. And remember, the boron ions, as they diffuse, are b minus; they're acceptors with a negative charge. So they're going to be pulled to the left by the electric field. They'll be pulled into the n plus region, depleting some of the boron in the region past the junction. And so you end up with this hump and dip, which is a non-Fickian type of motion, induced by the electric field created by the higher-concentration arsenic species. So this can have quite an effect on the diffusion profiles near a junction, and we see it all the time in our devices. In fact, if you go on to slide number 24, it shows a simulation example of a MOSFET. These are two SUPREM simulations, both around 1,000 degrees. You'll recognize the polysilicon gate here in magenta.
The source and drain regions are arsenic, and they are shown in black. If you look first at the left, the case without the electric field, you see these different contours: green, yellow, light orange, and then dark orange or red. We see the boron uniformly doped, with its contours shown by the different colors. Now look at what happens on the right, in the presence of the electric field: instead of these uniform lateral doping contours for the boron, the electric field acts on the boron and changes the shape of its doping distribution. In fact, it dominates the shape of the boron distribution, particularly near the source/drain junctions of the MOSFET device. The boron basically gets pulled in underneath the source and drain, and there's quite a bit of loss of boron from the channel region into the source/drains due to the electric field effect, none of which is predicted on the left-hand side. So even though the h factor enhancing the arsenic diffusivity is limited to at most a factor of 2, so there's not too much effect on the arsenic junctions and the black regions don't change that significantly, the field dominates the diffusion of the boron underneath, which is at a lower concentration. So let's go on to slide number 25. I just want to summarize this second lecture on diffusion. We reviewed a few cases of simple constant diffusivity. These generally apply to low doping concentrations; in a CMOS process flow that usually means the diffusion of the CMOS wells, which are fairly lightly doped, say below mid 10 to the 17th, or below 10 to the 18th. They're usually diffused in at very high temperatures, so n sub i is large, and they satisfy the condition that the dopant concentration is less than ni; so you have constant diffusivities. But beyond that case, it turns out most diffusion problems in devices need to be solved numerically. We introduced a very simple but powerful method for numerical solution of the diffusion equation: an explicit finite difference formula. It's not that sophisticated, and it's not the formulation used in SUPREM IV, but it gives you an idea if you needed to write your own simple simulator. We talked about the first type of modification we need to make to Fick's first law, which was for electric field effects; these can enhance the diffusivity of a high-concentration species by up to a factor of 2, but can dramatically impact the motion of species at lower concentrations. What we're going to talk about next time are further corrections to the simple Fickian diffusion we've been discussing, and more realistic dopant profiles, such as the case of concentration-dependent diffusivity. OK, that's all I have for today, and thanks very much.
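To collect the electric-field result from slides 20 through 22 in one place, here is a reconstruction, in standard notation, of the equations read out above:

```latex
% Field-modified flux: Fickian form with an enhancement factor h
F = -\,h\,D_A\,\frac{\partial C}{\partial x}, \qquad
h = 1 + \frac{C}{\sqrt{C^{2} + 4 n_i^{2}}}, \qquad 1 \le h \le 2,
% using the carrier concentration from charge neutrality:
n = \tfrac{1}{2}\!\left(\sqrt{N_d^{2} + 4 n_i^{2}} + N_d\right).
```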
MIT 6.774 Physics of Microfabrication: Front End Processing (Fall 2004)
12. Ion Implantation and Annealing: Analytic Models and Monte Carlo

JUDY HOYT: So before we start with today's formal lecture, there are a couple of things I want to talk about. One is the schedule, which I have shown up here. You'll notice, if you remember, that I've actually shifted around one of the lectures. Here we are today, I think October 19, and we were supposed to have the lecture on the SUPREM-IV process simulator. I decided I wanted to do that lecture later, primarily because I want to have talked about transient enhanced diffusion and those effects first. A lot of the examples I show on how to use SUPREM relate to transient enhanced diffusion, and I realized I wouldn't have introduced it yet. So instead we're going to have the ion implantation lectures; there are four on ion implantation, two of those on TED, and then we'll do that lecture. As I mentioned, also because of the Red Sox game last night, I don't have homework number 4 ready to go out to you yet. We can blame the Yankees for that. But I'll have it next time. There is another handout as well before we go to the formal lecture, handout number 20. It's a one-page document; hopefully you picked it up. I want you to start thinking about your term projects, and this is a good time to start. What I've listed here is maybe 15 or so potential topics that you might want to use for your term project. This is not an exhaustive list; these are just examples. So for example, you could study dopant diffusion in silicon germanium. You could study laser annealing of ion implants for shallow junctions. These are all things we don't get a chance to cover in any great detail, selective epitaxial growth of silicon or silicon germanium being another, but they're related to front end processing, and I'd like you to learn more about one in depth. So you pick a topic or you make up a topic. Any topic you pick or make up needs to be approved by me. Over the next couple of weeks, I'm going to bring in a clipboard, and you'll have a chance to sign up for a topic. I also want to know whether you want to give an oral presentation or a written report. The written report needs to be 20 pages or less. An oral presentation in class is probably going to be something like 15 to 17 minutes plus questions. And I need to start scheduling, so I need to know whether you would prefer to do your final term project as written or oral. The term project is not meant for you to read one textbook or one paper and then regurgitate it. The whole idea is that you go to a library, you go online, and you read a series of research papers; maybe some texts, but most of these topics are new enough that you'll have to read research papers. Then you write up a report or give an oral presentation about the topic. So I don't want you just to regurgitate one paper. And the other thing is, if you do decide to do a presentation, you'll need to have handouts for the class, just like the handouts I give you, though not as long. If you need help getting the handouts xeroxed, my assistant will help you with that. So even if you do the presentation, there'll be some preparation of graphical information. That's to get you started thinking. And as I say, next time I'm going to bring the clipboard and you can start signing up for topics. OK.
So let's go to the formal lecture for today, which is handout number 19. As I mentioned, we're not doing the SUPREM-IV lecture today; I shifted the schedule. We're going to talk about chapter 8. The next four lectures are going to cover chapter 8, so I hope you will read it. Chapter 8 is about ion implantation and annealing. We've already had an introduction to the basic process flow in CMOS. We know how to fabricate wafers and clean them. We discussed thermal oxidation in detail, along with two-dimensional effects. And we spent the last four lectures talking about diffusion and how we model it. We're going to come back to diffusion after we've talked about ion implantation. But first, I'm going to spend the next couple of lectures talking about the basic concept of ion implantation and how it's used, the impact of the crystal structure (there's a phenomenon called channeling), how ion implantation is modeled, damage annealing, and transient enhanced diffusion. Ion implantation is somewhat unique in this course, or in any course on processing: it is probably the only process that we can actually model from more or less first principles. That is, if you look at our [INAUDIBLE] models for oxidation, there are a lot of chemical processes involved, and a lot of unknown constants here and there; the models tend to be very empirical. Ion implantation, on the other hand, is based largely on fundamental physics. So it's one of the few processes you can actually model without too many free parameters. You'll see there are some parameters, but not as many as elsewhere. For people who like first-principles modeling, it's somewhat satisfying. OK. So let's go on to slide number 2, showing what I call the old-fashioned or traditional approach to getting dopants into a wafer. You would coat it with oxide, and you would open up a hole. And we talked the last few times about doing what's called a predeposition. A predeposition was a high-temperature in-diffusion from the gas phase. Typically, in a pre-dep, you would have in the gas phase some kind of dopant gas, like phosphine. You would keep the concentration of the dopant at the surface at the solid solubility at that temperature, a very high value, and you would do it for a certain amount of time to put in a certain controlled dose. Then you would turn off the gas that had the dopant in it, and you would do a deep diffusion drive-in. A big problem with this method, though, even though it was the original method, is that it was very limited: it's very hard to control the actual integrated dose of the dopants that were put into the wafer. If you had an extra few minutes in the furnace, or the furnace temperature was slightly different, you would get a completely different dose. So this method was good in the old days, but it gave way some time ago to ion implantation. With ion implantation, the good part is that it has tremendous control of the dose. The bad part, or the issue we have, is that it does introduce damage, and we have to anneal that damage and repair the crystal in order to make a good device. So let's look at slide 3. From this slide, we can understand why ion implantation was introduced into manufacturing in the late 1970s. And this is an interesting history.
And it's also an interesting example because it gives you an idea of-- if you have a new idea-- let's say you're working here on your PhD and you have a new idea for some great new process. The new process, whatever it might be, has to really be an enabling technology. It has to do something that you cannot do by any other method, because all new processes have some bad aspects to them. They cost money. There's development time. Ion implantation adds damage. Why would anybody want to do it? Well, this is why: because it did something that they couldn't do in the past. And basically, in the late 1970s, ion implantation was introduced as the only means to accurately control the threshold voltage of a MOSFET. Without ion implantation, people couldn't make circuits because they could not control the VT at the level that they needed to be able to control it. So if you go back and look at this, this is an equation we introduced a number of lectures ago-- we didn't derive it. I just wrote it up there and you'll have to take it as truth. But what it says is that the threshold voltage of the MOSFET depends on a number of terms-- the flat band voltage. It depends on the doping in the substrate, the oxide thickness through C ox, and things like that. There's one last term here, which I circled, which is the electronic charge q times phi, where phi is the dose of a dopant that's ion implanted into the surface of the wafer, divided by C ox, the oxide capacitance. So this extra term-- this isn't here unless you're doing an implant-- but that extra term can be added in. And if you can very precisely control phi, the dose, then you can very precisely control the threshold voltage and set it to a level that you need. So on the left-hand side, I'm just showing a schematic, just reminding you from the first lecture, what people do for a VT adjust implant. For example, if you're making an N-MOSFET on the right-hand side, you would ion implant boron into a very, very thin surface region. You block off the PMOS area with photoresist. And you would implant boron to adjust the threshold voltage. Typically, doses are pretty low, 10 to the 12, 10 to the 13, something like that. And energies in this range-- we'll talk about dose and energy in more detail in the next couple of lectures. The nice thing is that this dose phi can be accurately controlled, and that's because it's based upon integrating a current, the electronic charge that goes into the wafer. That's because of the way we do ion implantation. So it turns out the dose phi is just 1 over the area, where the area is the implanted area, times the integral over time of I-- that's supposed to be an I, standing for the beam current that's collected into the wafer-- divided by the electronic charge, q. So because the ions coming in are charged, they represent a current flow into the wafer. And as the current flows into the wafer, you can measure the current with an ammeter very accurately. And then you can integrate it over time. So you start the implant by opening up a shutter, and then you integrate over time. Each charge that comes in gets counted, and then you stop. So you can control the dose very, very precisely, because it's based on charge counting techniques, not on some chemistry-- pushing a wafer into a hot furnace, hoping the temperature is right. Oh, you pushed it for ten seconds too long? Pull it out. It's more of a physics-based approach. From a manufacturing point of view, that was an important breakthrough in the fabrication of MOSFETs.
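To make that circled term concrete, here is a minimal sketch of the VT shift, delta VT equals q times phi over C ox, for a typical adjust dose. The oxide thickness here is an assumed number for illustration, not one from the lecture.

```python
# Back-of-the-envelope VT shift from a threshold-adjust implant:
# delta_VT = q * phi / C_ox (the circled term in the VT equation).
q = 1.602e-19         # electronic charge, C
eps_ox = 3.45e-13     # permittivity of SiO2, F/cm
t_ox = 10e-7          # assumed oxide thickness: 10 nm, in cm
C_ox = eps_ox / t_ox  # oxide capacitance per unit area, F/cm^2

phi = 1e12            # implanted dose, atoms/cm^2 (typical VT-adjust range)
delta_VT = q * phi / C_ox
print(f"C_ox = {C_ox:.2e} F/cm^2, delta_VT = {delta_VT:.2f} V")
# ~0.46 V for these numbers, so a 1% dose error moves VT by only ~5 mV.
```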
Let's just do an example on slide 4 that illustrates the control that you can get during ion implantation. So let's say that we need to calculate the dose rate. OK, so that'll be the number of atoms per square centimeter going into the wafer per second, or per unit time. I'm going to assume that we have an ion beam current-- so that's the beam coming into the wafer. The beam current is 0.1 microamp. OK, and we measure that very accurately. It's a tenth of a microamp. And we assume we're implanting an area that's 20 by 20 centimeters-- we just know that's our area. So basically, I can calculate the dose. And just like that formula we just saw, it's 1 over the area times the integral of the beam current-- which is 0.1 microamps-- over the electronic charge, dt. Now if the beam current doesn't vary with time-- say, it's a constant 0.1-- then the integral is easy. It's just the current times the total time of the implant divided by the electronic charge times the area, for a constant current. OK, so for the dose rate, just take the dose and divide it by the total time. The average dose rate, dose divided by t, is just the beam current divided by q, the electronic charge, times the area. So you end up with a dose rate. For that beam current, you're putting in about 1.6 times 10 to the 9th atoms per square centimeter per second. So that's pretty good control. Let's say I want to implant 10 to the 12th atoms per square centimeter. Well, you can just divide the dose rate into 10 to the 12th, and you figure out how many seconds it would take. It's a reasonable amount of time. And then you can adjust your beam current and the time to make it something that's physically realizable. So this has tremendous control, very good accuracy, and it's very reproducible. It has good dynamic range, because you can adjust your beam current down to something like 0.01 microamps if you want to do a very, very low dose. You're putting the ions in at a very slow rate, and you integrate them up. Or, if you have a very high dose, you might have to have a milliamp of beam current going into the wafer. Either way, you get very, very good control. So let's compare this ion implant case to the case where I want to use the old-fashioned method. So instead of implanting boron, I'm going to do a boron pre-dep at 1,000 degrees C. And again, in order to control this, we don't want it to depend on the flow rate of the gas, so we typically make it so that the surface concentration is at the solid solubility for boron, which is about 10 to the 20th per cubic centimeter. So remember, when you're doing a pre-dep, you get a complementary error function profile, and the dose, the integral of that, is given by this equation. So the dose is 2 times the surface concentration divided by the square root of pi, times the square root of Dt, where D is the diffusivity of boron in silicon at that temperature. And so we can look that up in the chart-- it's about 10 to the minus 12 centimeters squared per second at that concentration and temperature. So in one second, you've already put into the wafer 10 to the 14th atoms per square centimeter, a very high dose. So there's no way you could really do it-- if I want an implant of 10 to the 12th, there's just no way you can get that low of a dose with any kind of control by using a chemical pre-dep technique. So again, this was the big breakthrough in ion implantation.
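Here is a minimal sketch of both hand calculations side by side, using the numbers from this slide. One assumption to flag: the dose formula counts each ion as one electronic charge, that is, singly ionized ions.

```python
import math

# Implant dose rate for a constant beam current: dose rate = I / (q * A).
q = 1.602e-19       # electronic charge, C
I_beam = 0.1e-6     # beam current, A (0.1 microamp)
area = 20.0 * 20.0  # implanted area, cm^2

dose_rate = I_beam / (q * area)  # atoms/cm^2/s, assuming singly charged ions
print(f"dose rate = {dose_rate:.2e} cm^-2 s^-1")        # ~1.6e9

target_dose = 1e12  # atoms/cm^2
print(f"time for 1e12 dose = {target_dose / dose_rate:.0f} s")  # ~640 s

# Compare with the gas-phase pre-dep: Q = 2 * Cs * sqrt(D * t / pi).
Cs, D, t = 1e20, 1e-12, 1.0   # solid solubility /cm^3, cm^2/s, 1 second
Q_predep = 2 * Cs * math.sqrt(D * t / math.pi)
print(f"pre-dep dose after 1 s = {Q_predep:.1e} cm^-2")  # ~1e14 already
```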
People found they needed to control threshold voltage, and they needed to implant relatively low doses very accurately to get a very good number for VT. And pre-dep was not going to be a way to do it, so ion implant filled that need. OK, so let's go on to slide number 5. Before we get into the details of the physics, we'll just talk about some basic concepts. As I mentioned, it's the dominant method used today to introduce dopants, in spite of the huge amount of damage it does to the lattice. And these are all the good things about it. I've already mentioned control, but also dynamic range, a large range of doses. It's essential to maintain or control the VT. You can make a buried or retrograde profile. Remember, with a complementary error function, if you plot the shape of the profile as a function of depth-- so this is depth, and this is the concentration-- in diffusion from the surface, you always get a higher concentration at the surface than you do here. But with an implant, you can actually-- if you want, you can make it look like that. You can bury the peak and make it look retrograde. So you have the ability to do that. You have the ability to control that. It's a low-temperature process. It happens at room temperature, which is an advantage. And there's a wide choice of masking materials. If you're trying to mask a diffusion, well, there's only a few materials you can put down, either oxide or nitride, that might keep the dopants from diffusing through there. If you're trying to mask an ion implant, you can use photoresist, a very simple, low-temperature material. There are some disadvantages, though. Of course, we mentioned that you are bashing up the crystal, and you're damaging it. And you have to remove that damage by heating it, and the heating causes some problems with diffusion. In fact, there's an anomalous type of diffusion due to ion implant damage that we're going to study, called Transient Enhanced Diffusion, or TED, that happens due to the injected defects from the implant. Nevertheless, people have found out how to model TED, and they found, in some cases, ways around it. And the other disadvantage is, if you have insulating layers-- you can have oxide on the wafer-- it can tend to charge up, so you have to be a little bit careful. But there are ways to flood the wafer with an electron gun and all kinds of things to keep your wafer from charging up. If we go to slide 6, this is a schematic diagram of what an ion implanter looks like, at least a sort of traditional diagram of how an implanter is designed. What you have here in this big box are two key things. And all of this is in a vacuum system, so this entire thing is being pumped down by a pump. You have an ion source. So that has a gas in it or something that ionizes the species of interest. Maybe you have a gas in there-- you put in BF2, OK, so BF2 gas or whatever. And then you're going to ionize it, so you have a way, with a hot filament, of creating ions, and you create B double-plus, B-plus, fluorine ions, BF2-plus. You create all these different ions, of which there are some that contain boron, which is what you want to implant, let's say, in this example. The ion source extracts the ions at a given energy, say, from 0 to 30 keV. So we can set the energy of the ions coming out just by knowing this extraction voltage, the voltage that we apply to this grid. So we know the energy of the ions coming out, and then we do a standard-- what's called an E cross B filter. This happens in any kind of mass analyzer.
This is what's happening when you do secondary ion mass spectrometry or whatever. But if you know the energy of the particles coming out, and you bend them in a magnet, you can mass analyze them and only pick those ions that have the right mass. So I can be sure that I just implant boron. I don't want to implant fluorine. I don't want to implant nitrogen or oxygen or carbon or anything else that happens to be in the source. I only want to pick one species-- in this case, let's say boron. So basically, how this works: if you go back to any of your elementary physics classes, you know what's happening here as the ions traverse this arc in the magnet-- the centrifugal force is balanced by the magnetic force, the v cross B force. So that has to be the case for ions going through this arc. And you also know the velocity of the ions coming into the arc, because you know their energy. And that is set by the extraction voltage. So I have a velocity that I can plug into this upper equation, and that is going to end up relating the mass of the ion that makes it through this resolving slit. That mass can then be related to the energy, which is related to the extraction voltage and the magnet current. So this is usually an electromagnet, in which I can adjust the current. So all you need to do to be sure that you're implanting one particular species is, you adjust the extraction voltage, and you adjust the current going through the magnet. And then the only ions that make it are those that you want to implant, with the right mass. So this is the way. And then once you have the ions that you need-- you've extracted out all the others, and you just have the ones you need-- then you can put them through a high-voltage accelerator to get them up to higher energies, if you want to implant at, say, 50 to 100 or 200 kilovolts. You focus the beam, and then there are a series of x and y scan plates, where you have an E field that's taking the beam and moving it around like I'm moving this laser pointer right now, scanning it randomly across the wafer. And that's how you do the ion implant. And then the important thing is that the wafer is attached here to this device, which has a current integrator, which integrates the dose going into it. So it knows when to start and stop the implant. So it's a very basic sort of physics-type accelerator setup. We go on to slide 7. What I just showed was the practical aspects. We won't talk about the physics of stopping yet, but let's just talk about what profiles look like. I drew a profile up here that I said is roughly what an ion implant might look like. Let's think about what that might be. Imagine we have the surface of a silicon wafer. So the silicon is on the right here-- this is the interface with the vacuum-- and on the left is the vacuum. And you have a beam of ions coming in, in this case, right at this red spot. Any given ion that comes in is going to come in and experience a series of collisions. And in fact, many of them are billiard ball-like. It's going to have a series of collisions with atoms in the substrate. You can see it's having little collisions here and there. And it loses energy as it goes in via these processes and eventually comes to rest at some point here, shown by this little blue dot. So there's a couple of pieces of terminology. We call the total range of this ion the total path length. It's the sum of all these little lengths. That's the total length it traversed.
The total length is not as interesting. What we usually care about is the depth. So the projected range-- that's the projection of its total path length along the normal to the surface. That's what we care about: how deep did it go. Usually, the projected range is very near the peak of the distribution. So the things you need to remember are: ion implantation is a random process. Ions come in-- a whole bunch of them come in, and they randomly hit various things and lose energy and end up in some distribution. So it's a statistical type of process. The ions are pretty high in energy. We're talking about 1 keV at the lowest, probably up to maybe 1 MeV. That's the range of energies, believe it or not, that people use in making modern devices. They can even get as high as a megavolt. These hit the silicon substrate, and they lose energy. I already mentioned these nuclear, billiard ball-like collisions, just like on a pool table. There's also electronic drag forces. So there's two mechanisms, and in the next lecture, we'll talk in great detail about the physics of nuclear collisions and the physics of this electronic drag. Just for now, take it as a given that there are these forces, these things that slow the ion down, so they eventually end up coming to rest in the crystal. So if we go on to slide 8, let's take a look at a plot of the distribution of ions implanted into silicon. Here's an example where the beam energy is 200 kilovolts. So you adjust the beam energy so that you have 200 keV ions coming in. And this is a plot of the concentration of the dopant in atoms per cubic centimeter as a function of depth into the crystal, into silicon. And we're showing several different types of ions. So if you look at this blue curve here, this is antimony. Notice the antimony didn't go very far. It ended up piling up, looking like a Gaussian-like distribution, pretty close to the surface. The black is phosphorus-- arsenic, I'm sorry. The black is arsenic. It went a little further. Phosphorus here is the red. It goes further. And the one that goes the furthest here is boron. That's shown in green. Boron is the lightest of all those elements, so it kind of makes sense. Boron is lighter. It's going to lose less energy in its collisions, so it's going to travel further overall. And not only does it travel further into the crystal-- its projected range here is about 0.5 or 0.55 microns-- it's also broader. Its distribution is very broad. Look how broad this is compared to the antimony, which has a very, very tight distribution. So as a physical process, it depends very much on the mass. We often describe profiles very roughly by a Gaussian distribution. It's not really Gaussian, but for a heavy ion like antimony, it turns out to be reasonably accurate. So the first-order description is a Gaussian, and a Gaussian has two variables. One is the projected range, Rp, which gives the peak point in depth of the Gaussian, and the standard deviation, delta Rp, which for a Gaussian is a measure of the half-width, basically. So for a simple Gaussian, we only need two parameters. And in fact, you can write-- on slide 9, I've written down the equation for a simple Gaussian profile with a certain peak concentration. And this Gaussian is centered at Rp. You can see that just by plugging in x equals Rp-- this whole thing goes to e to the 0. So you get the peak concentration at x equals Rp, and it has a standard deviation, delta Rp. If you integrate the Gaussian, that Gaussian's integral is known analytically.
So the integrated dose is just the square root of 2 pi times delta Rp times the peak concentration. So this tells you-- if I told you I implant antimony at a particular dose and energy, you can look it up. There are tables in which you can look up the delta Rp, and then, knowing the dose and delta Rp, you can calculate the peak concentration right away if you assume it was a Gaussian. So you can do some very simple things with a simple hand calculation. Again, what is Q? It's the dose. It's the integrated number of ions per square centimeter. So literally, in this profile, if I took the area under this curve, under the concentration versus depth, that's the dose, the number of ions per square centimeter. And again, we control that by measuring the integrated beam current. We'll say a little bit more about this. But if I go back briefly to slide 8, you notice some of these-- the antimony is quite symmetrical and quite Gaussian-looking. The boron profile is not so symmetrical. It's skewed. It has a larger sort of tail region towards the surface. It's got some skewness to it, and we'll talk a little bit more about why that is and how to model that. So it turns out the projected range, Rp, and delta Rp, the standard deviation, of all the common dopants in randomly oriented silicon-- so assuming the silicon is amorphous, and there's no channeling going on; we'll talk about channeling later-- these have all been measured experimentally. They've also been calculated very accurately from first principles. And these theoretical calculations can be done for almost any combination of ion and substrate. So you could be ion implanting cesium into carbon. And if you know the basic mass, and the number of atoms per cubic centimeter in the substrate, and the energy, you can calculate what the distribution looks like in an approximate sort of way, if you're assuming you have a Gaussian. So all of this has been calculated and measured. In fact, it's tabulated in a number of books. And it's in your textbook. Rather than tables, what your textbook does-- I took this directly from the text-- is it gives you plots. So on the upper left here, this is a plot of Rp. So this is the depth in microns that the ion travels. It's the peak, Rp, as a function of the energy of the implant for the common dopants. So here's boron. Again, you can see boron goes very far. So if I'm at an energy, say, of 80 kilovolts, and I have boron, the projected range is a quarter micron. That's 0.25. For arsenic, at that same voltage-- or that same energy, rather-- it's about 0.05 microns, or about 500 angstroms. So it's much, much shallower. And so you can see Rp is given by these straight lines here. And the standard deviation, delta Rp, as a function of energy is also given. So again, boron at 80 kilovolts, in the lower right, has a standard deviation of 0.08 microns. So it's going to be much broader than arsenic, which has a standard deviation of only 0.02, roughly. So you can read these right off the plot. And if you make the Gaussian assumption, all you need are these plots and a simple hand calculator, and you can draw the implant profile roughly for any of these species at a given energy. Now, these also assume that the substrate has been tilted and rotated-- and we'll talk a little bit about this-- to avoid channeling. So this is assuming the substrate is effectively amorphous. So let's go on to slide 11 for a minute, and let's just think about what happens.
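Here is what that hand calculation looks like as a short sketch. Rp and delta Rp are the boron 80 keV values just read off the plots; the dose is an assumed number for illustration.

```python
import math

# Gaussian hand calculation: given the dose Q and the table values Rp and
# delta_Rp, the peak concentration is Cp = Q / (sqrt(2*pi) * delta_Rp).
Q = 1e14       # dose, atoms/cm^2 (assumed for illustration)
Rp = 0.25e-4   # projected range, cm (boron at 80 keV, from the plots)
dRp = 0.08e-4  # standard deviation (straggle), cm (same source)

Cp = Q / (math.sqrt(2 * math.pi) * dRp)  # peak concentration, atoms/cm^3

def C(x_cm):
    """Implant profile C(x) = Cp * exp(-(x - Rp)^2 / (2 * dRp^2))."""
    return Cp * math.exp(-(x_cm - Rp)**2 / (2 * dRp**2))

print(f"Cp = {Cp:.2e} cm^-3")                      # ~5e18 for these numbers
print(f"C(Rp) = {C(Rp):.2e}, C(0) = {C(0):.2e}")   # peak vs. surface
```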
So that's the distribution. Now, we want to think a little bit about what happens if I don't scan the beam across the wafer. As I was saying, we take this beam in a real implanter and continuously scan it, to randomize and to implant the same dose everywhere on the surface. But if we just hold the ion beam at one point and just look-- let's look at the case where the implant beam is centered on a point in space, x, y, and z, at the 0, 0, 0 position. So if I draw axes, x going this way, y going vertically, and z going out of the board, or out of the page: at the 0, 0, 0 position, I imagine bringing in the ion beam and just holding it there, and then looking at this cloud of ions and where they end up, where they stop. And in fact, people have done this on the computer. It's not that hard to do a simulation called a Monte Carlo. Monte Carlo is just like gambling-- you send in a certain random distribution of ions. They have known energy but different impact angles, and you simulate these random trajectories for a group of ions. Maybe you could ion implant 10,000 ions at some spot in the wafer. And you see where they end up, what their three-dimensional distribution looks like. So this particular simulation shows 1,000 atoms of phosphorus that were shot into a silicon wafer at 35 kiloelectron volts. And that's what the cloud looks like. That's where they end up. And in fact, it's kind of an elongated ellipse in the x direction. x is the implant direction, and it's elongated because a lot of the high energy ions undergo very small angle collisions. So they don't end up going sideways very much. They end up scattering forward and then eventually stopping when they lose all their energy. Let's go on to page 12. Now, I'm showing some two-dimensional projections. What I showed before was a three-dimensional picture of this elongated ellipse, this cloud of where all the ions ended up. Now, I'm projecting that cloud onto the yz plane here. So this is along the beam direction, if you're looking straight on. The beam is coming straight in here like this, right in the center. Along the beam direction, it has sort of a lateral distribution that's kind of symmetric, which makes sense. If we look at the top view, so looking top-down, and the beam is coming in here right at this point, y, you can see the depth distribution. It's kind of peaking right around here at this point, and then it's kind of tailing off as you go in deeper. So you can approximate it by a Gaussian. If we guessed, the center is somewhere around here, around 50 nanometers. That's Rp. And delta Rp is about 20 nanometers. So that's the depth distribution. The lateral distribution is pretty Gaussian, and it has a certain straggle, what we call delta Rp perpendicular. That is, perpendicular to the beam direction. So we have two straggles to really think about, or two standard deviations. There's the one in the beam direction, so in depth, and then there's the one perpendicular to that, sort of in width. That's the lateral straggle. Now, I just showed that to give you an example, to physically get a handle on what's going on at a given spot. But we don't usually just hold the beam on one spot of the wafer. As I said, what you do is you randomly scan the beam across the entire wafer. If you want to mask a given region, you implant through a mask into a window. And the surrounding areas on either side of the window will be masked from the implant.
For example, just showing here on the lower right, I've imagined I put down a mask. It could be photoresist. It could be oxide. Somehow, I've patterned the mask, and I've opened up a window. Now, when I do the ion implant, of course I implant across the whole wafer, so I'm implanting across everything here coming in. But the ions stop in the mask. The only ones that make it into the substrate in this particular example are the ones that come through the window. Only in this window area does the substrate get implanted. So in this course, we're interested not just in the depth distribution, but in the lateral distribution of the profile. So I care about this window. I'm interested to know not only how deep the ions go, but how they spread laterally. If I'm making the source and drain, I care about how they spread laterally into the channel. That's an important thing to note. So in the lateral direction, of course, measuring the profile is hard. We saw last time that two-dimensional profiles are very difficult to measure. But we actually mathematically model them by assuming that the profile is composed of a product of a vertical and a lateral distribution. So in terms of x and y, let's say in this particular example that x is depth, OK, going deep into the wafer, into the crystal. And y is the lateral dimension. Then we can approximate the concentration c of x, y as being some vertical distribution-- be it Gaussian or whatever; we'll talk about it-- times a lateral Gaussian. You recognize this as a Gaussian function: e to the minus y squared over 2 delta Rp perpendicular squared. OK, so it's a Gaussian with a standard deviation, delta Rp perpendicular, in the y, or lateral, direction. So I've written over here on the left what we call a point response function. If I just shoot the beam in at one point, you can see what that looks like. And when you go to do this simulation, as it's done in SUPREM-IV, what it's doing is taking this point response function and then superimposing that response function across the whole window. And then it can give you what the distribution looks like, not only in the lateral direction, but in the depth direction. In fact, if you do this properly-- and SUPREM-IV does it for you-- it leads to some kind of lateral distribution under the mask edge. So that's just to remind you-- I haven't told you anything about the physics of stopping-- just to remind you how these profiles vary in x and y. So let's go on to slide 14 and think about what happens at a mask edge. And this is very important because you're making a MOSFET, because you're defining the source and drain extensions with some kind of mask. Let's assume the mask is thick enough to block the implant, so the mask might be the gate. You have the polysilicon gate of the MOSFET, and you want to implant everywhere around it, so it blocks the implant from hitting the substrate. So the lateral profile under the mask is going to be determined by what we call this delta Rp perpendicular, this lateral straggle. And this is an example of looking at two different implants. Here on the left, this is a 35 keV implant of arsenic, and a 120 keV implant, at the edge of a poly gate. So basically, over here on the right, see this, all these different grains? That's the polycrystalline silicon. That's the gate. This is the gate oxide underneath the gate. And this is the region to the left of the gate, which we often think of as the source.
And you can see this contour defines sort of the edge of where the arsenic ended up. In fact, this is a cross-section TEM, and we showed something like this last time. This was done by transmission electron microscopy with staining of the junction area. So they actually take the thin TEM sample, they dip it in acid, and the acid preferentially etches and makes very thin the regions that are very heavily doped with arsenic. So this is how you can see, with your eyes or with a microscope, where the junction ends up. And you can see this profile right here underneath the mask edge, or under the gate-- this distance from here to here is dominated by the lateral straggle of the implant, depending on how much annealing you do afterwards. So as we decrease the device geometry, as we make this gate length shorter and shorter, the implant straggle under the mask becomes a more dominant effect in determining the actual channel length. Knowing the lateral straggle is an important feature of MOSFET design. It not only helps determine the channel length, but remember, we gave examples where the shape of that lateral profile also determines some of the short channel effects, like VT roll-off and things like that. OK. Again, covering some of the more practical issues before we talk about the fundamental physics: on slide 15, I've mentioned a couple of times this idea of masking an implant. OK? So what do I mean by that? And how would you calculate how thick a mask you might need? So this is an example of masking an implant. I'm doing an implant, and again, the beam is rastered over the entire wafer, but I only want to implant certain regions, maybe only the P-MOS regions or N-MOS regions. So I have to put some photoresist over a certain region. And the question is, how thick does that material have to be? Whether it's resist or nitride or silicon dioxide, how thick does it have to be? Well, it turns out that a dense material-- one that has a high density-- can be a lot thinner than a light material, because a light material allows the beam to go in deeper. So here's an example. I'm showing you a plot here, concentration on the vertical axis as a function of depth. And let's say this is what it looks like. This profile right here is what the ion distribution looks like. And this depth right here, marked x sub m, that's the thickness of the mask. And you can see what I've done here. Actually, I've chosen a mask that was too thin, sort of intentionally. What happens is, most of this distribution got implanted into the mask, and a little bit of its tail right here, which is hatched by this red hatched region, actually got into the silicon. Generally not desired, but just to show you as an example. So the concentration, c star, at xm gives you an idea of the concentration that will now be at the surface of the silicon. And this whole hatched dose is what got into the silicon. So for masking, what we do is we calculate the concentration profile in the mask material. And the reason we have the star here-- and I know that we use star in this course to mean a lot of different things, but this star means you're in the mask material. So whatever that material is, you have to use the projected range, Rp star, of boron in that material, and the standard deviation, delta Rp star, of boron in that material, in this equation. So this is just like a regular Gaussian. The only difference is that I've represented it with Rp star and delta Rp star.
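Here is a minimal sketch of that starred profile, evaluated at a few trial mask thicknesses. The Rp star and delta Rp star values below are made-up illustrative numbers, not values from a table; for a real design you would look them up for your ion in your mask material.

```python
import math

# The implant profile inside the mask has the same Gaussian form as in
# silicon, but uses the range and straggle for the ion IN THE MASK MATERIAL.
Q = 1e14             # implant dose, atoms/cm^2 (assumed)
Rp_star = 0.20e-4    # projected range in the mask, cm (made-up value)
dRp_star = 0.06e-4   # straggle in the mask, cm (made-up value)

Cp_star = Q / (math.sqrt(2 * math.pi) * dRp_star)

def C_star(x_cm):
    """Concentration in the mask at depth x below the mask surface."""
    return Cp_star * math.exp(-(x_cm - Rp_star)**2 / (2 * dRp_star**2))

for xm_um in (0.3, 0.4, 0.5):  # trial mask thicknesses, microns
    print(f"C*({xm_um} um) = {C_star(xm_um * 1e-4):.2e} cm^-3")
# The concentration at the back of the mask falls off very fast with
# thickness; the design criterion against the background doping comes next.
```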
So what I want is this concentration, c star, at this point to be less than some background concentration, whatever it might be. Let's say your wafer is doped 10 to the 14th per cubic centimeter. You want to make sure that the concentration profile in the mask is such that, by the time the profile gets to the edge of the mask, it's down below 10 to the 14th, so you really can't see it. That means you have a good mask. You've designed it to be thick enough. If it's thinner, then too much of the species is implanted into the wafer itself. It's not a good mask. So let's look at slide 16. And in fact, here what we want to calculate is the area of this red region. That's the dose that penetrated the mask and made it into the silicon, or into the material below. So we can actually calculate the required mask thickness from this equation that I just showed back on slide 15. I'm going to set this concentration equal to the background concentration, whatever it is in the wafer, solve for that, and that gives me the mask thickness, xm, in terms of Rp star in the material and the standard deviation, delta Rp star. So basically, it says the mask should be as thick as Rp star, obviously, plus some multiple m of the standard deviation. Typically, m might be 5-- five standard deviations of thickness, or something like that. And the dose that penetrates the mask, then-- I can just substitute that in. All you need to do is integrate this little red region. It's given by this. It turns out to be a complementary error function. So you can actually figure out how much made it into the silicon. Generally, you design the mask so that it doesn't make it in. And there's a calculation on page 457 of your text, a hand calculation of this type of thing, to give you an idea. A lot of times, people use hand calculations or they use SUPREM-IV to figure out, OK, if I have a certain thickness-- I have a half micron of oxide-- how far is boron going to go into that? And is it thick enough for me to mask my implant? So that's masking in terms of the vertical direction. How about in the lateral direction? This is a little bit more tricky, and you just have to think here intuitively. This is on slide 17. Implants are, by the way, rarely done perfectly vertically. If this is the wafer, it's very rare that the incoming beam is at 90 degrees. And we'll talk in a little bit about why that is. But it turns out a lot of implants, either for device design reasons or whatever, are done at some tilt angle. The beam is like this-- you actually tilt the wafer. And in fact, halo implants are almost always done at some kind of tilt. And the reason you tilt it is because you want to-- in this case, this is the gate, and I'm trying to do an implant, and I'm trying to get the ions to sneak a little bit underneath the gate a certain distance, and use the implant to get the ions underneath there. So in that case, I'll have to take this tilt into account when I'm calculating the profile. And in fact, you can get shadowing effects. This is an example of a 50 keV phosphorus implant, and this was tilted at a very high angle, say, 30 degrees. And this red region, which represents the gate, the polysilicon gate, ends up shadowing the beam. So you end up with a little space here where the shadow was, that did not get ion implanted.
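The shadowing itself is simple geometry: a mask of height h casts a shadow of width h times the tangent of the tilt angle. A quick sketch, with an assumed gate height, just to get a feel for the numbers:

```python
import math

# Geometric shadowing next to a mask during a tilted implant:
# shadow width w = h * tan(tilt angle).
h = 0.2e-4  # mask (gate) height: 0.2 microns in cm (assumed value)
for tilt_deg in (7, 30, 45):
    w = h * math.tan(math.radians(tilt_deg))
    print(f"tilt {tilt_deg:2d} deg -> shadow width {w * 1e7:.0f} nm")
```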
So if I want to reproduce a symmetric profile on the source side and the drain side, I then need to tilt the wafer the other way and come in at 30 degrees to the normal in this direction, for symmetry. So if you're doing tilted implants, you have to think about how your device is oriented with respect to the tilt direction, and maybe do some other implants to symmetrize what's going on. And again, that's a geometrical effect. It's relatively easy to simulate. This is a SUPREM-IV simulation. You just have to keep track of that. OK, the next concept I want to introduce is thinking a little bit about how this Gaussian-- let's say you implant a Gaussian profile, OK, here on slide 18. How does that profile evolve when I have to heat the wafer to get rid of the damage? How is it going to evolve? Well, we can first do a very, very simple back-of-the-envelope type of calculation. Let's look at a Gaussian implant distribution. This is equation 1, what we just put down for a Gaussian implant. It's characterized by this Rp, projected range, and a standard deviation, delta Rp. Now, if we go back to a prior handout, 14, when we were talking about diffusion of Gaussians, we actually wrote it like this-- the concentration can be written in this form. And look at the denominator here. You've got 4Dt in it, as opposed to 2 delta Rp squared. So in fact, these are both Gaussian functions. They are mathematically equivalent, and I can make them equal if I just set delta Rp equal to the square root of 2Dt. I can equalize these two arguments. So we can then write down an equation, a very simple equation, for the effect of the anneal. If I do an implant, and you assume it's Gaussian, and then I anneal it in a case where the diffusivity is not concentration-dependent, I expect the final profile to remain Gaussian. So a Gaussian remains a Gaussian-- it preserves its shape upon further annealing. And in fact, I can just write down by inspection what should happen to it. If you start out with this distribution here-- this is the ion implant distribution-- then in the argument in the denominator of the exponential, instead of having just 2 delta Rp squared, I've got a new variance, which is delta Rp squared plus 2Dt, essentially. And again, the same combination, delta Rp squared plus 2Dt, appears in the pre-exponential. So for the very simplest case, where you have to do a hand calculation, we know how a Gaussian implant profile will evolve. We're going to go, of course, beyond that in the next few lectures and talk in more detail. Here's an example on slide 19 of just those equations. So this is assumed to be as-implanted, the red, assuming a very simple Gaussian distribution with a certain standard deviation, delta Rp. And after it's gone through diffusion, it's shown here in the blue. The peak has dropped, and now it has a new standard deviation, which is the square root of delta Rp squared plus 2Dt. So this gives us an idea of how we can evolve an implanted Gaussian during a diffusion step. So let's go on to slide 20 and talk about a real profile. Real profiles are not perfectly Gaussian. In some cases, they're more Gaussian than others, but they're generally more complex. If we look at the profile for a light ion-- and light means very light compared to silicon, say, boron-- these light ions tend to backscatter. Imagine you have a big, heavy bowling ball, and you're throwing a ping pong ball at it. That's the boron. It's a ping pong ball.
Well, a lot of those boron atoms will ping right back, right? A few of them will scatter forward, but a lot of them can come right back. And so you can see that if you look at a boron distribution, towards the surface, it's actually skewed. The concentration profile is a little higher towards the surface, as if a lot of the boron ions came in and just got pinged right back. So this blue distribution that represents boron at 38 keV-- it's skewed towards the surface. And it relates to the fundamental physics of the scattering. Heavy ions like antimony actually tend to scatter deeper. They're heavier than silicon. They can knock the silicon forward, and they themselves retain a lot of forward momentum, because they're like a truck. If the truck hits a little Subaru, it kind of tends to go right through the Subaru. So the heavy ions like antimony have this skew deeper into the substrate. The interesting thing about these two profiles-- so this is antimony at 360 keV-- is that we adjusted the energy to give them the same projected range. So you notice they peaked at the same point. 38 keV boron peaks at the same point as 360 keV antimony, but they have very different skewness. A Gaussian distribution by definition is not skewed, right? A Gaussian, by definition, is perfectly symmetric. That's what the equation says. So if we're going to represent this, we need a little more complicated mathematical function than a Gaussian. And we'll talk a little bit about that type of function. So here's an example, just on slide 21, of some different energy boron implants. So I have concentration of boron versus depth. And here's a 50 keV implant. Interestingly, it looks pretty symmetrical. It's fairly Gaussian. Then 100 keV. But as I go to higher and higher energies, it's becoming less and less symmetrical. So just looking at this black curve right here at 500 keV-- look how skewed that is towards the surface. Whereas, if I had used the Gaussian approximation that you can do with a simple hand calculator, this dashed line is what you would have gotten for the profile. So the Gaussian is somewhat accurate. It gives you a rough idea of where the peak is, but it doesn't represent this forward scattering or the skewness. In fact, these black profiles are something called Pearson IV distributions. Those are the solid lines in that plot. Pearson IV is a statistical function, and it's one of the most popular types of profiles to use to accurately represent range distributions for ion-implanted atoms in silicon. Pearson IV works really well as long as you don't have ion channeling. It does an excellent job of describing ion implant profiles. As the name suggests, Pearson IV needs four variables. It has four moments, so to speak. A Gaussian only has two, so naturally the Gaussian is simpler. In fact, on slide 22, just reminding you, or introducing you to the idea, that an arbitrary statistical distribution can be described by a series of moments. And here are the first four moments of an arbitrary distribution. The first two, we've already talked about. The first moment is called the projected range, Rp. And mathematically, you can derive it or calculate it-- it's the first moment, so it's the integral over all space of x times the concentration, c of x, normalized by the dose. So you take the weighted integral, and that is the definition of Rp. You may have seen some of these types of concepts in statistics classes.
The standard deviation is the second moment. It's actually the square root of the integral of the quantity x minus Rp, squared, times the concentration, that whole thing divided by the integrated dose. Remember the integrated dose, Q. Just remember that Q here is equal to the integral of the concentration over all space, dx. So you're always normalizing by this. So the first moment and the second moment are used, of course, in the Gaussian distribution. The third and fourth moments are used in the Pearson IV. One is called the skewness. Again, that's x minus Rp cubed-- the weighted integral of that-- divided by Q times delta Rp cubed. And the kurtosis is defined in terms of the fourth power-- x minus Rp to the fourth power. And the skewness and kurtosis, as they've been defined here-- gamma and beta-- are dimensionless, because you notice we're dividing by delta Rp cubed here and delta Rp to the fourth. Just so you know, if you run SUPREM-IV, you won't have to do this. But if you were stuck on a desert island, and all you had was an HP calculator, and your life depended on it, you could actually calculate the Pearson IV distribution from a simple differential equation. The Pearson IV is the solution f to this differential equation. So the differential equation says df by dxn-- where xn is this variable here, x minus Rp-- is equal to this function over here: the quantity xn minus a, times f, divided by this polynomial. So it turns out this differential equation can be solved relatively simply using a numerical method if you know these numbers: a, b0, b1, and b2. And each one of these constants, like a, is defined in terms of the skewness, the kurtosis, and of course delta Rp, the standard deviation. The b's are related as shown here. So all these constants are defined in terms of the moments of the Pearson IV. It turns out that for most dopants in silicon, if you know the skewness, gamma, you can actually estimate the kurtosis, beta, from a simple little analytical formula. So in fact, they're related this way. So really, for most dopants, there are only three independent moments-- Rp, delta Rp, and gamma-- because you can calculate beta from gamma. If you just look at this equation for a moment-- you don't have to solve it-- what are some properties of this function? Well, where does the peak occur? The peak occurs wherever the derivative of the function is 0. So it actually occurs at xn equal to a, which is x equal to Rp plus a. So it doesn't occur directly at Rp. Unlike the Gaussian, the peak of this occurs just slightly to the left or the right of Rp. When the skewness, gamma, is 0, f simplifies to a Gaussian. So if I just take this and a goes to 0, then I'm going to get a simple Gaussian. There's no skewness. If the skewness is negative, so gamma is negative, like in the case of boron, a is going to be positive. So if gamma is negative, a is a positive quantity, and the peak of the distribution occurs deeper than Rp. So for boron, if you're careful about calculating it with Pearson IV, the peak will be just slightly deeper than Rp. It'll be Rp plus a. When the skewness is positive, like for a heavy atom like arsenic or antimony, the peak is going to be slightly shallower than Rp. So again, slightly more sophisticated than a Gaussian is the Pearson IV. And that's how it's actually calculated by the computer.
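For the desert-island case, here is roughly what that numerical solution looks like as a sketch. The moment values are made-up illustrative numbers, and the expressions for a, b0, b1, and b2 follow the standard Pearson parameterization in terms of gamma, beta, and delta Rp, as given in the text; treat them as an assumption to check against your edition.

```python
import math

# Solve the Pearson IV ODE df/dxn = (xn - a) * f / (b0 + b1*xn + b2*xn^2)
# numerically, with xn measured from Rp. Moment values below are made up.
Rp, dRp, gamma, beta = 0.15e-4, 0.05e-4, -0.5, 3.5  # cm, cm, skew, kurtosis
Q = 1e14                                            # dose, atoms/cm^2

A = 10 * beta - 12 * gamma**2 - 18
a = -gamma * dRp * (beta + 3) / A
b0 = -dRp**2 * (4 * beta - 3 * gamma**2) / A
b1 = a
b2 = -(2 * beta - 3 * gamma**2 - 6) / A

# March the ODE with simple Euler steps over +/- 5 straggles around Rp.
n, half = 2000, 5 * dRp
dx = 2 * half / n
xn = [-half + i * dx for i in range(n + 1)]
f = [0.0] * (n + 1)
f[0] = 1e-30  # tiny seed deep in the tail; normalization fixes the scale
for i in range(n):
    dfdx = (xn[i] - a) * f[i] / (b0 + b1 * xn[i] + b2 * xn[i] ** 2)
    f[i + 1] = max(f[i] + dfdx * dx, 0.0)

norm = sum(f) * dx                   # normalize so the profile carries dose Q
C = [Q * v / norm for v in f]
peak = max(range(len(C)), key=C.__getitem__)
print(f"peak at Rp + {xn[peak] * 1e4:.4f} um; a = {a * 1e4:.4f} um")
# Negative gamma gives positive a, so the peak sits slightly deeper than Rp,
# exactly as described above. With gamma = 0 and beta = 3, the same code
# reduces to a plain Gaussian.
```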
So these four moments that we've been talking about have actually been tabulated. I took this particular table-- I copied it out of a 1980 book edited by Gibbons on ion implantation. And it's just showing, at a couple of different energies-- 20, 50, 100, and 200 keV-- for three common dopants: Rp; sigma p-- now, sigma p is the same as delta Rp, it's just a different notation; the skewness, gamma; and sigma p perpendicular, or delta Rp perpendicular. So just looking at boron, again, its skewness is negative, right? All of these gamma numbers are negative. That's why it's skewed towards the surface. And so you can look these up in tables like this, and if you have your calculator, you can do the Pearson IV equation, and you can calculate what the profile should look like. OK, so that gives you some idea of how engineers use these numbers. Now, let's go on to slide 25, and let's talk about implants in real silicon. Well, to be a little more careful, real silicon is single crystal, right? We know it has the diamond cubic structure, and it has a regularity to its structure. And in fact-- I'm hoping not to make anybody carsick here, so be careful, don't shake your head too fast-- this is a picture looking down the 1,1,0 direction, the axial channels in crystalline silicon. What do you see in this picture? Well, you see a very regular pattern of atoms, and you see big openings, wide open spaces, because the atoms at the surface sit in front of, and hide, the atoms below. So you can imagine being an ion going down this crystal. And if you're aligned-- if your energy or your momentum is aligned just in the right direction-- you can channel right down some of those channels. And you can imagine going down there without losing much energy, without banging into a lot of the silicon atoms, because of this regular crystal structure. And in fact, this is a two-dimensional representation, a cartoon up here, of what I'm talking about. Imagine these red atoms are silicon. They're all lined up in a row, and my incoming ion comes in just pointing down that channel. It can just barely skim the edges of the channel, have very few small angle collisions, and be steered a long way before it stops. That's very, very different than going into a random material where there are silicon atoms all over this page. So ion channeling, because the material is single crystal, is really going to complicate all those equations I just talked about, the Gaussian and the Pearson IV. They're all assuming that the atoms of the crystal were randomly oriented. They didn't say anything about going down a channel. The physics of channeling is quite a bit different from what we've just been talking about. So channeling is good and bad. Mostly, it's bad, because it can produce some really unexpected results. As you can imagine, if this is your crystal, and you're looking down the 1,0,0 channel axis-- so these are all the open spaces-- if the beam happens to be aligned right to that axis, you can get profiles that go a lot deeper than you would have expected. So this is an example of implanting phosphorus at 40 keV into silicon. And here's the concentration of phosphorus as a function of depth. And in fact, looking at different doses, the very low dose here is about 1 times 10 to the 13th. Look how deep the phosphorus went, all the way down to-- I don't know, it's got this shoulder here-- down to about 0.6 or 0.8 microns. As we go higher in dose, notice this.
The very highest dose, 7 times 10 to the 14th-- you don't see that channeling tail. Why would that be? Does anybody have any idea why this very high dose implant didn't go in real deeply, whereas the low dose one did? [INAUDIBLE] OK. So you have a high dose coming in. So what is that dose going to do to that beautiful crystal, all those beautiful channels? Smash them up. Basically, just imagine taking that and throwing a bunch of baseballs at it. And pretty soon, after you've thrown enough baseballs, that perfectly nice-looking channel has a whole bunch of atoms in it, because you completely randomized the crystal. And that's called amorphization. So when you put in enough dose-- you throw enough baseballs at this thing-- then instead of having a beautiful crystal structure, the silicon is completely amorphous. There is no structure left. Then, as you go in, most of this implant was essentially done into a material that didn't look like this. It just looks like an amorphous mess. And that's why you don't get this channeling tail. This other implant was such a low dose, it barely damaged the crystal at all. The crystal maintained its nice structure, and so the ions were able to channel in and go very deeply. So this is bad, because it turns out it's very hard to control what this tail looks like. And a lot of times, you don't want such a deep profile. So in fact, in this case, channeling is viewed as something one wants to avoid. Actually, on page 28, here's another example, just as we just saw, of boron. Boron is a famous ion for doing a lot of channeling. It tends to be steered very easily. One problem with ion channeling is that the profile shapes don't scale with dose, because, again, a low dose doesn't damage the crystal much. A high dose here, like 2E15, does a lot of damage. And so this green profile is not simply equal to the 2E13 profile bumped up by two orders of magnitude, which is what you'd think it would be. In fact, it's got a different shape, because as you were doing the implant, you were doing a certain amount of damage, and you were changing the appearance of the crystal and its channels. So there are all kinds of issues when you're trying to model ion channeling. Page 28 actually has a little bit more information about ion channeling. There is something in ion channeling called the critical angle. What is the critical angle, psi? Well, it's the maximum angle-- the largest angle of the incoming beam with respect to the axis of the channel before the steering action of the row is lost. So at this angle, you can just barely come in and be steered, and go in and out and in and out. And if you have a slightly higher angle, what's going to happen is you're going to get knocked out of the channel. So that critical angle is important to know. It tells you how well the beam has to be aligned to the channel to be sure that you get this channeling effect. And psi here is what's called the critical angle-- in fact, the reference above has a little bit of information on how to calculate it. And these are some examples of critical angles. Say, along the 1,1,0 direction, for boron it's 3 degrees. Arsenic is about 4.4. So it gives you an idea: if this is my crystal and I'm coming in, well, if I'm within 3 or 4 degrees of that axis, I'm going to get a lot of channeling. If I tilt the crystal so I'm outside 3 or 4 degrees-- say, I'm at a 7 degree tilt-- most of the ions will come in, and they won't actually be aligned with the channel.
So you won't get much channeling. So it kind of gives you an idea of how much accuracy you have to have, when you're shooting your ions in, to enable channeling. As we see here on slide 29, people do something called controlled misalignment of the crystal when they put the wafers in the ion implanter. And they don't put them in like this. They try not to put them in so you're looking straight down the 1,0,0 axis. What they do is, they take the wafer and they put it on the plate, and they tilt the plate a little bit with respect to the beam. In fact, typical tilts are about 7 degrees. And they actually also rotate it. So here, I've just taken this crystal, and in the computer, tilted the crystal and rotated it. And look, lo and behold, a perfectly crystalline material all of a sudden looks somewhat random to the eye. And so it looks somewhat random to the incoming beam. You see the channels? A lot of them have now been blocked by the tilting and rotation. And when it looks random, that's good. That means you're not going to have a lot of channeling. You can never get rid of it altogether. You can minimize it. Because the problem is, as an ion comes in here, it looks pretty random, but then the ion might get knocked-- some of the ions get deflected and end up lined up with a channel. So you can scatter incident ions back into channeling directions in their first few collisions. So you can't get rid of it, but you can minimize it. And for ions that are not channeled early in the process, as they get closer to their end of range, as they lose energy, the critical angle increases. In fact, I didn't mention that. If I go back one page, to slide 28, this was Lindhard's expression for the critical angle. It depends on some constant, E1, divided by the energy, E. So this psi goes up-- as the ion gets slower and slower, the critical angle goes up, so the probability of channeling is going to be bigger at the tail of a profile. In fact, you see that on slide 30. That's very evident when we actually look at a profile that was ion implanted, where we tried to eliminate channeling by tilting the wafer by 7 degrees off the 1,1,1 axis. So this is concentration versus depth, and there are a couple of different profiles here. In fact, what people were trying to do was, they were also trying to cover the wafer with an amorphous material. People thought, oh, well-- why don't I put down a layer of something amorphous? That will prevent channeling, right? Because I'm covering up the crystal with this amorphous material. But in fact, it doesn't really avoid channeling. Putting on a little bit of nitride doesn't really help. You still get this so-called exponential channeling tail. You see, all four profiles have this exponential tail. That's due to channeling. As the ions get to their end of range, these ions down here have lost a lot of their energy. When they get to this point, the critical angle for channeling goes up, so it's much more likely they get steered into a channel. And oops, they go a little further. So this tail is called residual channeling. Even an amorphous overlayer will not eliminate it, and tilting the sample will not eliminate it. So this tail does need to be modeled if you want to get a very accurate simulation of what your profile really looks like. So let me just summarize a little bit about-- so far, I haven't talked about the physics of scattering or of stopping or whatever.
But we've just talked about how one might analytically treat ion implant profiles. We have a Gaussian distribution, which has two coefficients. It does an OK job-- it's not very accurate, but it's OK. Pearson IV has four coefficients. It's very good for amorphous material, but it doesn't include channeling. If I take Pearson IV and I add an exponential tail-- so now I need six coefficients-- I can include channeling. And it works in some cases. There's something called a dual Pearson, which we'll talk about. That's two Pearson profiles superimposed, and it has nine coefficients. So look, as I'm going from top to bottom, I'm adding more and more free parameters. Of course, you can model anything. You can model an elephant with enough free parameters. Legendre polynomials have 19 coefficients. That's getting kind of ridiculous. But there are tables of a lot of these coefficients that are built into textbooks or into SUPREM-IV. And they have a lot of these coefficients as a function of mass, energy, dose, tilt angle, all that. These analytic calculations are very fast. They're very efficient. But you have to be very careful when you're using them, because the coefficients for dual Pearson, or the coefficients for these Legendre polynomials, whatever it is-- a lot of times people are just fitting experimental data, which may have artifacts of its own. Remember, when we talked about SIMS, we talked about a knock-on process. So SIMS itself will tend to produce tails, just from the secondary ion mass spectrometry, from the analytical technique itself. So you have to be sure that somebody wasn't actually modeling an experimental artifact. Anything based on tables, you have to take with a grain of salt and just be a little careful. Check it out. Slide 32 actually just shows you, in SUPREM-IV, four different models of varying complexity for modeling a particular implant. These are concentration contours. Each color or each grayscale represents a different concentration of the dopant. This is a polysilicon mask edge, so none of the dopant can make it through the mask over here. It only gets implanted on the right-hand side. So it's particularly tricky to get the right profile. So this particular model, upper left, is assuming you're going into amorphous silicon, so it has a very simple shape. This one is assuming the dual Pearson type of model, so it's going to have some skewness. This particular profile is dual Pearson with extra coefficients that take into account tilt of the wafer and rotation. You can see it looks quite different from that. These three here, starting in the upper left corner, are what we call analytic. They're based on these equations that we've been talking about. The Monte Carlo model is quite different, and we'll talk about that next time. In Monte Carlo, literally, in the computer, you shoot ions into the crystal, and you follow the scattering processes. And you will see where all those ions end up. A computer can do that. It can sit there all day, and it can calculate the trajectories for 10,000 ions and then plot their statistical distribution. So Monte Carlo tends to be quite accurate, if you have enough physics in it. Unfortunately, it takes a long time, because the computer has to go in there and follow every ion and see where it ended up. Here's an example, actually-- a few words about Monte Carlo. What you do is, you integrate the equation of motion for a representative ion from the point when it hits the surface of the silicon until the point where it comes to rest, where it stops.
And you repeat this process for as many ions as you can, using random starting impact parameters. And there is this so-called law of large numbers that says the error, the scatter in what you're going to see, goes like 1 over the square root of n. So if you want three decades of a profile to be accurate with less than 10% error, you need 10 to the 5th ions. So 100,000 ions is about the minimum you need to get a nice smooth profile. So that's going to take a while. You may end up trying a Monte Carlo simulation on some of your homework. It could take half an hour, it could take overnight, simulating one implant, depending on how many ions you need to follow. This is an example of some Monte Carlo simulations that were published back in 1997 for relatively low-energy boron, here at 0.5 keV and 1 keV. And you can see the statistical nature of these profiles. The simulations are these little boxed, digitized-looking curves, because they're actually following individual ions and trying to build up a distribution based on that. So the nice thing is you can get reasonable accuracy. The bad thing is it could take you days if you're trying to fit a profile--literally days. And before we finish up, I wanted to mention one other type of calculation that's not very popular but was developed a while back, and is actually kind of a compromise between Monte Carlo and analytic. It's something called the Boltzmann Transport Equation, or the BTE, developed by Giles. He was able to write down a Boltzmann transport equation and then model an ion distribution. Instead of following individual ions, he was following the distribution--the momentum and energy distribution--of these ions. So there's economy of scale in doing that. He followed the distribution as it changed and evolved in the computer. I won't go into the detail, but basically, by following distributions instead of following ions, it's much faster than Monte Carlo, and it can have just as good accuracy. Unfortunately, it never became very popular, so I don't think it's in SUPREM-IV. There may be some versions of simulators out in the world, though, that you may come upon that have the BTE method for ion implant. I think it's actually a very good method. It's just too bad it didn't catch on in a big way. So on slide 35, let me summarize. We saw that ion implant is the preferred process for introducing dopants. It has excellent control. It's very reproducible and uniform. The key parameters: the energy of the incoming beam, the dose, the tilt of the wafer with respect to the beam, the rotation of the axis of the wafer, and whether there are any masking materials on the wafer. Controlling the temperature of the wafer during the implant is also somewhat important, because it controls, to a certain extent, the damage that gets done to the crystal. And the dose rate, the beam current--how quickly you're putting it in--determines how much you heat the wafer up, and that's important. So in practice, there are a number of key parameters. There are tables for the first three moments, Rp, delta Rp, and skewness, that you can get out of the literature. The most popular analytic formulation is the Pearson IV for nonchanneling conditions. If you want to model channeling, you need more parameters than four. Usually, people use dual Pearson and lookup tables.
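To make the 1-over-square-root-of-n point concrete, here is a toy Monte Carlo sketch in Python. The "physics" is deliberately fake--random free flights with a random fractional energy loss per collision, nothing like real nuclear and electronic stopping--but the bookkeeping (follow each ion until it stops, histogram the stopping depths) and the statistical scaling are the real story.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_ion_range(E0=1.0, loss=0.3):
    """Follow one 'ion': random free flights, with a random fractional
    energy loss per collision, until the energy is essentially gone.
    Toy physics only -- the point is the Monte Carlo bookkeeping."""
    E, depth = E0, 0.0
    while E > 0.01 * E0:
        depth += rng.exponential(E)      # step length shrinks as it slows
        E *= 1.0 - loss * rng.random()   # lose a random fraction per collision
    return depth

for n_ions in (1_000, 10_000, 100_000):
    depths = np.array([toy_ion_range() for _ in range(n_ions)])
    hist, _ = np.histogram(depths, bins=50)
    # relative noise in a bin ~ 1/sqrt(counts): the law of large numbers
    occupied = hist[hist > 0]
    print(n_ions, "ions: mean bin noise ~", (1 / np.sqrt(occupied)).mean().round(3))
```

Running it shows the bin-to-bin noise shrinking roughly threefold for every tenfold increase in the number of ions, which is why 10 to the 5th ions is about the minimum for a smooth three-decade profile.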
Monte Carlo is slow, but it's probably the most accurate, or one of the most accurate. The BTE is another approach you may find in the literature. It's an alternative to Monte Carlo and faster, but not that widely used. What I didn't get to cover today, we'll spend the lecture on next time: the detailed physics of these billiard-ball collisions, the physics and modeling of nuclear stopping and the electronic drag force, so we can see the physics of the implant in more detail. Today, I mostly talked about practical aspects. OK, if you came in late, a couple of reminders. Pick up handout number 20. This describes your final term projects. Next time, I'm going to bring a sign-up sheet. You can sign up for the topic you want to study and, most importantly, whether you want to do an oral report or a written report. And next time, I'll have the homework.
JUDY HOYT: Where we were--where we are. I was out last week; I went to a conference. So you had your first two lectures on dopant diffusion given by the TA. And here we are, Lecture Number 10. We're going to talk about some more advanced models for dopant diffusion. Last week, you also had homework Number 3 handed out, so hopefully you've all started on that. You're using SUPREM for your homework. Get used to running SUPREM, because that homework is due this Thursday. And then next Tuesday, you'll have another homework, homework Number 4, going out, which is also going to use the process simulator SUPREM IV and more sophisticated models. OK. So as I mentioned, today's lecture is essentially the third lecture out of a number that are on dopant diffusion and profile measurement. The last couple of lectures, Maggie talked about the relatively, quote unquote, "simple" cases of constant diffusivity: the diffusivity does not change in space, basically, throughout the sample. That applies when you have a low dopant concentration, say, 10 to the 18th or less, such as when you're diffusing the well of the CMOS. We'll talk today about what happens when that assumption is broken. You also saw last time that you can design a diffused layer based on a sheet resistance requirement. As a device engineer, you might be told you need a diffused layer of such and such a thickness, and it should have so many ohms per square of sheet resistance. You saw how you could use the Irvin curves, or you can use numerical techniques, to design a diffused layer. Last Thursday, Maggie also introduced a relatively simple but very powerful method for doing numerical solutions. There are only about three or four cases of diffusion that you can solve analytically, and of course, we try to give you some of those in the homework. But most of the time, you have to do it numerically. And this is a very simple technique called the finite difference algorithm. It's slow, and it's not actually what's used in SUPREM IV--SUPREM IV is more sophisticated than that. But if you were stuck on a desert island and all you had was a computer with you, you could write your own finite difference code in about three or four minutes, and you could solve a diffusion problem numerically, even if you didn't have SUPREM IV. Maggie also talked about the need to modify Fick's law for something called electric field effects--electric fields that are induced. These can enhance the effective diffusivity of a high-concentration species. So if you have a high concentration of arsenic diffusing, let's say, in a low-concentration boron background, the arsenic diffusivity can be enhanced by up to a factor of 2 due to this electric field effect. But even more of an enhancement, more than a factor of 2--very dramatic diffusion--can be obtained for the species of lower concentration. So she showed an example with a low boron concentration that was constant at the start, and then arsenic was diffused into it. And after the diffusion, the boron actually had moved, even though its profile was flat. That cannot be Fickian diffusion--Fick's law would say, once the profile is flat, no diffusion. So that's due to the electric field effect.
So that's what you went over last week. Today, I want to cover three items: concentration dependent diffusion, because this is really prevalent in silicon, especially in making MOSFETs; segregation and interfacial dopant pileup; and starting to look at an atomic scale model of dopant diffusion. OK. Let's go on to slide number 2. This plot introduces so-called Fermi level effects, or concentration dependent diffusion. It's a cartoon plot, but it shows concentration on a log scale on the y-axis versus depth, and there are a couple of solutions here. Look at this one starting at a surface concentration of 10 to the 18th, going down here in red. This is a complementary error function, just like we learned a couple of lectures ago, for when you have a constant surface concentration. If you have a higher surface concentration, say up in the mid 10 to the 20s, this red dotted curve shows what a complementary error function would look like. Well, what we actually find when we do diffusions of a number of dopants in silicon at high concentrations is that's not what they look like. They don't look like this red curve. They look more like the blue curve, or the green one. They have a much flatter top and a much more abrupt drop-off. And this tends to occur when the dopant concentration is greater than ni--n much, much greater than ni. So here, n, the concentration of arsenic--let's say this is arsenic--is 2 times 10 to the 20 up here at this point, and ni is a few times 10 to the 18th. So when you're orders of magnitude above ni, you see that the complementary error function does not yield the correct profile; the real profile is more box-like. And by the way, this little dashed horizontal line shows you where ni is. So clearly, over this portion of the profile, n is much, much greater than ni. So at high dopant concentrations, we observe for a lot of the dopants in silicon that the diffusivity appears to be larger than it is at low concentrations. And what this means is that Fick's equations have to be solved numerically, since basically what we see is that the diffusivity is no longer a constant throughout the sample. It depends upon the local dopant concentration at each point. So at each point along this profile--here, here, here, here, and here--the diffusivity is a slightly different number. In fact, for the blue curve it's proportional to the ratio n over ni. And of course, n over ni is changing as I walk down this profile. So there's no way I can do an analytic solution; at every point in space I need to apply a different diffusion coefficient. That's what we mean by not equal to a constant. So in that case--remember, this was Fick's second law--I cannot pull the d out of the partial derivative. So partial c, partial t is equal to the first derivative with respect to x of this product: the effective diffusivity of the dopant times partial c, partial x. Where now da effective--I should write here--is actually a function of x in this high concentration case. So I cannot pull it out of the derivative. I have to solve it, most likely numerically. OK. So that's an introduction to what people see. How do we explain this effect? Well, as you might imagine, people explain it through the Fermi level. Basically, it's the dependence of the point defect concentrations on the doping, or the Fermi level.
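Tying this back to the desert-island finite difference idea from a moment ago, here is a minimal Python sketch of what "solving it numerically with D inside the derivative" looks like. The key point is the update: D is evaluated at each grid point from the local concentration. The intrinsic diffusivity, ni value, and the D-proportional-to-n-over-ni model are illustrative stand-ins, not calibrated numbers.

```python
import numpy as np

def diffuse(c, D_of_c, dx, dt, steps):
    """Explicit finite differences for dc/dt = d/dx( D(c) dc/dx ).
    D is re-evaluated locally every step, so it stays inside the
    derivative. End cells are held fixed (simple Dirichlet ends).
    Stable for dt < dx**2 / (2 * max D)."""
    c = c.copy()
    for _ in range(steps):
        D = D_of_c(c)
        Dmid = 0.5 * (D[1:] + D[:-1])          # D on each cell face
        F = -Dmid * np.diff(c) / dx            # Fick's first law per face
        c[1:-1] += dt / dx * (F[:-1] - F[1:])  # net flow into each box
    return c

ni = 7e18                                      # ni near 1000 C (cm^-3), rough
Di = 1.4e-15                                   # illustrative intrinsic D (cm^2/s)
D_As = lambda c: Di * np.maximum(c / ni, 1.0)  # D ~ n/ni above ni, constant below

x = np.linspace(0, 0.2e-4, 201)                # 0.2 um deep grid, in cm
c0 = np.where(x < 0.05e-4, 2e20, 1e15)         # box-like high-concentration source
c1 = diffuse(c0, D_As, x[1] - x[0], 0.05, 2000)
```

Run on a box-shaped high-concentration profile like this, it produces exactly the flat-topped, steep-fronted shapes on the slide, because D collapses by more than an order of magnitude as the concentration falls toward ni at the diffusion front.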
We saw several chapters ago that if I move the Fermi level up and down the band gap--if I increase the dopant concentration above ni--I change the point defect populations dramatically. In fact, you had a homework problem that was due last week. You saw that when you went to a high concentration, the total concentration of vacancies went up, because some of the charged vacancy populations went up. OK. More vacancies: if you need vacancies for diffusion, it makes sense the diffusivity would go up. So basically, we saw that charged point defects obey the same statistics as shallow donors and acceptors. In fact, in Chapter 3 we wrote down equations for the concentration of any charged point defect. Here's the concentration of vacancies that are singly negatively charged. We could write it in terms of the neutral vacancy concentration, which is only a function of temperature, times this exponential of e Fermi minus ev minus--so the distance between the Fermi level here and this defect energy level ev minus--that whole thing over kt. So as I move the Fermi level up and down, I can exponentially increase or decrease that concentration. And we can write the same kind of equations for interstitials. So as you can imagine, we're going to use this idea to say that the diffusivity, which depends directly on these concentrations if you're diffusing with point defects, must then depend on the Fermi level, and use that to explain the concentration dependence. So I just want to derive now, shown on slide 4 of your handouts, an explicit expression for the charged defect concentration, this time in terms of the carrier concentration. On the last slide--if we just go back one second--the carrier concentration is implicit, embedded in here. You don't see it explicitly, but I want to make an expression where we can see the dependence on n over ni directly. And we'll do this for a particular case and then generalize it. So this is a little bit hard to see--the font size is a little small. But what this is saying, from the last slide, is that the concentration cv plus is equal to the concentration of neutral vacancies times this exponential, ev plus minus e Fermi over kt. And this is our band diagram. Remember, e sub v here is the valence band position, and ec is the conduction band. Unfortunately, the notation is awfully similar: ev plus is the energy level in the band gap of the singly positively charged vacancy. It's not the valence band; the valence band energy is just e sub v. The mid-gap position here is called ei, which is shown by this dashed line. The Fermi level in this particular example is given right here by ef. So those are all the relevant energies. So I want to take this ev plus minus e Fermi and expand it out in terms of some other relevant quantities. This ev plus minus e Fermi is just the distance from this point here to here, so it's equal to this value, a. But you can also write a as the sum d plus c minus b. Just add up d plus c, subtract b, and that's equal to the energy distance a. And you can take each one of these terms, d, c, and b, and write down quantities for them. For example, d, this distance right here, is just ei minus e Fermi, OK? So I can write d like that.
c is just the valence band energy ev minus ei, so we've written that. And subtracting off b: b is just ev, the valence band energy, minus ev plus. So if you want to look at it in terms of those distances, I think that helps. You can also recognize that mathematically, all we really did in this equation is add and subtract the same quantities, so we haven't changed anything, but we've rearranged the terms in a way that's going to become useful. Then you can rearrange this further, substituting an eg over 2 expression for ei. So it's the same quantity, just rearranged. So if I substitute this expression into the numerator of the argument of this exponential, this is what we get: cv plus is the concentration of neutral vacancies times this exponential, times another somewhat more complicated looking exponential. So we've just rearranged these energy quantities. But I've done it in a particular way. The reason we factored out this ei minus ef is because the exponential of that is directly related to n over ni. That's why we wrote it in terms of this expression. Because, in fact, you know that the electron concentration divided by ni depends exponentially on the distance between the Fermi level and ei. That's something we learned in Chapter 1. So the exponential of that over kt is just n over ni, and I have something in this equation that looks just like that. In fact, if I invert it, ni over n is just the exponential of ei minus e Fermi over kt. So this exponential right here, I'm going to be able to replace with a term ni over n. In fact, that's exactly what we do here. In this equation that's in the red box, we can then write the concentration of v plus as ni over n, times the concentration of neutral vacancies, times one more exponential factor--this e to the minus everything in curly brackets. Well, let's look at the numerator of what's in the curly brackets. If I expand this out--there's a negative sign in here, and I've used this simple mathematical rearrangement--it's eg over 2, plus ev plus, minus the valence band energy. Just rearranging, that's ev plus minus the quantity in parentheses: the valence band energy plus eg over 2. Well, what's in parentheses is just the definition of the mid-gap point ei, the intrinsic energy. So this numerator in the exponential can be written as ev plus minus ei. And then the neutral vacancy concentration times the exponential of that whole thing over kt is just equal to, essentially, the intrinsic concentration of v plus, of positively charged vacancies. So basically, this expression in the rectangular red box shows me that I can write the concentration of v plus under extrinsic conditions as the concentration of v plus under intrinsic conditions times ni over n. So I immediately have factored out the dopant dependence. And if I substitute in here n equals ni, this term just goes to 1, and the concentration is just the intrinsic concentration. But if I were to pump n way up--let's say I make n very large--this number becomes very small, and the concentration of v plus goes down.
Alternatively, I can make n very small by going to very heavily p-type material. Making n very small pumps up this whole quantity, so in p-type material, this concentration is going to be very high. It's the exact same physics as the homework problem where you were manipulating the energy levels and subtracting all these energy differences in the band gap. The only difference is, now we have a more convenient way of remembering it and writing it, in terms of the ratio of the carrier concentration to the intrinsic concentration. And that's very convenient for thinking about these concentration dependent effects. So if we go on to slide number 5: basically, what I've just written down is that we can write cv plus as ni over n times the concentration under intrinsic conditions. Or, if you find it a little easier to think in terms of p over ni rather than ni over n--well, you know what n is, right? The pn product is always equal to ni squared, so you can write n as ni squared over p. So if I substitute ni squared over p for the n in the denominator of this expression, what I end up with is p over ni times the intrinsic concentration. So clearly, cv plus goes up in heavily p-type material. That's saying mathematically what we knew intuitively: ev plus is down here, and when I make the material very heavily p-type, I bring the Fermi level down towards it. As that distance decreases, I'm going to pump up the concentration of v plus vacancies--and in fact, it's directly proportional to p over ni. Similarly, for the doubly positively charged vacancy, cv plus plus goes as p over ni squared times the intrinsic concentration--it turns out there's a squared quantity in there. cv minus, as we just derived, is n over ni times the intrinsic concentration of v minus, and cv double minus similarly depends on n over ni squared. So in general, using this derivation, you can write a general rule: the concentration of a vacancy in any charge state r--r could be zero for neutral, plus 1, minus 1, and so on--under extrinsic conditions is just n over ni to the minus r, times the concentration of that species under intrinsic conditions. So all this generalized rule is saying is that the concentrations of charged point defects, and of course the total point defect populations, increase or decrease in direct proportion to powers of n over ni. So with all this mathematical manipulation, what it boils down to physically is this: if the dopants diffuse using these point defects, the charged vacancies or interstitials, then the diffusivity of the pair--that is, of the dopant and the charged vacancy or interstitial--is proportional to the point defect concentration, and the total diffusivity will follow these same trends. So as the concentration of vacancies goes up because I'm moving the Fermi level up very high--say I get a lot more of these cv minuses, so the total concentration of vacancies goes up--if I have a diffuser like antimony that diffuses with vacancies, then you expect its diffusivity to be enhanced, because it has more vacancies around to diffuse with. And it should be enhanced according to this n over ni expression.
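That general rule is compact enough to write down directly. Here is a one-line Python version, just to make the sign convention explicit (r is the charge state; c_star is the intrinsic concentration of that species; the numbers in the example are arbitrary):

```python
def charged_defect_conc(c_star, r, n, ni):
    """Extrinsic concentration of a point defect with charge state r
    (r = -2, -1, 0, +1, ...), from the lecture's general rule:
    C^r = C^r(intrinsic) * (n / ni) ** (-r)."""
    return c_star * (n / ni) ** (-r)

# Example: heavily n-type, n = 10 * ni.
# V-- (r = -2) enhanced 100x, V- (r = -1) enhanced 10x,
# neutral (r = 0) unchanged, V+ (r = +1) suppressed 10x.
for r in (-2, -1, 0, +1):
    print(r, charged_defect_conc(1.0, r, n=10.0, ni=1.0))
```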
So the higher I make n over ni, the more of these vacancies there are, and the more the diffusion coefficient should go up. That's the general argument. So what evidence do we have of this? Let me show you some experimental data. It doesn't tell you whether vacancies or interstitials are involved, but it at least indicates that these Fermi level dependencies make sense. And here are some experiments called isoconcentration experiments--you'll see in a moment why we call them that--indicating the dependence of the diffusion coefficient on the concentration. You might have boron 10 diffusing in a boron 11 background, for example. So let's take a look at what we mean by this. This is an isoconcentration diffusion experiment. If we were to plot concentration of a species as a function of depth, what we do is put two species in the sample. The first one is a background species, which I've shown here by this orange box, this constant concentration profile in this region. So this could be my boron 11. It sets the Fermi level to a constant value in this one region, because it's a high concentration of p-type dopant, so it defines p over ni. And then underneath that, at a smaller concentration, I put boron 10, which is the dopant whose diffusion coefficient I want to study. So I do a diffusion experiment where I start with an initial Gaussian profile of boron 10 and let it diffuse, all of it inside a high, constant background concentration of boron 11. I measure the diffusivity. Now I go and take another sample, and this time I put in a higher concentration of boron 11, again constant in space but higher than in the previous sample. And what you see is that the diffusion of the low concentration dopant is more enhanced now. And you keep doing this--you keep putting in this box-like background profile at higher and higher concentrations, and for each sample you measure the diffusivity. The nice thing about this is that the profiles remain Gaussian, because in space, the diffusivity of the dopant in any one of these regions is still a constant, as long as you stay inside the orange box. It's just higher than it would be in the absence of the background dopant. So it makes analysis of the profiles a lot easier--you still get Gaussian diffusion. So people have done these types of background concentration studies, where they move the Fermi level up and down with another species. In fact, here is some data I took from the literature, on slide number 7, from Mark Law's work published back in 1993--about 10 years old now--where they did just that. They had a background of n-type doping that they created with, say, arsenic, and then they looked at diffusion of phosphorus underneath it. The Fermi level was constant in each one of these samples. These triangles show the data: the background electron concentration went from 10 to the 19th here all the way up to the low 10 to the 20s. And then they extracted the diffusivity of the phosphorus as a function of the background electron concentration. And in the experimental values, you see the diffusivity going up very rapidly once you get above about mid 10 to the 18th. This was at 1,000 degrees. Well, what is ni equal to at 1,000 degrees? Maybe you remember from the homework, roughly? About mid 10 to the 18th, in that range.
So you see, that's exactly what's happening: below that, the diffusivity is a constant. And this is the calculated theoretical line, where they used this type of expression: the effective diffusivity is given by a constant d0, plus d minus times n over ni, plus d double minus times n over ni squared. So there are three terms here. And what this equation tells you is, if I plug in n equals ni, it becomes a constant, OK? At ni or below, this whole thing goes to a constant number. And indeed, if you go below ni, the diffusivity approaches a constant here; it looks like it's about 1 and 1/2 times 10 to the minus 14th. As I crank up n over ni, this d minus term right here starts to take over, and you see an increase. And as I crank n over ni up even higher--see, here I get a factor of 10 or 20--this n over ni squared term kicks in, and indeed, you see this square dependence. So this is based on experimental data, and it fits this type of empirical expression quite well. Now, what are these numbers? What is the meaning of d zero, d minus, or d double minus? Well, we don't really know the meaning directly. But the meaning we assign is this: we presume that the coefficient d minus has something to do with the diffusivity of the pair of the dopant with the singly negatively charged point defect, whatever it is. So it could be the phosphorus pairing with v minus; that diffusivity value has to do with that pair diffusivity. This term here has to do with the pairing of the dopant with v double minus--or, if you believe in interstitials, with i double minus, whichever. These experiments don't tell you whether it's a vacancy or an interstitial enhancing the diffusion. All they say is that there is some point defect whose concentration depends on the Fermi level, because it's charged. So as I move the Fermi level up by increasing n over ni, the overall diffusivity goes up according to n over ni. That's basically what it tells us. So that's one type of experiment people have used to observe these so-called Fermi level effects. Based on these experiments, as shown on slide number 8, we write the diffusivity in terms of the local carrier concentration at each point in space in the sample. Here's an example. Equation 2 shows you the case for an n-type dopant. A convenient way to write it: the effective diffusivity of a--a could be arsenic, or phosphorus, or whatever--is a constant d0, plus d minus times n over ni, plus d double minus times n over ni squared. And we write a very analogous expression for p-type dopants, but now the dependence is on p over ni. So presumably, this d plus term refers to the diffusion of the dopant, maybe boron, with a singly positively charged vacancy, for example. So as I increase p over ni, that v plus concentration goes way up, and so does the effective diffusivity. So again, we talked about what each one of these d's corresponds to. And if I'm under intrinsic conditions with an n-type dopant--where intrinsic means p equals n equals ni, that's the definition of intrinsic--I just substitute ni for n, and you see that the diffusivity is indeed a constant. It's the sum of three numbers, but it's a constant number. It doesn't depend on the local concentration when you're intrinsic.
And each individual diffusivity--this d to the r power, though it's not really a power, it's just a superscripted symbol that tells you the charge of the point defect--is exponentially activated. You can write it in Arrhenius fashion: d is equal to d dot zero, the pre-exponential, times e to the minus d dot e over kt, where d dot e is the activation energy. This is the SUPREM notation--you should learn it because you'll use it in your homework. So each one of these terms is exponentially activated. Sometimes this equation, equation number 2 here, is rewritten in a slightly different fashion. People like to talk about these parameters beta and gamma. Beta is defined as the ratio of d minus to d zero, and gamma is defined as the ratio of d double minus to d zero. When I make that definition, I can rewrite equation 2 in this fashion: the effective diffusivity is da star--the diffusivity under intrinsic conditions, which you factor out--times 1 plus beta n over ni plus gamma n over ni squared, divided by 1 plus beta plus gamma. So it's just another mathematical formulation. People sometimes like to think about these beta and gamma terms, the ratios of the diffusivities, rather than the absolute numbers. For a p-type dopant, you would just replace the n by p in this equation, and beta and gamma are redefined according to the appropriate positively charged diffusivities. So again, this is the, quote unquote, "Fermi model." When you run SUPREM IV, as you'll do for your next homework set, and you use the Fermi model, you're invoking this type of concentration dependence of the dopant diffusivity. So if we go on to slide 11, I've taken these quantities from Table 7-5 in the text. This is the way we write the diffusivity, and these are the quantities in that equation: the pre-exponential factor, d dot zero, and the activation energy. The first rows refer to the d zero term, both the pre-exponential and the activation energy, and the later rows refer to the charged terms, down to the double minus term. So just from looking at this chart, let's take the case of arsenic. These are the numbers I took out of your text, and some of them are also used in SUPREM. People have fit data for arsenic; what can you say about the different types of point defects that arsenic supposedly diffuses with? If there's no number in the chart, then nothing has been observed for that term. So arsenic diffuses with what types of point defects? Singly negatively charged and neutral, just because there's nothing else filled in in the chart. So people have observed for arsenic primarily an n over ni dependence. How about phosphorus? What does it diffuse with, based on this chart? You've got neutral, singly, and doubly negatively charged. So there are three terms. And I showed you that when we saw Mark Law's data back on slide 7--he fit the phosphorus data to a three-term type of model. These three terms come from experiments like the one he did. OK. So these are the Fermi models for extrinsic diffusion.
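Since the beta-gamma form is nothing more than a reparameterization of equation 2, a quick numerical check is a good way to convince yourself of that. The coefficients below are arbitrary illustrative numbers:

```python
def d_eff_terms(d0, dm, dmm, nhat):
    """Three-term Fermi model: D = d0 + d-*(n/ni) + d--*(n/ni)**2."""
    return d0 + dm * nhat + dmm * nhat ** 2

def d_eff_beta_gamma(d_star, beta, gamma, nhat):
    """Same model with beta = d-/d0, gamma = d--/d0 and the intrinsic
    diffusivity d_star = d0 + d- + d-- factored out in front."""
    return d_star * (1 + beta * nhat + gamma * nhat ** 2) / (1 + beta + gamma)

d0, dm, dmm = 1.0e-15, 2.0e-15, 5.0e-16    # illustrative coefficients (cm^2/s)
beta, gamma = dm / d0, dmm / d0
for nhat in (0.1, 1.0, 10.0):              # n/ni below, at, and above intrinsic
    a = d_eff_terms(d0, dm, dmm, nhat)
    b = d_eff_beta_gamma(d0 + dm + dmm, beta, gamma, nhat)
    assert abs(a - b) < 1e-12 * a          # the two forms are identical
```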
So when n is greater than ni, or when p is greater than ni, we use these types of equations, and this is what's used in your process simulator, SUPREM IV. OK. So on slide number 10, I just want to go over a little example to give you a feel for how these numbers work. The example asks you to calculate the effective diffusion coefficient at 1,000 degrees for two different box-shaped arsenic profiles. One is doped at 10 to the 18th, and the other is doped at 10 to the 20. The first thing you need to do in any of these calculations is figure out, for the temperature you're at, what ni is, because everything depends on it. If you're at ni or below, then you have a constant diffusivity, and you just plug in n equals ni, right? If you're above ni, then n is essentially equal to the dopant concentration, because the carrier concentration is being controlled extrinsically--the carriers come primarily from the dopant. So in this example, we calculate ni to be about 7 times 10 to the 18th. If I have a dopant profile at 1 E18, well, that's much less than ni. Then what is n? n is equal to ni, the intrinsic carrier concentration, because that's just due to the thermal breaking of bonds. So you substitute in d0 and d minus, the numbers we get from the table on the prior slide. And again, n over ni is 1, so the d minus term is just multiplied by 1. Add those up, and you get a number that's about 1.4 times 10 to the minus 15th centimeters squared per second. Again, we should be aware of the units. So you say, OK, when I use the table and the two-term model, this is what I get. Now, a sanity check on that value: earlier in Chapter 7, we talked about intrinsic diffusion, and we didn't give a two-term model. We just gave a constant diffusivity, obtained by fitting a single activation energy. So the question is, when I use this two-term model, how does my number compare to the single-term model shown in Table 7-3? In Table 7-3, you were given this exponential dependence: the averaged activation energy was 3.99, and the pre-exponential was 9.17. When you calculate that out at 1,000 degrees, you get about 1.48 times 10 to the minus 15th--very, very close. So as a sanity check, this two-term model is not screwing up your intrinsic diffusion coefficient. You get essentially what you would have gotten from the simpler calculation in Table 7-3. But now let's do the next part: how about the case where the profile is at 1 E20? Well, at 1 E20, you know n is much, much greater than ni. So n over ni is large, which means the concentration of the singly negatively charged point defect, cv minus, goes way up, and the diffusivity should be enhanced. So now solve that same equation. The first term, the d zero term, doesn't change, because it's not multiplied by anything. But the d minus term is now multiplied by n over ni, which is a big number: 10 to the 20 over 7 times 10 to the 18th--a factor of 14 or so, more than an order of magnitude. So you're really pumping up this term. So now what do you get? 1.6 times 10 to the minus 14th. So the highly doped layer has a roughly 10-fold higher diffusion coefficient in the extrinsic material; it'll be diffusing with a diffusivity that's 10 times greater.
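For reference, here is that worked example as a few lines of Python. The d0 and d minus values are back-solved so the two quoted answers come out (about 1.4 times 10 to the minus 15th intrinsic, 1.6 times 10 to the minus 14th at 1 E20); they are illustrative, not the official Table 7-5 entries.

```python
ni = 7e18      # intrinsic carrier concentration near 1000 C (cm^-3)
d0 = 3.0e-16   # neutral-defect term (cm^2/s) -- back-solved, illustrative
dm = 1.1e-15   # single-minus term (cm^2/s)  -- back-solved, illustrative

def d_eff_arsenic(Nd):
    """Two-term Fermi model for an n-type dopant: D = d0 + d-*(n/ni)."""
    n = max(Nd, ni)            # extrinsic: n ~ Nd; intrinsic: n = ni
    return d0 + dm * (n / ni)

print(d_eff_arsenic(1e18))     # ~1.4e-15 cm^2/s: the box at 1e18 is intrinsic
print(d_eff_arsenic(1e20))     # ~1.6e-14 cm^2/s: ~10x faster at 1e20
```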
And that's exactly how these calculations go. Now, if the concentration is changing in space, then at every single point along the profile, the computer has to keep track of the diffusivity value and apply the correct one when it's calculating the diffusion profile. OK. So let's go on to slide 12. The practical consequence of this is that the profiles are very steep--they're flat-topped, and they fall off rapidly. And why is that? Well, if I'm walking my way along this profile, first of all, up here I'm above ni by over a factor of 10, and the diffusivity is a large number. So when you're well above ni, you're going to be diffusing very fast, and these dopants get over to the right very quickly. But as the concentration starts to drop at the diffusion front, as I get close to n equals ni, the diffusivity is dropping like a rock, right? Because n over ni is going down as I move down this profile--in fact, n over ni is only 1 down here. So the diffusivity is falling off very rapidly at the diffusion front, and that tends to make a very sharp, abrupt profile there: as you walk down, your diffusion coefficient is going way down, everything is slowing down. So you get these box-like profiles. This box-like profile originates from--again, the assumption was a constant surface concentration of arsenic of about 5 times 10 to the 20. Instead of getting the complementary error function, which is here, you get a much more box-like profile. And that's exactly because of the concentration dependence of the diffusivity. You observe these all the time when you do arsenic source/drains. They never look like a complementary error function--always very box-like, and that's why. Besides source/drains--we always talk about MOSFETs--if you're doing bipolar technology, making NPN structures, you see exactly the same issues. And here's an example. On the left, I'm showing a starting structure for an NPN bipolar transistor. This is a plot on a semi-log scale of concentration as a function of distance into the device. The collector down here is lightly doped, around 10 to the 16th, n-type with phosphorus. Here's the base. We're assuming the base was grown by epitaxial crystal growth--we'll talk later in the course about what that is--but it has a very abrupt, constant doping profile, about 1,000 angstroms thick. And then you deposit a layer of n-plus polysilicon on top, and that gives you this high arsenic concentration. Now, what you're interested in is annealing this at 1,000 degrees for 30 minutes to drive the arsenic a little way into the epi, into the base, and seeing what the profile looks like. On the right, I'm showing the SUPREM simulation, which includes both the electric field effects that Maggie talked about last time and this concentration, or Fermi level, effect.
And you can see what that looks like. Here again is concentration versus distance. Look at the arsenic profile. Here it is in the poly, relatively constant. It is indeed very box-like--it doesn't look like a complementary error function. It's almost constant, and then shuts off very quickly right here. That's due to the concentration dependent diffusivity. Here's the boron profile. It's kind of Gaussian-ish, sort of, but it has this little dip right here, this divot, right where the pn junction, the metallurgical junction, happens. And in fact, this little divot is due to the electric field effect, which significantly impacts the boron profile near the junction. Remember, the electric field here is being generated mostly by this rapidly decreasing arsenic concentration, and the boron feels that, and its diffusion gets modified. There's no way you would come up with anything like that calculating by hand. But this is exactly what is simulated in the numerical simulator, and this is what people measure by SIMS. So you really need numerical simulation to accurately model modern devices, be they bipolar, MOSFETs, or any other type of device, because of these high concentration effects. OK. So we've talked about the electric field effect and the high concentration effect. There's another effect, when we get to an interface, that we're going to need to take into account, and it's going to be important in determining practical profiles. It's called segregation. We know that dopants have different solubilities in different materials. So let's say I have a dopant in silicon, and it's coming up against an oxide layer right next to it. It could be any two layers next to each other--silicon and silicon nitride, whatever. Because the dopant has different solubilities in the two materials, it tends to redistribute across the interface until something called the chemical potential is the same on both sides. And basically, the ratio of the equilibrium dopant concentrations on the two sides of the interface is the segregation coefficient. We've already seen segregation coefficients. We talked about them in Chapter 3 on crystal growth. There, we defined a segregation coefficient k0 with respect to crystal growth as the concentration of the dopant, say the boron or the arsenic, in the solid, divided by the concentration in the liquid phase. There we had an interface where both sides were the same material, silicon--just liquid on one side and solid on the other. We can do the exact same thing in general between two materials, material A and material B. The segregation coefficient k0--the subscript may indicate which two materials, maybe from silicon to silicon dioxide, whatever--is in general the concentration in material B divided by the concentration in material A, just the ratio. So given long enough time, if you were to put some dopant in this material, heat it up, and let it move around, it's going to arrange itself so that the ratio of the concentrations on either side of the interface is exactly k0. If k0 is 1, it will arrange itself to have the same concentration on either side of the interface. If k0 is 10, it's going to want a factor of 10 difference in concentration going from material B to material A.
So this difference in solubility causes this equilibrium segregation. OK. When I'm calculating in SUPREM, what's the boundary condition at the interface--what is the interface flux? Well, if you're inside a given material, you saw last time in the finite difference case that you can use the neighboring concentrations to figure out a flux to either side. It's a little bit different if you have an interface. What you write is that right at this interface between A and B, the flux F to the right is equal to--now we don't use a diffusion coefficient--an interface transport coefficient h, which has units of length per unit time, for example centimeters per second, times the quantity ca minus cb over k naught, where k0 is defined as before. h gives you an idea how fast this thing is going to reach its equilibrium concentration. So just look at this flux equation for a moment. What is it saying? It says that if h is some reasonable value, and ca, the concentration on this side, is much different from cb over k0, then this difference is going to be a large number, and there's going to be a big flux. Flux will flow, basically, until ca approaches cb over k0, and then the flux goes to zero. So it forces the concentration profile across that interface to be pegged at a ratio equal to k0. And how rapidly it approaches that equilibrium is set by the h value you use, because at each time step, you have a flux equal to h times ca minus cb over k0. If h is very small, it'll take a long time to reach the equilibrium segregated condition. If h is very high, it reaches it very quickly. So h is a measure of how easily the species is transported across the interface. A very common thing you will see happening when you anneal wafers, or when you grow oxides, is segregation at the interface between silicon and silicon dioxide. So let's look at that case. By the way, I should say that measuring segregation coefficients--the ratio of some species on one side of an interface to the other--sounds trivial, but in practice it's very hard. You might say, well, why don't you just use SIMS? Just profile through the sample and measure the concentration on this side and on that side. Well, a lot of techniques like SIMS don't perform very well near an interface. As you get close to the interface, you're changing the material, and the sputter rate starts to change. All the assumptions you need to make in SIMS tend to be degraded to a certain extent. So you would think we have perfect numbers for this, but we don't. We have rough numbers. And these are the rough numbers that are in SUPREM. The k values, of course, are adjustable, and you can adjust them to fit whatever your experiments show. But k0, the ratio of the concentration of the dopant in silicon to that in silicon dioxide: for boron, it's less than 1--it's 0.3. What that means is that boron wants to go into the oxide; the concentration in the silicon will be lower. So boron tends to be depleted in the silicon and higher in the oxide. Arsenic and the other n-type dopants are just the opposite. They tend to segregate into the silicon.
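Here is a minimal sketch of how that interface flux boundary condition plugs into a time-stepped simulation, using the boron value just quoted. The h, time step, and 10 nm cell widths are illustrative numbers only; the only physics is the flux equation itself.

```python
def interface_step(cA, cB, h, k0, dt, dxA, dxB):
    """One explicit time step of the interface boundary condition
    F = h * (cA - cB / k0), where k0 = cB/cA at equilibrium and h (cm/s)
    sets how fast the two cells bordering the interface relax."""
    F = h * (cA - cB / k0)   # net flux (atoms/cm^2/s) from side A into side B
    cA -= F * dt / dxA       # concentration change = transferred dose / cell width
    cB += F * dt / dxB
    return cA, cB

# Boron at an SiO2/Si interface: A = oxide, B = silicon, k0 = c_Si/c_ox = 0.3.
cA, cB = 0.0, 1e18           # all the boron starts on the silicon side
for _ in range(200_000):
    cA, cB = interface_step(cA, cB, h=1e-7, k0=0.3, dt=1e-3, dxA=1e-6, dxB=1e-6)
print(cB / cA)               # -> 0.3: boron depleted in the silicon, piled in oxide
```

Whatever h you pick, the ratio cb over ca relaxes toward k0; h only sets how fast it gets there.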
The n-type dopants pile up on the silicon side, and they're lower in the oxide, with a ratio of 10. Of course, it also depends on temperature, so you need to take that into account. In fact, if we go to slide 16, we can see an example of this. These are some SUPREM simulations. In the upper left, you're seeing the simulation of the oxidation of a uniformly doped substrate. Initially, the concentration of boron was the same, say 10 to the 18th, throughout the entire wafer, from the surface all the way through. You take that, put it in a furnace, and oxidize it. Well, you can see that where the oxide was grown, all of a sudden the boron has a profile to it. It's a little hard to tell the profile from these contours--each color represents a different boron concentration--but if you take a cut right through the center, this is what the boron looks like. There's a certain concentration of boron in the oxide here, around 6 or 7 times 10 to the 17th. Then there's a drop--a factor of 3 drop, because, again, we said the segregation coefficient was about 0.3--and the boron is depleted somewhat in the silicon. It's come down in the silicon because there's been an interface flux. The concentration originally in the oxide was 0, and the concentration in the bulk here was about 10 to the 18th. So there's been a flux from the silicon into the oxide, because in equilibrium, it wants to set the silicon-to-oxide ratio equal to 0.3. And so you see this depleted boron concentration in the silicon. How about n-type dopants? Well, for arsenic and phosphorus we said the segregation coefficient is 10--a number greater than 1. So they actually tend to pile up on the silicon side of the interface, with a low concentration in the oxide. They segregate into the silicon. The arsenic profile looks a little steeper than phosphorus because arsenic has a slower diffusivity--the delivery of arsenic from the bulk to the interface, and its transport across it, is a little bit slower. So there's an example of segregation that happens during an oxidation process. It's incorporated in SUPREM and is very commonly observed. The next interfacial effect I'll talk about is interfacial dopant pileup. Be careful: this is not equilibrium segregation. This gets a little tricky, because we just talked about the doping concentration being different across an interface. This is dopant actually piling up right at the interface, which is a little bit different--it's a monolayer-type effect. Particularly as junctions become shallow, it's observed that some dopants pile up in a very narrow interfacial layer at the interface between oxide and silicon. This pileup is separate from, and larger than, the normal equilibrium segregation that occurs during annealing. Just as an example: if the oxide is to the left and the silicon to the right, normal segregation would give you a little height difference like this. This interfacial pileup gives you a height difference that's even larger, and it's just at the interface, and it integrates to some dose. It may be as thin as a monolayer, but the interface essentially acts as a sink for the dopants, and it can trap up to about 10 to the 15th per square centimeter at that interface. That's a pretty big number.
If I consider that I might only be implanting 2 times 10 to the 15th arsenic, and I implant it right near the surface, a lot of that arsenic gets sucked up into that interface and never becomes electrically active. When people were implanting arsenic deep into the substrate, nobody noticed this, because the concentration near the surface was low. But now that we have shallow implants, a lot of people are finding that their dopants are sucked up into that interface, so they lose a lot of the effective dose they expected to have. So SUPREM IV now includes an interfacial segregation module to model this effect. In fact, slide 18 shows some data of what happens. On the left--this is from Cazenave, IEDM, 1998--is concentration of arsenic as a function of depth. The as-implanted profile is shown here in blue with the boxes, and this is after a rapid thermal anneal, 1,050 degrees for 30 seconds. There was a thin oxide layer on the surface, by the way, that he was using as a cap when he did the anneal, and he stripped the oxide before doing the SIMS. That gives the red profile. If you integrate it, its area, its dose, is only 6 times 10 to the 14th--so 30% of the arsenic was lost. Now, he had put the oxide cap there hoping to keep the arsenic from evaporating. It didn't evaporate; it got stuck at the interface and then was stripped off with the HF. So he prevented evaporation, but the dose loss is a problem. In fact, in the SIMS [INAUDIBLE] at the right, he tried to understand what was going on. If he did not do the HF dip after the rapid thermal anneal, he saw this red profile: right near the interface--and again, SIMS doesn't resolve the interface very well--he saw a pileup in that interfacial region. That's where all the extra dose went. When you strip the oxide, everything that piled up at the interface is gone, and you end up with a much lower dose. It's not very well understood. But you can imagine this is very important for source/drains. If we go on to slide number 19: right near the channel, we implant these very shallow junctions called the source and drain extensions, sometimes called the tips. These junctions today are on the order of 500 angstroms or less, maybe 300 angstroms. So you need a very shallow implant right near the surface. And lo and behold, what's above that implant, of course, is an oxide. So you have an interface between silicon and oxide, with a lot of dopant right underneath it. So you will go ahead and design your source/drain extension to give you the right sheet resistance, and then you go measure it and find out it's twice what you designed for, because half your dose has segregated to that interface, where it's not electrically active. It sounds like a subtle effect, but it's actually extremely important, particularly when you need low sheet resistance contacts to the channel. It's very annoying that half of your dopant gets sucked up by that oxide. It's just a fact of life and something you need to take into account. SUPREM does have empirical models; of course, you have to adjust the coefficients to fit your particular data. OK. So those are some special effects I wanted to include when we talk about dopant diffusion.
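Before moving on, one quick practical note: the dose bookkeeping in that experiment is just a numerical integral of the SIMS profile. A short sketch, with a made-up Gaussian standing in for the real data:

```python
import numpy as np

def dose(depth_cm, conc_cm3):
    """Integrate a SIMS profile (concentration vs depth) to get the
    areal dose in cm^-2 -- a simple trapezoidal sum."""
    return np.trapz(conc_cm3, depth_cm)

# Hypothetical profiles standing in for the as-implanted and annealed SIMS data:
x = np.linspace(0, 60e-7, 300)                 # 0-60 nm, in cm
as_implanted = 1e21 * np.exp(-((x - 15e-7) / 8e-7) ** 2)
annealed = 0.7 * as_implanted                  # pretend 30% went to the interface
lost = 1 - dose(x, annealed) / dose(x, as_implanted)
print(f"dose loss: {lost:.0%}")                # -> 30%
```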
And now, I want to go on and talk about an atomic scale model. Everything we've talked about so far--except for several lectures ago--has been macroscopic diffusion. We defined Fick's law, then added electric field effects and Fermi level effects. But a lot of effects, especially ones like OED, oxidation enhanced diffusion, and TED, transient enhanced diffusion, are action at a distance. They're very important experimentally, but they cannot be explained by these simple macroscopic models. So we really want to look at dopant diffusion, as best we can, at the atomic scale. And slide 21 shows a way of doing that. There are two different mechanisms pictured on this slide. One we've hinted at but haven't really gone through the specifics of: the vacancy assisted mechanism. Imagine this chemical equation: I have a dopant A--it could be arsenic, boron, whatever--and it pairs with a vacancy right nearby to form an AV pair. What I mean by a pair is that they stay within a lattice constant or two of each other, and they move as a pair through the lattice. So how can that happen? Well, imagine this is my first time step. Here's a vacancy right here. Here's a dopant atom, pictured in the light color, and the silicon atoms are all dark. The dopant may exchange sites with the vacancy: the dopant moves to the vacancy site, and the vacancy moves up there. All right, fine. But if they just switch back, they haven't moved anywhere--they're just switching back and forth, and that doesn't do you any good. But imagine the vacancy then moves off on its own for a second: it exchanges with a neighboring silicon atom, so now the vacancy is sitting here and the silicon atom there, and the vacancy can move over here. All of a sudden, the vacancy is sitting over here, again next to that dopant atom, and now it can exchange with the dopant again. So just by moving the vacancy in a ring around the dopant atom and exchanging sites with it, the vacancy and the dopant can essentially move as a pair gradually through the lattice. And it's much easier to do that than if there were no vacancy there, where every time the dopant had to move, it had to break bonds. You don't need to do that when you have a vacancy. So this pairing with vacancies is believed to be a very efficient mechanism for dopants to move in silicon. You can do something similar with interstitials--the interstitial- or interstitialcy-assisted mechanisms. Here again I have a chemical equation: dopant A plus an interstitial forms an AI pair, and that assists diffusion. Here again, my pink atom is the dopant. Here is an interstitial. It can come along, kick the dopant off a lattice site, make it interstitial, and get it moving that way. Or here's an interstitialcy--what is that? It's a silicon atom sharing a lattice site with another silicon atom. It can then start to share a site with the dopant atom, and the dopant atom can move along and share with the next bonded silicon atom. So the dopant can move along, perhaps in the bond direction, as an interstitialcy--two atoms sharing the same lattice site.
So either excess interstitials hanging around, or vacancies hanging around-- either one, these point defects can assist with the motion of the dopant in the lattice. So we're going to make some inferences about mechanisms. I think we talked about this a little bit last time, or a couple of lectures ago, when we talked about stacking faults and oxidation. This is a picture of local oxidation. So over on the right, I'm having oxidation take place. On the left, I'm underneath a nitride, so there's no oxidation. And what people see is that deep in the substrate, underneath where you're oxidizing, you see these oxidation induced stacking faults. Remember, we said they grow underneath the region where there's oxidation. In the region where there's no oxidation, they don't; they stay constant. So it's believed that oxidation injects interstitials from the surface into the bulk, which aids the growth of stacking faults, and also can enhance the diffusion of dopants like boron.

Now similarly, it's also been found, if I took this starting wafer and instead of putting it in an oxidizing furnace and growing an oxide, I put it in a furnace with ammonia, I would grow silicon nitride over here in this region on the right. So I'm nitriding. I'm thermally nitriding. I'm reacting silicon with ammonia. People have found that this actually has the opposite effect: boron diffusion is retarded, and stacking faults actually shrink. So people believe that thermal nitridation injects vacancies-- by these inferences from the observation of stacking faults and things. So the nice thing now is, if I do an experiment, I can inject interstitials by oxidizing in one region. I can then take the wafer, or another wafer, and put it in a furnace with ammonia and inject vacancies. And I can see what happens to the dopant profiles under these different injection conditions. And then, therefore, decide: does the dopant diffuse faster with interstitials? Well, then, it must tend to diffuse as interstitial pairs. So there's a way to make an inference about the diffusion mechanism using oxidation and nitridation.

So there have been a number of experiments done over the years, the last 20 years, and some of the results are shown here on slide 23. And what people have seen over the years is that boron and phosphorus, and to a certain extent arsenic, have enhanced diffusion coefficients under the influence of thermal oxidation. So during a thermal oxidation, the boron and phosphorus diffusivities tend to go up. Antimony is just the opposite. It slows down compared to inert when you're in an oxidizing condition. So here's just an example of a plot. On the left axis is concentration versus depth. And so here's an example of antimony. Antimony was diffused, and this profile here without any dots on it is the inert case. And if you look at the junction depth for the case where it was diffusing under oxidation, it's actually shallower. So there was less diffusion of the antimony when you diffused it in the furnace-- the same temperature, but under an oxidizing ambient. Boron is just the opposite. Look at boron. This is the inert case for boron. The junction depth here is about 0.4 microns. When you diffuse it with oxidation going on above it-- again, the oxidation is not touching the boron, it's taking place up high in the sample-- the boron has a junction depth of about 0.8. So it's dramatically enhanced.
The idea is that oxidation increases the concentration of silicon interstitials, CI, and decreases CV from their equilibrium values. So from this, I would conclude that boron diffuses with I, with interstitials, because when I put more I in the sample, it goes faster. And antimony diffuses probably more with vacancies. Because when I decrease the vacancy population by injecting a lot of excess interstitials, antimony slows down. So antimony must favor diffusion with vacancies. So it's by these types of experiments-- injecting interstitials with oxidation, or thermal nitridation to inject vacancies-- that people try to figure out what the mechanism of the dopant diffusion is.

So let's go on to slide 24. And in fact, the interesting thing is, the injected interstitial level depends on the generation rate at the very interface between the oxide and the silicon, and the recombination rate at that interface. So there's a certain generation rate of these interstitials, and a certain number recombine. Those that don't recombine go into the bulk. And in fact, the concentration of excess interstitials depends on the oxidation rate. That's what people find, which is interesting. So if I oxidize faster at a given temperature, I'm going to get a higher interstitial concentration. So for example, if I plot the interstitial supersaturation ratio-- and this is the ratio of CI divided by CI star, where CI star means the concentration of interstitials in equilibrium, that's in a neutral ambient without any perturbation due to oxidation-- and I look at this ratio, the dashed line is for wet O2, and the solid is for dry. Well, we know the oxidation rate in wet O2 is a lot faster. You see, the whole dashed line is higher. So if I really, really wanted to enhance the diffusion of boron, what would I do? I could put it in the substrate, and I would subject the substrate to wet oxidation at a given temperature. And that would really boost it up. I could make the boron diffuse a lot faster. And generally, you want to slow things down. So as you can see, this depends primarily upon temperature. There's a little influence of wet versus dry. But the big influence here is temperature. And the interstitial supersaturation ratio is much larger at low oxidation temperatures. That's because CI star is going down rapidly while you continue to inject a lot of interstitials, CI. So we expect the enhancement in diffusivity to be small at high temperatures, like at 1,200, where the supersaturation ratio is only a factor of 2, but to be large at low temperature, say 800-- very large, where you can get ratios of 10, or 100. 100 times faster diffusion than you would get under equilibrium, non-oxidizing conditions. So this tells us where OED, or ORD, is going to be most prevalent: at low temperatures.

So how do people model the interstitial and vacancy components of diffusion? Well, here-- again, I just want to show you some experimental data, some SIMS plots with both arsenic and antimony in the sample at the same time. So the red here is shown under inert conditions, so no oxidation. You can see the arsenic is abrupt. If you oxidize it, the arsenic diffuses a little faster. So it's being influenced by the interstitials. Antimony, at the same time under inert conditions, is a little broader. But if you oxidize it, it diffuses less.
So here's OED, oxidation enhanced diffusion, of arsenic taking place at the same time as ORD, oxidation retarded diffusion, of antimony. So both types of point defects, interstitials and vacancies, are important in diffusion in silicon. So what people do-- it's somewhat empirical, but it works-- is to say that the dopants diffuse with a certain fraction, f sub i, of interstitial type diffusion, and a certain fraction f sub v, which is just 1 minus f sub i, of vacancy type. So we're just going to apportion-- for any given dopant we're going to say, well, a certain fraction, the f sub i fraction, is associated with it moving with interstitials. So we write, in a very generalized form, the diffusivity of any dopant A as dA star-- where dA star is the normal equilibrium diffusivity measured under inert conditions, no oxidation, no thermal nitridation, we're not perturbing the surface in any way-- times this quantity: f sub i, which is a number between zero and 1, times ci over ci star, plus f sub v times cv over cv star. Again, the star means equilibrium, no oxidation or nitridation. So you can see I can enhance the diffusivity just by enhancing ci over ci star, assuming f sub i is greater than zero. So if I have a dopant like boron, people believe that f sub i is 1. If I pump up ci over ci star, then I get a great enhancement, proportional to ci over ci star, in the diffusivity. So again, oxidation injects interstitials, so it's going to raise ci over ci star, and it reduces the vacancies-- cv over cv star goes down, by a recombination mechanism. And nitridation does exactly the opposite. So this is the mathematical formulation we can use to express these observations.

And in fact, if you go on to slide 27, people have measured the enhancement of the diffusion, or the retarded diffusion, under different conditions. And these are the f sub i and f sub v values that are roughly in SUPREM. So what do we see? Well, for boron, f sub i is 1. So they're saying it diffuses entirely by interstitials. So that's roughly what people believe. Phosphorus is close to 1. Arsenic, which is our most popular n-type dopant, unfortunately, is mixed. It diffuses both by an interstitial mechanism and by a vacancy mechanism. Antimony is just the opposite, entirely by vacancies. So those are the numbers. Of course, you can modify them at will in the simulator, but these are the ones that are programmed into SUPREM IV.
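As a quick sanity check on this formulation, here is a small sketch that plugs numbers into dA = dA star times (f sub i times ci over ci star, plus f sub v times cv over cv star). The f sub i values mirror the rough SUPREM IV numbers just quoted (the 0.9 and 0.5 are illustrative choices consistent with "close to 1" and "mixed"); the assumption that cv over cv star is the reciprocal of ci over ci star is a crude stand-in for interstitial-vacancy recombination coupling, not something the lecture states.

```python
def diffusivity_ratio(f_i, s_i, s_v=None):
    """D_A / D_A*: enhancement (or retardation) of the dopant diffusivity.

    f_i : fraction of diffusion via interstitial pairs (f_v = 1 - f_i)
    s_i : interstitial supersaturation C_I / C_I*
    s_v : vacancy ratio C_V / C_V*; defaults to 1/s_i, i.e. assuming
          I-V recombination keeps the product C_I * C_V at its equilibrium value
    """
    if s_v is None:
        s_v = 1.0 / s_i
    return f_i * s_i + (1.0 - f_i) * s_v

# Low-temperature oxidation: say C_I/C_I* = 10.
for dopant, f_i in [("B", 1.0), ("P", 0.9), ("As", 0.5), ("Sb", 0.0)]:
    print(f"{dopant}: D/D* = {diffusivity_ratio(f_i, s_i=10.0):.2f}")
# B and P come out strongly enhanced (OED), As partially, Sb retarded (ORD).
```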
So let's go on to slide 28. Again, this is the general formulation for how we write the diffusivity in terms of f sub i and f sub v. How does this actually relate to our previous description? Now that I've made the model more atomistic, how does it relate back to the more macroscopic description? Well, this is the macroscopic way we wrote it, right? We said dA effective is just dA zero, some number, times e to the minus E over kT. Well, I can rewrite the expression on top under inert conditions-- inert meaning ci over ci star is 1, and cv over cv star is 1, so there's no oxidation or nitridation. Then dA is just the sum of two terms: the diffusivity of the paired species AI, plus the diffusivity of the paired species AV. And in fact, I can break this down even further, where I write the diffusivity of the paired AI as a little d, the diffusivity of the AI pair, times the concentration of these AI pairs divided by cA, plus an analogous expression for vacancies. So what this is saying is, I can sum up the diffusivity. And if I just look at this one term, it's the diffusivity of the pair-- say, the dopant paired with the interstitial-- times the ratio of its concentration to the total concentration of the arsenic, or whatever the dopant is. So if I make this go up, then this will go up. So at an atomistic level, we can decompose this effective diffusivity into these two different mechanisms.

So let's go on to slide 29. There's another way people can look at these atomistic scale reactions and diffusion, and that's through a chemical reaction. If you're familiar with chemistry, this makes sense to you. If you're not, you think, why am I going to all this effort when I can express it mathematically a little bit differently? But what we say is, there's a reaction where a substitutional dopant atom A interacts with an interstitial silicon atom to form a mobile species. So the simple reaction says that A, substitutional, plus I, goes to AI. Now, the important thing to realize about this equation is that on the left-hand side, A is immobile by itself. We're assuming that any time the atom, the arsenic or the boron, is substitutional on the lattice at high temperature, it can't move. The only way it can become mobilized is when it's in pair form, with an A right next to an I, or next to a vacancy if you want to do it in terms of vacancies. So the substitutional species themselves are immobile; only as a pair are they mobile. And this is what the model is saying. So this can actually be used to explain a lot of different action at a distance phenomena that people have observed.

For example, if you have an interstitial supersaturation-- so you pump up I a lot-- this is going to drive more dopant atoms into the mobile state by shifting the equation to the right, and enhance the dopant diffusivity. And that's OED. So for example, if you're a chemist and I tell you I flood the silicon with a lot of I-- well, if we add more I, this reaction gets pushed to the right. So I form more pairs. If I have more pairs, the arsenic, or whatever the dopant is, can diffuse more readily, and you get OED. Now, the interesting thing is that this equation also predicts some effects even under inert conditions. It indicates that the interior of the silicon will be injected with this mobile AI species as it diffuses in. And that's going to drive the equation to the left, back this way, releasing interstitials in the interior when the dopant regains its substitutional position. So interestingly, this chemical reaction tells us that silicon interstitials can be pumped into the interior of the sample by dopant diffusion. So for example, let's say I have arsenic. It diffuses in by pairs, it finds its way and settles into a certain position where it's now substitutional. And then what happens? These interstitials are released. So all of a sudden-- the arsenic has carried in with it all these excess interstitials, there's a bunch of interstitials released, and they could then impact the diffusion of a dopant nearby.

And in fact, that's exactly what happens in this profile of phosphorus on page 30. For years, it was observed that high concentration phosphorus had a kink and a tail. There was a kinked region here. And then at lower concentrations, it had a long tail that went into the substrate. And people thought of all kinds of mechanisms to explain this.
Well, one mechanism, in terms of this chemical equilibrium formulation that we're talking about today, is to say, all right, the phosphorus diffuses with interstitials, so they diffuse as a pair. And then, eventually, the phosphorus finds a substitutional site. It stays there. And then it releases the interstitials into the bulk. So all of a sudden, I have a flux here, a flux of pairs, and then I get a flux of extra interstitials. This, in turn, enhances the tail diffusivity of the phosphorus profile. So the reason the tail is enhanced, people believe, is because the phosphorus itself is pumping in a whole bunch of interstitials and then releasing them somewhere down at this depth. So that's a way to use this to explain, qualitatively, the tail region of phosphorus diffusion.

On slide 31, there's a famous effect that people call emitter push. Again, people thought of lots of reasons to explain this. What is emitter push? Well, when you're making a bipolar transistor, you have a high concentration emitter-- it could be phosphorus or arsenic. And then you have a more lightly doped base, and then a more lightly doped collector. What people observed is, wherever the emitter, the phosphorus, was being diffused, right underneath it the boron base was pushed out, almost like the emitter was pushing the boron to diffuse faster. Far away from the emitter, over here, it didn't diffuse quite so much. But underneath it, it diffused quite a bit. And again, there were a lot of models people came up with to explain this. Well, again, people could say that this high concentration of phosphorus is pumping in interstitials, because P and I are diffusing together. So the interstitials themselves are being carried in towards the base. We get a high supersaturation of these interstitials. They get released when the phosphorus stops diffusing, and they're released into the boron base. These excess interstitials then enhance the boron diffusivity and cause it to push in, because we know that boron has an f sub i of 1. So this is an interesting effect. We said we could inject interstitials by oxidation and enhance boron. We can also inject interstitials by other processes, just by the nearby presence of a high concentration diffusion of another species like phosphorus.

So this is called full coupling. Full coupling means that the diffusion of the dopants is affected by the interstitials, and likewise, the diffusion of the interstitials is affected by the presence of dopants. And where those interstitials end up in your wafer, they could impact some other process. So that's what they mean by full coupling in the SUPREM model.

So if we go on to slide 32, again, this is that same equation I showed before. And if you're a chemist and you assume chemical equilibrium between the dopants A and the defects I, you can write a law of mass action that says the concentration of the product, c of AI, is just a constant, at any given temperature, times the product of the reactants, cA times cI. So you can actually write this as a chemical equation. And then the neat thing is, given this relationship-- that it's just proportional to the product of cA times cI-- I can apply Fick's first law to this mobile species. So if I differentiate cAI with respect to x-- this is Fick's first law; it says the flux of the mobile pair is just some constant times the concentration gradient of the mobile pair-- well, I can now apply the chain rule to this.
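Written out in the lecture's notation, this step looks like the following (a sketch, with K(T) the mass-action constant):

$$
A + I \rightleftharpoons AI, \qquad C_{AI} = K(T)\,C_A\,C_I
$$

$$
F_{AI} \;=\; -\,d_{AI}\,\frac{\partial C_{AI}}{\partial x}
\;=\; -\,d_{AI}\,K(T)\left( C_I\,\frac{\partial C_A}{\partial x} \;+\; C_A\,\frac{\partial C_I}{\partial x} \right)
$$

The first term in the parentheses is the familiar Fickian piece, driven by the dopant gradient; the second is the hidden piece, driven by a gradient in the interstitial concentration.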
And applying the chain rule, I can see that the flux of AI depends on some diffusivity times a term that goes like the gradient of the arsenic, or the dopant, plus a term that goes like the gradient in the interstitials. So what this is saying is that gradients in the defects, as well as gradients in the dopants, cause dopant diffusion. When we talked about Fick's law earlier, we said, well, we have a gradient of arsenic, and that gradient of arsenic is what drives the diffusion. Well, not only the gradient of arsenic-- if you create, by some other mechanism, a gradient of interstitials, that will also drive diffusion. So there's a hidden term. And that's because we're doing a pair model. We're saying that arsenic, or boron, or whatever, has to diffuse by means of pairing, and so it gives you this extra term. And there are a lot of interesting ways you can accidentally create gradients of interstitials, not even realizing it, and then end up driving dopant diffusion at a faster rate.

So on page 33-- I'm not going to derive this, but this is the actual overall flux equation that SUPREM uses, and it's discussed in your text. It's fairly complicated. It says that the total flux of boron interstitial pairs is the product of all these terms. Well, we can look at the terms and make sense out of them. This dBI star-- the star indicates it's the inert, low concentration diffusivity-- that's driven by the dopant gradient, the usual good old Fickian diffusion. And then all the rest are correction factors. In these large parentheses, we see the high concentration effects due to the Fermi level. That's what this beta term is due to. Then the interstitial supersaturation, ci over ci star-- again, if I inject interstitials, we know that will cause an enhancement in diffusivity. And this term over here at the very end, partial partial x of the ln of p over ni, that's the electric field effect. So all of these are lumped together in order to calculate the total flux.

OK. So let me just summarize. We talked about Fermi level effects. They apply-- they're important-- when the carrier concentration is greater than ni. The diffusivity is dependent upon the local carrier concentration, which is either determined by the diffusing species itself, or by the background doping, whichever is higher. And we tend to get diffusivities of this form, and this formulation leads to very box-like profiles. We talked about segregation at the oxide interface. It determines the boundary conditions. Boron segregates into the oxide, so it's depleted from the silicon. Arsenic and phosphorus pile up: they stay out of the oxide and go into the silicon. There's also interfacial dopant pileup, which is different from segregation at the oxide-silicon interface, and this results in dramatic dose loss, particularly for shallow source drains. We know OED, ORD, and the growth and shrinkage of stacking faults can be explained by this atomic scale diffusion picture. We said that boron and phosphorus diffuse primarily with interstitials, and antimony primarily by vacancies. This has been determined by a lot of experiments. And if we use this chemical equation formulation for dopant-defect interactions, it can explain a lot of action at a distance effects-- OED, the phosphorus tail, emitter push-- things that people had a hard time explaining for many years. So that's about all I have to say today. I know it's a pretty dense lecture. But we'll finish up Chapter 7 on Thursday.
And Thursday, remember, your homework is due. |
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 23_Growth_and_Processing_of_Strained_SiSiGe_and_Stress_Effects_on_Devices.txt | JUDY HOYT: And then on Thursday, we'll have these. I also want to mention, towards the end of lecture today, we'll have the course evaluations for you to fill out. OK, before we start the formal lecture for today, for those of you who were here last time, I wanted to go over something that we had talked about. We didn't have time to go through how you would do these calculations from the last handout, which was handout 36. On slide 27, there were a couple of questions that we were asking. And I thought we'd just spend about five minutes before we start today's formal lecture going through this, at least on the board, so people have an idea of how to make these back-of-the-envelope calculations.

So what you're being asked to do here is, you're looking at a silicon MOSFET-- these are the source drain extensions, these shallow extensions that have a certain depth XJ. These are the deep source and drain that have been silicided. Remember, the last lecture was all about siliciding. So this blue region is supposed to represent a silicided region. And then there's a metal contact-- this is highly schematized, but imagine this little thing coming down here as your aluminum metal or whatever, coming down and touching the silicide-- and the metal contact is spaced a certain distance, by an insulator, from this point here where the silicide starts. And so what we're asked to do is to calculate separately three resistors that are in this problem. There's this resistor here, which represents the resistance of the source drain extension as the current flows from the channel into the extension, which is underneath the spacer. And then there's a resistance associated with the sheet resistance of the silicide, because the current has to get from this point here to the point where the metal contact is, up here. And then there's the interfacial resistance, or the contact resistance, associated with this metal contact. So those are the three. And in a few minutes here, we can just go through that. And I thought it might be useful for us to calculate those.

So the first one-- let's figure out the contact resistance. You're given the specific contact resistivity, which we talked about in the last lecture, as 2 and 1/2 times 10 to the minus 7. And that has units of ohm centimeters squared. It's a resistance times an area. And if I want to get the contact resistance R of that contact, I just have to take that rho sub c value and divide it by the area of the contact. And we are told that the area of the contact, in number one there on the slide, is 0.25 by 1 square microns. So that's just going to be 2.5 times 10 to the minus 7, over 0.25-- so the numbers are easy-- times 10 to the minus 8 square centimeters, because 1 micron is 10 to the minus 4 centimeters, and you have to square that. So you just calculate this out. And you end up with 100 ohms for the contact resistance. So that's the resistance between the metal and this contact right here, flowing through this face, given a contact area of that size. So that'd be kind of a state of the art contact. So you add about 100 ohms associated with that.

You also want to calculate the resistor shown here by this little resistor-- estimate what this resistor is for the current flowing from this point to the point where it just gets into the contact.
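Condensing that board arithmetic into one line:

$$
R_c \;=\; \frac{\rho_c}{A} \;=\; \frac{2.5\times 10^{-7}\ \Omega\,\mathrm{cm}^2}{0.25\times 10^{-8}\ \mathrm{cm}^2} \;=\; 100\ \Omega
$$

where the denominator is the 0.25 micron by 1 micron contact window converted to square centimeters.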
Well, that's going to be dominated by the sheet resistance of the silicide, of this blue region. So if we know the sheet resistance, we can calculate the resistance-- for any sheet, you can always calculate the resistance. So to get the sheet resistance of the silicide-- well, you're given its resistivity. It's 15 times 10 to the minus 6 ohm centimeters. So the sheet resistance is just the resistivity rho divided by the thickness of the sheet. So that's 15 times 10 to the minus 6 ohm centimeters-- remember, that's a resistivity now, that's not a specific contact resistance. And you are told the thickness of that sheet, roughly. And I'm going to ignore the doped region down here, because this is a metal, it's a silicide. It has a much lower resistance than the silicon. So I'm going to ignore that. We're just going to calculate the resistance of that metal sheet. And that thickness is 4 times 10 to the minus 6 centimeters. So the centimeters go out, and you end up with a certain sheet resistance, about 3.75 ohms per square.

Remember, that's a sheet resistance, so it's ohms per square. A square, for current flowing through a sheet-- we talked about this-- means current going in this face and coming out that face, where the length and the width are equal. That's the definition of the sheet resistance. So it's about 4 ohms per square, something like that. Then, once you know the sheet resistance-- the nice thing is, the sheet resistance tells you the resistance going through a square-- you just use geometry considerations: how many squares does the current go through in flowing from this point here to the contact? Well, it's half a micron long, and it's a micron into the page. So that's half a square, basically, because it looks something like this: it's a half micron long, but it goes a micron into the board, into the page. So this is 0.5, and this is 1. So by definition, that's half a square, because if it were a full square, there'd be two of these right next to each other. So then you can just get the resistance of the silicide: 3.8 ohms per square times 1/2 a square. And you end up with about 1.9 ohms-- very small, comparatively speaking, maybe a factor of 50 less than the contact resistance. OK, so that seems reasonable: it's a metal, so it has a very low sheet resistance, and the resistance is small.

So on the final one, number three, now we're going to calculate the source drain extension resistance. What's the resistance associated with this very thin, shallow junction depth XJ, this little resistor right here? The current comes out of the channel and has to flow in that source drain extension before it hits the silicide. Well, again, that's a very similar situation, where we just need to calculate a sheet resistance. And from the sheet resistance, we figure out the number of squares from the geometry, and we get the resistance. So for the source drain extension, we're told the resistivity rho is 5 times 10 to the minus 4 ohm centimeters. And we know the number of squares. And we have a thickness of that region. So I can get the sheet resistance. Again, it's just the resistivity divided by the thickness of the region. So that's 5 times 10 to the minus 4 ohm centimeters divided by-- I think it was an easy number-- 500 angstroms. So that's 5 times 10 to the minus 6 centimeters. And that goes out. And we end up with about 100 ohms per square.
That's typical for a shallow junction, shallow being 500 angstroms or less. You dope it very heavily-- we said this is doped 2 times 10 to the 20th with arsenic or whatever-- and it's about 100 ohms per square as the sheet resistance. Now, again, looking at this, how many squares is this? Well, the distance from this point in the channel to the point where it contacts the silicide-- we're told that spacer width is 0.1 microns. So that's 0.1, and into the board is 1 again. So that's a tenth of a square. So then we can just get the resistance of the source drain extension: it's just the sheet resistance times the number of squares. So that's 100 ohms per square times 0.1 squares. And we end up with 10 ohms.

So that resistor-- I mean, this is a very crude estimate-- this resistor here is about 10 ohms. This little sheet here represents about a couple of ohms. The big resistance is what? The big resistance is the 100 ohms, the contact resistance, because we have a finite specific contact resistivity, 2 and 1/2 times 10 to the minus 7 ohm centimeters squared, and we're trying to make contact in a very small area. So what's the solution? Well, I can make the area bigger, make the devices bigger on the chip-- but then I get fewer devices. That's not good. Or we have to scale this number. According to the ITRS, remember, this number is going to have to go down. In 2003, the ITRS wants this number to be about 2 times 10 to the minus 7. So it's already supposed to be lower than that. And over the next 10 years, it's supposed to be lowered by a factor of 3 or 4. So this number of 100 ohms gets lower, or it certainly does not increase, as we scale the devices. But this just gives you a rough idea of what the big contributions are if you do it right. Again, if you didn't get enough doping in this shallow source drain extension, this 10 ohm number could go up, depending on your junction depth, how you scale that, and your doping. But these are simple calculations you can do on the back of the envelope. We didn't get to go through that last time we presented it, but this gives you an idea of how those numbers work out. Any questions on that, or on the contact? We only spent one lecture on contacts. There's a little bit of this in chapter 11, although I don't think there's a specific example of a MOSFET like this.
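All three estimates fit in a few lines of Python. This just replays the numbers from the board, with the geometry as given on the slide (a 0.25 by 1 micron contact, half a square of silicide, a tenth of a square of extension):

```python
# Series resistance pieces for the schematic MOSFET source, per the lecture.

# 1. Contact resistance: specific contact resistivity / contact area.
rho_c = 2.5e-7                  # ohm * cm^2
area = 0.25e-4 * 1.0e-4         # 0.25 um x 1 um, in cm^2
r_contact = rho_c / area        # -> 100 ohms

# 2. Silicide sheet: resistivity / thickness, times the number of squares.
rho_silicide = 15e-6            # ohm * cm
t_silicide = 4e-6               # cm, about 400 angstroms
r_silicide = (rho_silicide / t_silicide) * 0.5   # half a square -> ~1.9 ohms

# 3. Source drain extension: same recipe, 500 angstroms thick, 0.1 squares.
rho_sde = 5e-4                  # ohm * cm
t_sde = 5e-6                    # cm, 500 angstroms
r_sde = (rho_sde / t_sde) * 0.1                  # -> 10 ohms

print(f"contact:   {r_contact:6.1f} ohms")
print(f"silicide:  {r_silicide:6.2f} ohms")
print(f"extension: {r_sde:6.1f} ohms")
```

The contact dominates, which is the lecture's point: the fix is either a bigger contact area (bad for density) or a lower specific contact resistivity.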
OK. So let's go on then to today's actual material, which is going to be a bit of a divergence from the other lectures. This material is not in the text. This is all more up-to-date research material that has come out and has not yet been put into a textbook. But you have the handouts. What we've talked about so far for semiconductors is just using pure silicon-- pure silicon wafers and pure silicon materials-- to make CMOS, to make MOSFETs. It turns out, if we use epitaxial growth-- and we did spend a little time talking about epitaxial growth of silicon by CVD or some similar technique-- you can mix small amounts of germanium into the silicon and make an alloy called silicon germanium. Or you can mix in even smaller amounts of carbon, maybe 1% or so, and make an alloy of silicon and carbon. And people are doing this. Some of these alloys are used today in modern production devices in manufacturing.

So I just wanted to go through some of the important material and electronic properties of these newer semiconductors, how the layers are processed, to a certain extent, and what some of the issues are that people are considering in research and development. So here on slide 2, what I'm showing are some applications of these silicon germanium alloys and related materials like silicon germanium carbon. The first bullet: for a number of years now, maybe the last 10 years, silicon germanium has been used in production as the base layer in a heterojunction bipolar transistor, the base layer being quite thin. We haven't talked much about bipolars in this class; this class is mostly about making CMOS devices. But it's a thin layer, maybe 200 to 500 angstroms thick. This is used in high frequency analog applications, generally telecommunications applications, base stations, and things like that. And there are a number of companies, maybe half a dozen or more, that manufacture and sell these devices. So for the purpose of growing thin films, this is something that has been done in manufacturing for some time.

The second bullet refers to a little more advanced type of material, and this is just coming into production now. These are layers of relaxed silicon germanium or strained silicon germanium-- and I'll talk about what that means in a little while-- that are used to induce strain in silicon. So these layers are used to take a silicon channel and either stretch it or compress it mechanically. And by stretching or compressing the lattice in the channel, you can enhance the mobility, as it turns out. Enhancing the mobility means you get a higher current drive for a given voltage, so the devices will switch faster. So for enhancing CMOS, some of these materials are now in production. Intel has a new Pentium in production using this type of technique, these materials.

Sometimes, silicon germanium can be used as a boron diffusion barrier. We spent a lot of time in this class talking about the fact that boron is a fast diffuser in silicon. That's one of the problems with making shallow junctions. Well, it turns out, in silicon germanium, for reasons that are not completely understood, the more germanium you add, the slower the boron diffusion. So if you put 20% germanium in the lattice, the boron diffusivity drops by about a factor of 10. That's a huge factor. So people may want to use silicon germanium just as a means of slowing down boron diffusion.

It's also used in micromachining. In MEMS technology, it happens to be an excellent etch stop. So there are etches-- primarily wet chemical etches, maybe a few dry-- that will etch through silicon and stop on silicon germanium, and vice versa. So it can be used as a sort of etch stop layer. Some people are advocating using polycrystalline silicon germanium as the gate material-- I don't think it's in production, but I know of it. We've talked in the last few lectures about metal gates. Before they get to metal gates, people were advocating polycrystalline silicon germanium instead of polycrystalline silicon. I don't know, again, if that is really making it into production. Silicon germanium is also used as an infrared photodetector material, for integrating photodetectors with silicon CMOS.
Silicon germanium carbon is in production now-- maybe not silicon carbon, but silicon germanium carbon is in production, with very small amounts of carbon, maybe half a percent, as a way of suppressing transient enhanced diffusion in heterojunction bipolar transistors. I think somebody's doing a report on that; one of the people signed up for that. So that's kind of a hot topic, the use of carbon to completely suck up all the interstitials. And that completely changes the TED effect. So these materials have quite a bit of application.

So on slide 3, this looks like a long outline. But after I introduce the materials, I'm just going to touch a little bit on the MOSFETs and the applications of strained silicon and strained silicon germanium to CMOS. I'll talk a little bit about how the materials are grown, about how the dopants diffuse in these materials, and a little bit about some newer material where we're putting them on insulator. Strained silicon on insulator is a very hot topic of research these days.

Well, if we go to slide 4, this is more of a motivational slide. The question we're trying to ask is, if you have a CMOS circuit, maybe an inverter or something, how quickly can it switch from the zero state to the one state, for example? The delay in doing that is something called the gate delay. It's usually measured in picoseconds. And that gate delay is something we want to minimize. We want to minimize the switching delay of each gate in the logic sequence so that the overall circuit can do computation faster. So the gate delay, generally-- if you've taken classes like 6.012 or other undergraduate classes on CMOS-- as you scale the gate length, and that's why we're making the gate length shorter and shorter, the gate delay goes down. This is actually some data showing how it does indeed scale. At 0.1 micron, the gate delay is down in the 5 picosecond range.

Well, it turns out that the gate delay is inversely proportional to the current. So if you can pump more current through each device-- as one device switches, it has to charge up the device after it, and the more current going through, the more quickly it will charge its neighboring device. So the name of the game in getting higher speed in logic devices, at least for digital, is, at a given voltage, to get more drain current out of the device. And so you say, OK, at a given voltage, how can I get more drain current? Well, this is what the drain current looks like. There are several physical variables it depends on. It depends on the mobility of the carriers-- directly proportional. It's inversely proportional to the oxide thickness. So if I make the oxide thinner, I get a higher capacitance, and the current goes up. That's why we're talking about gate dielectric scaling all the way down to below 10 angstroms. So that's where the gate dielectric scaling comes from. And if I make the channel shorter, the current also goes up. So how do people do conventional scaling to increase ID, or speed, for logic? They make the gate length shorter. They reduce the gate oxide thickness. The real question in the last five or 10 years has been, what about the mobility? Is that a variable we can tweak? We're pretty much done with scaling the gate oxide, because we can't get much thinner-- it's only a few atoms thick.
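For reference, the simple long-channel square-law expression behind this argument-- a textbook approximation, not the lecture's exact device model-- makes the knobs explicit:

$$
I_{D,\mathrm{sat}} \;\approx\; \frac{W}{2L}\,\mu_{\mathrm{eff}}\,C_{ox}\,(V_{GS}-V_T)^2,
\qquad C_{ox} = \frac{\varepsilon_{ox}}{t_{ox}},
\qquad \tau_{\mathrm{gate}} \;\propto\; \frac{C\,V_{DD}}{I_D}
$$

At fixed voltages, raising the mobility, thinning the oxide, or shortening the channel all raise the drive current and therefore shrink the gate delay.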
Making the transistor shorter is getting harder and harder, lithographically and for electrostatic reasons. So we don't have too many more variables in this equation. Mobility is the last thing to touch. And that's where the silicon germanium alloys come in.

Here on page 5, what I'm showing is some classic data from the literature on the electron mobility-- and that number, mu N, is not a single number in a MOSFET. This is a plot of the effective electron mobility in a channel-- and again, as this number goes up, the current goes up at a given voltage-- as a function of the vertical field in the device. The vertical field is induced by the gate. So as I put a gate bias on the device, I get higher inversion charge, and in fact, the mobility actually goes down. So you notice it's a function of the vertical effective field in the silicon, and modern MOSFETs are operating somewhere in this range right now. I can just tell you, based on the gate oxide thickness and the typical voltages, 1 to 3 volts, that MOSFETs are operating at, if you calculate out the effective field in the channel, it's about a megavolt per centimeter. So in modern MOSFETs these days, the electron mobility is right about on this curve, somewhere between 200 and 300, but closer to 200. A number of years ago, when we used lighter doping-- say you were using much lighter doping, like 3 times 10 to the 17th-- and the oxides were thicker, the vertical field was much lower, maybe half of what it is today. And the mobility was twice as high, about 400. So this is the problem with mobility. As we march down the ITRS, we scale that gate oxide thickness, so we get higher capacitance. That's good. But we're also increasing the doping in the channel, and we're increasing the vertical effective field. So we're marching down this curve. And the mobility is going lower and lower and lower with each generation. That's bad. That takes away from our current drive, to a certain extent. So that's a problem.

So what people have decided to do-- OK, it turns out, I can change this mobility curve. In fact, I can push the whole curve up quite significantly if I put stress on the silicon. And I'll say a little bit more in the next few slides about how that works. So people have tried a number of different ways to stress the devices, or to put stress on the silicon. You have to stress it in a certain way. And this schematic kind of gives you an idea of a good way of inducing stress. For electrons, this looks just like a regular nMOSFET. But you notice I've highlighted this channel region in yellow. And it's called strained silicon. Classically, strained silicon was generated by taking a wafer of silicon germanium, or a very thick layer of silicon germanium, say with 20% or 30% germanium. Silicon germanium has a larger lattice parameter than silicon-- pure germanium is about 4% bigger. So this 30% layer is going to be a little bit larger in its lattice parameter. So if it's relaxed, it's bigger. And if the silicon layer is thin, when you grow it epitaxially, the silicon lattice will actually stretch in the X and Y directions to match that of the substrate, as long as it's thin enough. It's energetically favorable to stretch the bonds instead of breaking them. So the silicon is stretched. The lattice parameter in the X-Y plane is about 1.5% bigger than it would be in normal silicon. That's a huge amount-- to take a crystal lattice and stretch it by 1.5%, that's a lot.
As a result, when you stretch it in X and Y, it actually compresses a little bit in Z. So in fact, we call this particular configuration uniform biaxial tensile stress. It's biaxial and it's tensile-- it's being pulled in X and Y. So this is sort of the classic way that people have introduced strain and made MOSFETs, and they've looked at the mobility as a function of the amount of strain.

There are other methods for straining, though. Most of what I'll talk about in this lecture is this epitaxial growth method. It's the easiest one to get a handle on the physics with. But there are other methods. For example, when you make a MOSFET, you don't just make an individual MOSFET on a chip. All around the MOSFET, we do something called shallow trench isolation, which we talked about in this course. We dig out a trench all the way around the MOSFET, and we fill it up with oxide or some insulator. Well, IBM showed back in '97 that depending on how you fill that up, you can potentially induce stress, either compressive or tensile, depending on what material you put in, because you're digging a moat around the device and you're putting some material in it. So you can actually use the STI, so-called, to induce stress. There are other processes during fabrication that Intel is using now, where they actually dig out the source and drains. They cut them out with etching, and they fill them back up with silicon germanium, which compresses the channel, which helps the hole mobility. So people are doing things like that. You can also stress the device from the top: people put a layer of high stress nitride on top of the device and try to induce stress by applying films that are high stress. You can also bend the chip. There's a small startup company in the southeast of the United States that is taking chips, thinning them, and actually trying to bend them mechanically to induce stress. It seems a little crazy, but it may be possible. But epitaxial growth is in some ways the most well controlled, and the physics is a little bit easier to get a handle on.

So let me just mention, here on slide 7, the IBM work from 1997 on STI. We learned in this course that you fill up the shallow trench with-- well, first, you oxidize the liner, and then you fill it up with LTO or some low temperature oxide. What they did was they grew an oxide liner for isolation, and then they filled the trench with polysilicon, which they thought would induce a lot of compressive stress, pushing in all around the edges of the device. And then they looked at the hole mobility. And indeed, what they found, particularly on SOI layers, which are very thin: when they made the device width very small, say 1 micron compared to 10 microns-- making the device width small brings the shallow trench in closer and closer to the center of the channel, so it can have a bigger effect as it's pushing in-- they saw that the hole mobility went up quite a bit. On a 10-micron device, maybe the mobility was here, about 100 or 110. It went up by about 34%, to about 140, just by doing this. Now, maybe it wasn't such a great idea to put that much stress in and use polysilicon. It wasn't necessarily a practical thing. But it just shows that depending on how you do your processes, how you do your STI, it can have a pretty big effect, particularly on the hole mobility.
So this was an indicator early on that people need to pay close attention to exactly what thermal budget you use, what materials you use in STI, every layer you put on that device. Now that the devices are small-- the width of devices today is less than a micron-- edge effects can have a huge impact on the stress in the center of the channel. And that can end up impacting the mobility.

So I've referred to the fact that we have this strain, and I guess I should have introduced slide 8 a little bit earlier. But this just gives you a picture, a ball and stick picture, of what we're talking about. The ball and stick model up in the upper left is meant to represent cubic silicon. It has a lattice parameter of about 5.4 angstroms. It's the diamond cubic structure, but nevertheless, it has cubic symmetry to it: in X, Y and Z, in those three dimensions, the lattice parameter is the same number. Cubic germanium is larger-- same cubic structure, but larger by about 4%. If you make an alloy anywhere in between, and you have it thick enough that it relaxes, you can pretty much use a linear interpolation. It's not quite perfect, but you can use a linear interpolation. So at 50% germanium, the lattice mismatch is about 2%.

So there are two different types of silicon germanium that people tend to talk about. One is when you start with a silicon substrate and you grow a very thin layer, say a few hundred angstroms, of silicon germanium on the silicon. What happens is, you're taking this lattice here-- this silicon germanium wants to be cubic, and it wants to be larger in equilibrium than silicon-- and you're squeezing it. It turns out, again, if the layer is thin enough, it's energetically favorable for the in-plane lattice parameter of the material you're growing to match that of the substrate. So the silicon germanium is kind of squeezed in X and Y, and it pushes up in Z. So this layer is what we call strained silicon germanium on relaxed silicon, and it's under biaxial compression: it's being compressed in the X-Y plane. Now, if we make it thick enough, eventually there's too much stored energy in those stretched bonds, and the bonds will start breaking, you get dislocations, and it'll relax back to its cubic structure. But for thin layers, you can achieve this sort of pseudomorphic structure.

You can also do kind of the opposite, the upside-down experiment. You take a wafer of silicon germanium that's very thick, that's already relaxed, that has a larger lattice parameter, and you grow silicon on it epitaxially. Again, if the silicon is thin enough, it will now stretch in X and Y to match the lattice parameter of the substrate, and it'll compress in Z. So we describe this silicon epi layer as being under biaxial tension. And again, if you grow it too thick, the bonds will no longer stretch-- they'll break, you'll get dislocations, and instead of being stretched, it'll go back to its normal lattice parameter. At that point, it'd be relaxed. So strained silicon on relaxed silicon germanium is the type of material you'll see a lot in FETs, and you'll see the strained silicon germanium on silicon in bipolar transistors and some p-channel MOSFETs.

So what are some of the important properties of this strain? I already mentioned that for MOSFETs, when you start stretching or compressing the silicon lattice parameter, you can improve the mobility.
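The linear interpolation of the relaxed lattice parameter mentioned above is easy to write down. A small sketch, using room-temperature lattice constants, with the linearity itself an approximation, as the lecture notes:

```python
# Vegard-style linear interpolation for the relaxed Si(1-x)Ge(x)
# lattice parameter. The linearity is approximate, per the lecture.
A_SI, A_GE = 5.431, 5.658      # lattice constants in angstroms

def mismatch(x_ge):
    """Fractional lattice mismatch of relaxed Si(1-x)Ge(x) relative to Si."""
    a_alloy = A_SI + x_ge * (A_GE - A_SI)
    return (a_alloy - A_SI) / A_SI

for x in (0.2, 0.3, 0.5, 1.0):
    print(f"x = {x:.1f}: mismatch ~ {mismatch(x):.2%}")
# x = 0.5 gives ~2% and x = 1.0 gives ~4%, the round numbers in the lecture.
```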
But there are other parameters that change when you introduce strain, or when you put silicon germanium in. Probably one of the most important parameters you need to know about is the band gap. So there is an energy band gap difference between silicon and silicon germanium. If this represents the band gap of relaxed silicon-- so EC is the conduction band, EV is the valence band-- this is the band gap of strained silicon germanium. You see it's much smaller. And most of the difference occurs in the valence band: the valence band energy is higher. So the silicon germanium has a smaller band gap. How much smaller? Well, you can look on this curve. This is the difference between the band gaps of the two as a function of germanium content. So here, if I have strained silicon germanium, you should use the upper curve, not the unstrained one. So at 20%, the band gap shrinks by about 150 millivolts, roughly. And you can read that right off the curve. Here at 50%, it's another number, maybe 350. So you have an idea of how it moves. All these data points are data from the literature. People extracted the band gap from various devices: diodes, bipolar transistors, photodiodes. And the solid line is some theory that was published a long time ago.

So now, the other thing-- that's if it's strained. If it's unstrained, if you grow the silicon germanium so thick that it relaxes back to this cubic structure, then what happens to the band gap? Well, in fact, the band gap difference is a lot smaller. When it's unstrained, the band gap difference, say at 20%, is only about 100 millivolts. So a big fraction of this band gap offset, this band gap difference, is due to the strain. It's due to the lattice parameter change. A certain fraction is due to the fact that we're just adding germanium. So you need to know whether the material is unstrained, or strained, or somewhere in between, in order to figure out what the band structure looks like.

And in fact, on slide 10-- I don't have to go through this in great detail; if you haven't had energy band theory, it's a little tricky-- slide 10 gives you an idea of how the strain affects the energy bands and the lineup. If we look at the case of strained silicon germanium on silicon-- remember, we said the silicon germanium is under biaxial compression, being compressed in the X and Y plane-- the band lineup looks like this. As I mentioned, most of the band offset is in the valence band. So the holes will tend to be confined in the silicon germanium. There's very little band offset in the conduction band: the lowest conduction band is about aligned in strained silicon germanium versus cubic silicon. So there's not much opportunity for confining electrons. So this material, it turns out, is ideal for a heterojunction bipolar transistor, for an NPN. This material has been in production for about 10 years. People use it in high speed RF devices. So that's a more well-characterized and well-known structure. The germanium fraction in a typical HBT you would buy off the shelf right now is probably between 10% and 20%.

On the other hand, if you grow the opposite structure-- so you have a relaxed silicon germanium substrate and you grow a thin layer of silicon, which we said is under biaxial tension-- the band lineup is quite different.
In fact, you do get a sizable difference in the conduction band position. And in fact, the conduction band energy is lower in the strained silicon. So the electrons will tend to go into the strained silicon layer. So if you make a FET, and you make this your channel, the electrons want to go into the strained silicon, and the holes want to be on the silicon germanium side. So this is what's called a type II band alignment. So you can make lots of interesting devices just by taking silicon and straining it on relaxed silicon germanium. The other thing that happens here that's important-- and we'll come back to it when we talk about the mobility enhancement-- is that the conduction band is actually split into two levels. Originally, there was only a single energy level. And that splitting is what's really responsible, to a large extent, for the mobility improvement.

If you really wanted to get into the nitty gritty-- I won't go through it, but here on page 11, I've made a reference to a famous paper by Chris Van de Walle and Richard Martin, published in the mid 1980s. Van de Walle and Martin theoretically did a pretty good job of calculating what the energy band structure looks like for strained silicon germanium on cubic silicon, which we're showing over here, and also for strained silicon germanium on cubic germanium. They have both of these. And I won't go through it in any great detail. But if you're interested in the solid state physics and you end up working in this area, this is a good paper for you to look at. Again, the band gap would be the distance between this point here in the conduction band and this point here in the valence band. So they actually calculated how the valence and conduction bands move as you add strain and germanium to the system.

On slide 12, I'm also giving you a reference: not only does the band gap change, but people have actually made measurements of the energy band offsets when you have strained silicon on relaxed silicon germanium, say for a MOSFET. This is taken out of Jeff Welser's thesis from 1994. And what you can see here-- this is the basic structure. You have an oxide, you have a thin layer of strained silicon, and you have these energy offsets here. The conduction band energy is a little lower, and the valence band energy is slightly different, in the strained silicon. And these energy band discontinuities, these deltas, were actually measured-- delta EV here is plotted as a function of the germanium fraction in the substrate. The bullets are the measurements, and the lines are the calculations from Van de Walle and Martin. And indeed, you get pretty good agreement. So we have a rough idea now of what these energy band structures look like when I add strain or when I add silicon germanium to the substrate. So that's important for figuring out how all these devices work.

I mentioned that when you squeeze silicon, or stretch it in a biaxial sense, you can improve the mobility. And the question is, well, why is that? I don't want to go into detail on the solid state physics. But it turns out, if you put silicon in biaxial tension, you break the cubic symmetry of the lattice. Ordinarily, the X, Y, Z directions are all completely symmetric when silicon is not strained. When it's strained, they become asymmetric: the X and Y lattice parameter is larger than the Z. And what happens through that breaking of the physical symmetry is that the six energy ellipsoids in constant energy space, which are all at the same energy, actually split up when you introduce strain. Two of them-- these two ellipsoids here where the electrons can be, these two red ones, the so-called perpendicular valleys labeled Delta 2-- end up being at a lower energy than the other four. So we get this strain-induced splitting. In bulk silicon, all six of these valleys, which represent states the electrons can occupy, are at the same energy level. But when we break the symmetry, by group theory, we break that up into a two-fold degenerate level, which is lower in energy, and a four-fold degenerate level. So by breaking this up, what happens is that now most of the electrons want to be in these red valleys. And the scattering between the red and the green is suppressed, because it costs you energy. So instead of scattering between all six, you can only scatter very effectively between these two red ones. And so the scattering time goes up. And the mobility is directly proportional to the scattering time. The in-plane effective mass also goes down, so the mobility goes up for that reason as well. So just by breaking the symmetry, we can change the occupation of the energy bands in silicon, and we can make the mobility higher for electrons. In the valence band-- I'm not going to go through it in great detail, but it's quite a bit more complicated-- it turns out that with strain, we also split the degeneracy, and we also suppress interband scattering. It is a lot more tricky. But it is possible to improve the hole mobility, the pMOS mobility, by introducing strain as well.
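The punchline of that argument can be summarized with the simple Drude-style relation-- a standard approximation, not something derived in the lecture:

$$
\mu = \frac{q\,\tau}{m^{*}}
$$

The strain splitting raises the scattering time tau, because scattering from the lowered two-fold valleys into the raised four-fold valleys now costs the splitting energy, and it lowers the in-plane effective mass m star, because the occupied perpendicular valleys present their light transverse mass in the transport plane. Both effects push the electron mobility up.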
And in fact, here on slide 14, I'll just walk through some of the older data. This was from 1992, by Welser. These were the first strained MOSFETs that were fabricated, where he made a thin layer of strained silicon on relaxed silicon germanium and then just implanted it and made a silicon MOSFET with a source and drain. It was an nMOSFET. And this he called the surface channel MOSFET, because the bands align so that the electrons want to be at the surface. So the channel is at the surface. This one he called the buried channel MOSFET, because if you look at the energy band diagram here, there's a little bit of silicon germanium on top, and that confines the electrons to be a little bit buried below the surface. It's like a MOSFET, but where the channel is slightly buried.

And on slide 15, he actually measured some of those mobilities. These were the first mobilities measured. If you look at this solid line, this is the mobility measured in a MOSFET that had no strain. And for a surface channel MOSFET, the mobility that he measured was about 80% higher-- about a factor of 1.8 improvement in the surface channel. So right off the bat, as a function of vertical field, you see we have a parameter: by changing the strain, you can raise the mobility by about 80%. If you do the buried channel, you can get even higher mobilities at low gate bias, but the mobility seems to go down at high gate bias, for various reasons. So this was kind of the first evidence that strain makes a big difference.

These are the actual structures that one can grow, shown on slide 16. Again, to make an nMOSFET, you can grow relaxed silicon germanium with a thin layer of strained silicon. And this becomes a high electron mobility channel.
These are the actual structures that one can grow, shown on slide 16. Again, to make an nMOSFET, you can grow relaxed silicon germanium with a thin layer of strained silicon, and this becomes a high electron mobility channel. A slightly different structure is needed for the PMOS, very similar but with slight variations, so you can make CMOS devices using this type of technique. And I'll skip through the energy band diagrams for now. Slide 17 shows some metrology. The question you may be thinking is, well, how do you really know? All right, your mobility went up. But how do you really know that the layers are strained? Especially when they're so thin, how do you measure strain in a layer that's only 100 angstroms thick? You really can't use X-ray diffraction. X-ray diffraction on a 100 angstrom layer would be really tough because the layer is very thin. So how do you measure that change in the lattice parameter? Well, it turns out there's a technique called Raman scattering, which is an optical interaction. There are certain vibrational modes of a lattice that will scatter photons. And the Raman effect can be used to measure the strain, or the change in the lattice parameter, in even a very, very thin layer. So here's an example of a Raman spectrum from a MOSFET that was made. You're measuring the intensity of the scattered light as a function of the frequency in inverse centimeters. And you see a large peak here associated with the relaxed silicon germanium. And there's a small peak here, which is the thin strained silicon layer. Now, if that layer were unstrained bulk silicon, you'd get a line at 521 inverse centimeters; that's what you see if you just take a wafer of bulk silicon and subject it to Raman scattering. So it turns out the shift of the strained silicon peak, the shift between that peak and 521, is a measure of the amount of strain. So this is very nice: you can oxidize a wafer, strip the oxide off the strained silicon, and see if the strain is still there. In fact, you can figure out the relaxed silicon germanium content as well using this. Slide 18 just shows you some measurements people have made of the Raman peak shift, the frequency shift in inverse centimeters, as a function of the germanium fraction of the substrate for strained silicon layers. You can see there's a theoretical line here, and the bullets represent measured points. So using Raman, if you have the right kind of Raman setup, you can actually get a peak shift. For 20% germanium here, we expect a Raman peak shift of about 6 to 7 inverse centimeters. So that gives you an idea of how you can measure the strain in the material. Slide 19 shows you a material that's been oxidized. You might be thinking, OK, fine. You can grow these epitaxial layers. They'll be strained. What happens if you oxidize the silicon? Is it going to somehow relax the strain because of that oxidation process and the breaking of bonds during oxidation? Well, it turns out it doesn't. In fact, in the as-grown Raman spectrum here, for the epitaxial layer as it was first grown, you do get a strained silicon peak right here near 510. If you oxidize it at 850 degrees C for 10 minutes, you still get that strained silicon peak, although the peak height is smaller because, of course, the layer is thinner. When you oxidize, you consume silicon. But the peak position remains at 510. It remains at the same point. And this has been verified very many times. And, of course, in the MOSFET, you do see the enhanced mobility. So there doesn't seem to be any significant strain relaxation, at least when you oxidize at these kinds of temperatures. If you were to go up to 1,000 degrees or more, then you could start running into trouble when the germanium starts diffusing around.
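As a rough sanity check on those Raman numbers, here is a hedged sketch that converts a germanium fraction into a predicted peak shift. The 4.2% Si/Ge lattice mismatch and the biaxial strain-shift coefficient of roughly -800 inverse centimeters per unit strain are assumed, literature-ish values, not numbers given in the lecture.

```python
# Convert substrate Ge fraction -> strained-Si Raman shift (sketch only)
def strained_si_raman_shift(x_ge, b=-800.0):
    strain = 0.042 * x_ge   # biaxial tensile strain of Si on relaxed Si(1-x)Ge(x)
    return b * strain       # shift of the bulk 521 cm^-1 line, in cm^-1

print(f"{strained_si_raman_shift(0.20):.1f} cm^-1")
# ~ -6.7 cm^-1 at 20% Ge, consistent with the 6 to 7 cm^-1 quoted above
```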
So on slide 20, I'm just reviewing for you, giving you an idea of some of the early measurements people made of how the mobility was enhanced. The left side is for nMOSFETs. You see this black dashed line is for control devices. And these other lines on top are for 10% germanium in the substrate, 20%, and 30%. You see, as you add more germanium to the substrate, the whole electron mobility curve moves up, and you get a nice enhancement. For holes, for pMOSFETs, there was some enhancement measured, although the enhancement is diminished. You see the enhancement factor is actually going down as you go to higher vertical fields here. And higher vertical fields are where a lot of modern MOSFETs operate. So that's a bit of an issue for pMOSFETs using this technique. Those are research devices. Slide 21 shows some more recent publications in a manufacturing-style process. Although it's not in manufacturing, the process was a full manufacturing-style process that IBM did. It had all the attributes of a real manufactured device. It had shallow trench isolation. It had well implants, halo implants, raised source/drains by epi, and a thin gate oxide. So this is a real device as opposed to the university style. And again, you can see the mobility for electrons goes up with strain, about 100% improvement. That is a factor of 2. For holes, there is an improvement at low gate bias. But here at high effective field, or high gate bias, the improvement is pretty small, only a few percent. That just shows the state of the art as it existed a couple of years ago. And if you're working at relatively low gate bias, and you just want to get an idea of how the mobility enhancement varies with the amount of strain, you can look at this curve on page 22. This is the electron mobility enhancement factor. So it's the peak mobility in a strained silicon FET divided by the peak mobility in an unstrained FET. And you see here with no strain, it's 1. And then it goes up to about a factor of 1.8, maybe 2, something like that. A lot of data has been published in the last 10 or 12 years, and it all seems to agree about this saturating behavior. On the right is the same thing for hole mobility. The data does show an increase, especially as you get above about 30% germanium; you can get larger enhancement factors. And this is the theoretical curve. The data seems to be a little bit below the theoretical curve but shows roughly the kind of strain dependence you would expect. And so what this is telling you, if you're saturating out, is that for electrons you don't really need to put more than about 20% germanium in the substrate. That's enough strain. The bands are split enough; if you go beyond that, you're inducing more strain and more splitting, but the mobility itself is not actually improving because something else is limiting the mobility. But it gives you a rough idea of the amount of strain you need to induce in some of these devices. Page 23 is a summary of some of the mobility enhancements now for pMOSFETs. It's the mobility enhancement ratio. Again, this is the number you would like to be large, as a function of vertical field. If you operate the device at very low vertical fields, you can get large ratios. But most modern devices operate over here. So the enhancement factor is only about 1.1 or 1.2. It's approaching 1. That's a problem, unless you go to very high germanium contents. There's some data here with a 40% substrate.
So that means the strained silicon layer is very thin, and it's strained to a 40% substrate. Here, for the mobility enhancement factor, there's not a full curve available. The highest field people went to was about 0.6. But they did get pretty large enhancement factors. So it looks like if you're going to use biaxial tensile strain in the silicon, you need to go to very high levels of strain if you want this to work for holes. So that's one big disadvantage for the pMOSFET in this technology. Oh, just another graph on page 24 to give you an idea. We talked about mobility. How about the things that you care about, like electron velocity or the transconductance gm that a lot of circuit designers are worried about? Well, indeed, that does improve, even as you scale devices. This is data back from 1998 now. It's fairly old. But it still gives you an idea: if you look at the gm here for an unstrained device shown in red, as you scale, it goes up like this. For a strained nMOSFET shown in blue, it goes up as well. And this enhancement factor, the ratio between these two lines, stays about constant in this plot all the way down to the shortest devices that were made, about 90 to 100 nanometers. Since that time, shorter devices have been made. So the nice thing is that the enhancement doesn't seem to go away when you go to short channels. That's very important for the technology. So let me talk a little bit now, I want to spend a few minutes, on how the material is grown and some of the issues related to how dopants diffuse in this material. Slide 26 just shows some typical gas sources that are used. A very common way to grow this is by low pressure CVD, or sometimes ultra-high vacuum CVD, depending on the particular lab that's doing the work. For epitaxial silicon germanium, a very common silicon source is silane. Sometimes people use dichlorosilane; we talked about dichloro earlier in the class. For germanium, people use germane. What is germane? It's the same as silane, but you take out the silicon and you put in a germanium atom. So it's GeH4. Hydrogen is often used as a carrier gas in low pressure CVD, although not in UHV-CVD. And these are common dopants: diborane and arsine are used for p- and n-type doping, sometimes phosphine. For silicon germanium in a selective process, if you want to deposit silicon germanium on a wafer that has oxide on it, with little windows patterned in the oxide so that you can reach down to the silicon, that's called selective deposition. For that, you need to add hydrogen chloride, HCl. And this is commonly done for both the bipolar and CMOS applications. For silicon germanium carbon, you do the same type of thing, but you add one more gas, which is called methylsilane. The methyl group, the CH3, brings in the carbon, which is used in those devices. We talked a little bit about strain. And here on slide 27, I'm showing a little bit of what that means. This ball-and-stick diagram in the lower right is the pictorial picture in our heads. When the epi layer is thin, we said the silicon germanium will compress in X and Y; it will match the lattice parameter of the thick substrate. That's only when it's thin enough that it's energetically favorable to bend the bonds. In that case, we say it's fully strained. If you build up more and more layers, you build up more and more stress, and at some point, it becomes energetically favorable to break bonds.
So at this interface, bonds are actually broken. And that's represented here by these misfit dislocations that are going into the plane of the board. These misfit dislocations end up relieving the strain in the thick layer. They have a certain spacing. And if you have enough of them and their spacing is tight enough, you will end up with a fully relaxed layer. The silicon germanium will have the lattice parameter it has at equilibrium. This is a cross-section view in the upper right. And this transmission electron micrograph is a plan view. So you're looking down through an epitaxial layer that's been grown to be too thick; here is a bird's eye view. This is a misfit dislocation segment along the interface. And there's a dislocation arm that threads to the surface. These dark lines have extra electron diffraction contrast, and they correspond to misfit dislocations at this interface. They run along the easy slip planes in silicon and silicon germanium, along the 110 directions, so typically parallel and perpendicular to the flat. So this film was grown to be a little bit too thick. And it started to form dislocations. It's no longer fully strained. Now, you may be wondering, oh, well, how thick is too thick? Well, there is no one number, unfortunately. For a long time, people thought there was a quote unquote critical thickness, so that at a particular germanium fraction, when you go above that thickness, bingo, you get dislocations. It doesn't quite happen that way. In fact, there is a whole metastable regime. And the critical thickness depends on the temperature at which you're doing the growth because it's a kinetically limited process. And this is a good paper, Derek Houghton's paper from 1991, which describes the kinetic model and explains the fact that there's no one critical thickness at a given germanium fraction. What he plotted here was a summary of the critical thickness above which dislocations form, as a function of the germanium fraction of the silicon germanium layer, for different growth temperatures. So if you're growing at 900 degrees, you're very hot. And it's very easy to nucleate dislocations, and they travel very rapidly. You tend then to get the equilibrium critical thickness; that's this line here. So if you're growing at high temperature, or you're annealing at high temperature, and you're at 20%, the equilibrium critical thickness for silicon germanium is about 110 to 120 angstroms, something like that. So it says that if you're growing at high temperature, and you grow above about 120 angstroms of 20% silicon germanium, it'll start to form dislocations at that point. However, if you are growing that same germanium fraction at 550, in a very low temperature epitaxial growth, you are kinetically suppressing the formation of dislocations. In fact, the critical thickness is like this curve here. It's about 700 angstroms. So there's a metastable regime: if you grow the layers at low enough temperature, you can keep them strained, and they won't form dislocations until a much greater thickness. The problem with that is, you can grow them quite thick at low temperature without dislocations, but if you then go to process them at a higher temperature to make a device, dislocations can pop in if you're above the equilibrium critical thickness. So this region is called metastable because you can grow the layers at low temperatures without dislocations, but if you were to anneal them at a higher temperature, dislocations would form, and the layer would start to relax. Generally, this region up here in the upper right, very thick films, is almost always relaxed. Those films will always have dislocations. I refer you to this 1991 paper for a good reference on understanding the formation of misfits in strained silicon germanium layers.
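To keep the numbers straight, here is a very crude sketch of the equilibrium curve, anchored only to the roughly 120 angstroms at 20% germanium quoted above. The simple 1/x scaling is my assumption: it captures only that thinner layers tolerate more mismatch. Real curves need the full Matthews-Blakeslee equilibrium theory or Houghton's kinetic model.

```python
# Equilibrium critical thickness, crudely scaled from one quoted data point
def h_critical_equilibrium(x_ge, x0=0.20, h0=120.0):
    return h0 * x0 / x_ge   # angstroms; ignores the logarithmic correction

for x in (0.10, 0.20, 0.30):
    print(f"x = {x:.2f}: h_c ~ {h_critical_equilibrium(x):.0f} angstroms")
```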
Slide 29: it turns out, in the early 1990s, there were some breakthroughs. You might say, well, growing strained silicon germanium, that's very interesting, people want to do that. But there were actually some breakthroughs on how to grow relaxed silicon germanium. It seems like it should be very easy to make the silicon germanium relax. You just grow it thick, right? We just saw that in the paper from Derek Houghton. Well, if I'm at 30%, I just grow it to be 3 or 4 microns, up here. It'll definitely be above the critical thickness. It will be relaxed. It will have its own lattice parameter. The problem is, if you just grow it right off the bat like that, a box layer, you end up with a huge number of threading dislocations. If you don't grade the profile, you end up with more than 10 to the 9th, maybe 10 to the 10th, of these threading arms per square centimeter. For every misfit, you have two arms that go up to the surface. And you have so many of them that the layers aren't very useful for very many devices because there are too many defects. But in the early '90s, there were breakthroughs on how to grow these layers: how to grow them at high enough temperature so the threading arms could run to the edge of the wafer, and how to grade the germanium content. So you start growing at the silicon lattice parameter, and then you gradually add germanium as you go up, grading it over many microns. It turns out, when you do that, you can relax the silicon germanium, but you can reduce the number of arms that thread to the surface. The arms run out to the edge of the wafer, where you don't care about them; you're not going to make any devices at the edge of the wafer. So the threading dislocation density can go down from 10 to the 9th to less than 10 to the 5th if you do the grading right. The nice thing about this is that all of a sudden, in the early '90s, people had a way to take a silicon substrate, grow a relaxed silicon germanium layer on it, and have it be reasonably high quality, high enough quality that you could make working devices on it. There are still a lot of threading dislocations, but not enough to mess up the devices too badly. So that was the breakthrough that enabled people to make relaxed silicon germanium and then strained silicon MOSFETs on top of it. What are some of the materials issues if you're growing this relaxed silicon germanium? Well, I already mentioned the threading dislocation density. It actually decreases if you go to higher growth temperatures because the dislocations can move faster. The problem, though, is you also have a buried strain field that creates a crosshatch. In fact, if you look at a plan-view optical micrograph of a relaxed silicon germanium layer, it looks like a plaid shirt. That buried strain field causes the surface to undulate ever so slightly, hills and valleys maybe 100 angstroms high, very smooth. And the period of these undulations, the spacing, is quite large. It can be microns. So these are smooth undulations. They don't really mess with the electron mobility. The electrons and holes don't care about them. It makes the wafers look ugly, however.
And it makes lithography difficult because optical machines don't like looking at plaid. They like things that look smooth. So it is a practical problem. People have found ways around it. You can CMP the layer and remove all the undulations. So it's not necessarily a big problem, but it is an issue. If you grow the silicon germanium too hot, it turns out it doesn't even grow as a planar film. It grows as little clumps, little mountains or islands. It doesn't wet the surface properly. And this type of growth mode is generally considered not very useful because it's not really a smooth film. So you cannot grow silicon germanium at too high a temperature for a given germanium fraction. What do these relaxed silicon germanium graded buffer layers look like? I said, well, they have very few defects that thread to the surface. That's indeed true. These are some cross-section TEM micrographs of relaxed silicon germanium. See, at 10%, this is the silicon substrate. This is the graded layer. There are lots of dislocations in the graded layer. That's what you want. You want dislocations down in the graded layer because you want to relax the lattice parameter. You want it to expand to its equilibrium parameter. But what you see is that as you get up to the surface, if the graded layer is thick enough, into the box layer, there are hardly any dislocations visible in cross-section TEM. And up here is where you're going to build your device. So that's what you care about. But if you compare a 10% to a 20% to a 30% germanium layer, you see a higher and higher density of these buried dislocations as you increase the germanium fraction, of course, because you have to relieve more lattice mismatch. So the layers get pretty ugly. Even though the device is up on top, there are still a lot of integration issues. You can imagine trying to make CMOS circuits on a substrate that looks like this. All these buried dislocations are going to bother the device engineers because, after all, we do have CMOS wells. We do have deep junctions in some of these structures. So this silicon germanium can affect our shallow trench isolation fabrication, formation of silicides, and the way dopants diffuse. So it certainly is an issue. In fact, there's a whole field of research going on right now, which is the study of the diffusion of dopants in silicon germanium. I'm showing here on slide 32 some data for silicon germanium alloys with 20% germanium in them. And what we're looking at is the diffusivity of different dopants. So this is the effective equilibrium diffusivity, the diffusion coefficient in centimeters squared per second, as a function of inverse temperature, the usual kind of Arrhenius plot. If you look at boron, the dashed line here is for diffusion in silicon, and the solid is for diffusion in silicon germanium. In 20% silicon germanium, the boron diffusivity is reduced. I said before a factor of 10; it looks more like a factor of 6 or so, depending on the temperature. But the boron diffusivity goes down in silicon germanium, which is nice because boron is a fast diffuser in silicon. But if you look at arsenic, this dashed line is arsenic in silicon, and arsenic in silicon germanium is the solid line here with the green arrow. The arsenic in silicon germanium is diffusing a lot faster. In fact, arsenic in 20% silicon germanium is diffusing at about the same rate as boron in silicon germanium.
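Since all of these curves live on Arrhenius axes, a one-line helper is enough to evaluate any of them at a given temperature, given a prefactor D0 and activation energy Ea. The D0 and Ea below are textbook-ish values for boron in silicon, used purely as an illustration; they are not read off this particular figure.

```python
import math

k = 8.617e-5  # Boltzmann constant, eV/K

def diffusivity(T_celsius, D0=1.0, Ea=3.5):
    """D = D0 * exp(-Ea / kT), cm^2/s for D0 in cm^2/s and Ea in eV."""
    T = T_celsius + 273.15
    return D0 * math.exp(-Ea / (k * T))

print(f"{diffusivity(1000):.1e} cm^2/s")  # boron in Si at 1000 C, ~1e-14
```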
So we've taken the boron diffusion coefficient, and it goes down. Arsenic, which is usually very slow in silicon, now goes up. Now they diffuse about equally, which is kind of interesting. You might make some interesting device structures when you have symmetric diffusion coefficients between the n- and p-type dopants. Unfortunately, what it means is that in silicon germanium MOSFETs, we are actually much more worried about the n-type dopant diffusion, because that's what's going fast, and less worried about the p-type dopant diffusion, because the boron is going slower. Phosphorus doesn't help the situation. Phosphorus is also enhanced, maybe about a factor of 2 or 3, depending upon the temperature. And this line here gives you a rough idea of the diffusivity of germanium itself, at a very low germanium fraction, as a function of temperature. It's diffusing slower than the dopants, at least at this germanium fraction. So I mentioned to you that arsenic moves faster in silicon germanium. This is actually a big problem if you're trying to make a strained silicon nMOSFET with silicon germanium in the substrate. Let's say you're trying to make shallow source/drains. So you implant arsenic at 50 keV at 3 times 10 to the 15th, maybe a typical source/drain implant, and you anneal it at 1,000 degrees for various times. The dashed lines here are for a silicon substrate, and the solid lines are for silicon germanium. So if you just look at, say, this red dashed line, for the silicon substrate with this 20-second anneal at 1,000 degrees, you have a junction depth of about 500 angstroms. That's perfect. For the silicon germanium substrate, which is the solid line, with the same 20-second anneal, your junction depth is more than twice that. It's like 1,200 angstroms. So the arsenic in the silicon germanium is moving much more rapidly. You see this big, big shift. And that's a problem. If you use very short anneal times, you can minimize that difference. But 1,000 degrees for one second is not really enough to repair all the damage and activate all the dopant. So there's a very interesting issue here: how do we optimize the annealing time for the n-type dopant so we get good activation in the silicon germanium, but we don't introduce too much diffusion? Otherwise, we can't control the device fabrication.
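As a quick consistency check on those junction depths, here is a hedged back-of-envelope sketch that backs an effective diffusivity out of each number using the usual x_j ~ 2*sqrt(Dt) scaling. Real high-dose profiles are concentration-dependent, so treat this as scaling only.

```python
import math

t = 20.0  # s; the ~20-second, 1000 C anneal quoted above

for label, xj_cm in [("Si substrate,   x_j ~ 500 A", 500e-8),
                     ("SiGe substrate, x_j ~ 1200 A", 1200e-8)]:
    D_eff = (xj_cm / 2.0) ** 2 / t   # effective D backed out of x_j
    print(f"{label}: effective D ~ {D_eff:.1e} cm^2/s")

# The ratio (1200/500)^2 ~ 5.8 is the ~6x arsenic enhancement in SiGe.
```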
Here's another issue, on slide 34. I'm not trying to make silicon germanium look difficult. But it is a new material, and a lot of things change. Slide 34 shows you that, indeed, the germanium itself can diffuse. We're talking about taking a thick silicon germanium layer and putting a thin layer, say 100 angstroms, of strained silicon on it. That's my channel. OK, that sounds fine. We can grow that epitaxially. But now I need to implant the source/drains and anneal at 1,000 degrees. What's going to happen to the germanium in the silicon germanium? Well, it's going to diffuse up into the silicon. Once we get germanium into the channel, that's a no no. The electrons don't like germanium in the channel. The electrons can be scattered by the germanium. The mobility goes way down, and your device is ruined. You lose a lot of your strain too. So you have to keep the germanium out of the channel. Well, it doesn't sound too bad. But just look at some of the diffusion coefficients. These different lines are measured data for germanium self-diffusion in silicon germanium alloys of different germanium content. So this curve is for pure silicon; this would be silicon self-diffusion. This is 10% germanium, then 20%, 30%, 40%, and 50%. So just look at a given temperature, say at 950 degrees, and look at what's happening to the diffusion coefficient. In pure silicon, or at low germanium content, germanium diffuses very slowly, here at around 10 to the minus 17th centimeters squared per second. But if you go up to a 40% alloy, the diffusivity goes up by two orders of magnitude. So what this is saying is, basically, you get what looks like an exponential increase in the germanium diffusion coefficient as you increase the amount of germanium in the alloy. So it's a big issue. If you want to do strained silicon on 20% silicon germanium, you have a certain diffusion coefficient. If you do it on 30%, all of a sudden your diffusion coefficient is a factor of 5 or so higher. That's a big issue because you don't want the germanium diffusing into the channel. Now, this data is not exactly interdiffusion data. That whole problem still needs to be worked out. But interdiffusion is definitely observed for something like 1,000 degrees, 10 seconds, at 20%. So that's a no no. And that's going to be a big issue in how we're going to activate source/drains in those kinds of structures. That's an interesting research area. You might be saying, well, what is the mechanism of diffusion of germanium in silicon anyway? Actually, it's not really known. On page 35, there's an example of people just trying to get a handle on this. This was published about a year and a half ago. In this particular experiment, they did a process where they had strained silicon on relaxed silicon germanium. And they did an anneal in a rapid thermal annealer where they put in ammonia. So they're doing rapid thermal nitridation. What do we know about nitridation? Well, nitridation, we believe, injects vacancies, right? That's what we use it for. So in the samples that were nitrided, the germanium diffused right through the strained silicon all the way to the surface. The SIMS profile in blue here shows the germanium went right through that 100 angstrom layer. It was gone. The red line is for the sample that was annealed in an inert ambient. It diffused a little bit during this rapid thermal anneal. But ignore the surface peak, because that's sort of a SIMS artifact. At least it didn't go all the way through. Now, this is a very preliminary result, and it has to be repeated. But it sort of indicates that processes that inject vacancies probably should be avoided if you're trying to control the diffusion of germanium. And so there's a lot of interest in doing processes to inject interstitials or inject vacancies. All the same kinds of tests that have been done in silicon can now be done in strained silicon on silicon germanium, to see how the germanium diffuses or even how the dopants diffuse.
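To put a rough scale on the germanium motion, here is a hedged sketch using diffusivities of the order just read off the self-diffusion plot. Real Si/SiGe interdiffusion also depends on strain, composition gradients, and point-defect injection, so treat these as order-of-magnitude numbers only.

```python
import math

t = 10.0  # s, a typical rapid-thermal-anneal time scale

for label, D in [("low Ge content, ~950 C", 1e-17),
                 ("~40% Ge alloy,  ~950 C", 1e-15)]:
    L_angstrom = math.sqrt(D * t) * 1e8   # diffusion length in angstroms
    print(f"{label}: sqrt(D*t) ~ {L_angstrom:.0f} angstroms")

# Even ~10 angstroms of intermixing matters for a ~100 angstrom channel.
```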
I want to say a little bit now about these MOSFETs on insulator, because I've talked about how the silicon germanium is becoming a problem. How can we get around some of these problems by doing strained silicon on insulator? Well, on page 37, I'm showing an example of a potential thing you might be thinking of. Let me go back a few slides and just look at slide 31, this ugly looking thing with all those dislocations and that thick silicon germanium substrate. I need them to relax the strain so I can grow a thin layer of strained silicon on top. But I don't really need them once the strain is relaxed. So the idea was, what if you could take this layer? The top portion looks pretty good. If you could take this whole layer and bond it to another wafer and then get rid of, etch away, all the dislocated material, then you'd have relaxed silicon germanium on insulator, for example. And all this dislocated material would have been etched off, or maybe removed by a smart-cut-like process. I think some of you are studying smart cut; we talked about it a little bit when we talked about SOI formation. So, in fact, that's what this cartoon on slide 37 shows, where you might have a thin layer of relaxed silicon germanium that has been generated by growing a thick layer and bonding it on. And then you could grow strained silicon on top and make your nMOS and your pMOS that way. So this is one idea that people have been pursuing to get around some of those dislocation issues. In fact, on slide 38, if you're interested, I've listed a number of different references here on how people are trying to form relaxed silicon germanium on insulator. It turns out there are a lot of different processes, not just the bonding process. There is a SIMOX-like process where people grow silicon germanium on silicon epitaxially, and then they ion implant the oxygen. That's exactly how SIMOX SOI is made. For SOI, they ion implant oxygen into pure silicon; here, they ion implant the oxygen into a silicon germanium layer. So that's the SIMOX-like process; Toshiba has talked about that. There are some problems, though. For germanium contents above about 15%, it doesn't work very well. There's another very interesting technique, which also came out of Toshiba, called germanium condensation. They actually start with an SOI wafer, a wafer you buy from a commercial vendor, silicon on insulator. They epitaxially grow a thin layer of silicon germanium with a relatively low germanium content. And then they oxidize the whole thing. It turns out, in the oxidation, the germanium is snowplowed forward into the layer, and the layer relaxes. So they end up with a higher germanium content after oxidation, and the originally strained silicon germanium layer ends up relaxing during the oxidation process at very high temperatures. So that seems to be interesting. There's a very similar technique called melt solidification from another group. And the bond and etch back is what I mentioned before. You grow the relaxed silicon germanium, bond it to an oxidized wafer, and etch it back or smart cut it off. Slide 39 shows that same process. I won't go into great detail. But you can imagine, by bonding and selective etching: you grow your graded silicon germanium relaxed layer, you grow your relaxed cap, and then you flip it over and bond it to an oxidized handle wafer, just like you would in bond-and-etch-back SOI. And then you etch everything back. And after you've done CMP, you end up with just relaxed silicon germanium on insulator. So that's a technique that's being pursued to form relaxed silicon germanium on insulator. Then you can grow strained silicon afterwards. You can also do bonding and hydrogen-induced delamination, or smart cut. Just as we studied smart cut for silicon: here, we start with a relaxed silicon germanium epi layer. We ion implant with hydrogen. And at the peak of the hydrogen, bubbles form. So after we've done the bonding, we heat it up, and those bubbles end up cleaving it off. You get the delamination process. This is exactly analogous to what happens in SOI fab.
And now you have silicon germanium on insulator. Of course, it has to be polished and smoothed. But it is a way to form that material. I'll skip through this slide at this point; basically, it shows that when you make that material and you make a strained silicon MOSFET on it, you get the same mobility enhancement that people have seen in the past. Again, I'll skip that in the interest of time. What I want to end with is another approach people thought of: why don't you just transfer the strained silicon layer itself? Once you grow the relaxed silicon germanium, you can grow strained silicon on top of it, bond that to oxide, and then remove everything except the strained silicon. And if the strain remains, then you have strained silicon directly on the insulator. You remove the silicon germanium completely. So you don't have to worry about silicon germanium interdiffusion and diffusion of dopants in silicon germanium and all of that. So this is an ultra-thin strained silicon directly on insulator sort of concept. This is just a cartoon. It's actually been demonstrated fairly recently here at MIT, on page 44. In fact, this is the process that was used. A very complicated stack of epi layers was grown. But the important thing is the top of the stack was strained silicon. And that whole stack was bonded to an oxidized silicon wafer. And then everything was removed by selective etching. So when you're done, all you have left is strained silicon directly on the oxide. And you use etch-stop layers, so you can etch back the entire wafer and just leave strained silicon behind. And in fact, on page 45, what you see is that the strained silicon is bonded to the oxide. It's low defect density material. And if you do Raman scattering, the strain remains. Even after you remove the silicon germanium, even annealing at high temperatures, up to 1,000 degrees, the strain doesn't go away. The bond is very strong. So now you have strained silicon directly on insulator. It looks just like SOI; you can process it just like SOI. But the strain is already built in, and you don't have any germanium that you need to worry about. So I think that's going to be a pretty interesting area for the future. Just yesterday, I read in the paper that it had been announced that a company called Soitec is now manufacturing 300-millimeter strained silicon directly on insulator, SSDOI. And once it becomes available at 300 millimeters, a lot of companies will be able to use it for their production. So that's about all I have to say on that. I know I need to leave you time. I'm glad I see the surveys here. I hope you haven't been waiting outside. But please take 10 minutes or so and go ahead and fill out the course evaluation survey. |
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 4_Wafer_Cleaning_and_Gettering_cont.txt | JUDY HOYT: Let's start with our actual lecture. So we are in the middle of talking about chapter 4. Again, I hope you're keeping up with the reading in the text. That's part of the reason I don't assign homework continuously. Between now and Thursday, you don't have any homework, so you have time to be reading the text. Chapter 4 discusses wafer cleaning and gettering. Last time, we talked about some important aspects. We talked about point defects in silicon, neutral and charged vacancies and interstitials. Other important properties of silicon wafers we discussed were the carbon content and the oxygen content. We're going to talk more today about oxygen in Czochralski silicon. And we got a little bit of a taste last time of some qualitative information on clean rooms; we call that the level 1 introduction to controlling contamination on wafers. In these notes, we're going to talk about the two other major techniques for controlling contamination. The first is to clean the wafers throughout the process: every few steps, we do a wafer cleaning, and we'll talk about how that works. And then finally, the principles and modeling of a process called impurity gettering. So that's what this handout is about. Let's go on to slide number 2 of this handout. It shows schematically this level 2 approach of reducing contamination, which is to do wafer cleaning. And there are two alternate paths here, indicated on the left and the right, that we might take a silicon wafer through. The path on the left is primarily what we'll be talking about in this course because it pertains to the front end of the process. In the front end, by definition, you have not done any metallization, have not put any metal contacts on the wafer. That's what we mean by front end, and that's what this course is about. So typically, throughout a process, we talked about how there might be 16 or 20 mask levels. Each time you do a masking step and you do photolithography, you have this organic gloop, this photoresist, on your wafer. It needs to be removed. So before you can go on to the next process step, you need to remove that organic material, that resist. And we do that in what's called a resist strip. That's if necessary, that is, if there's photoresist on the wafer. And then after you remove this gross, macroscopic organic contamination, trace organics and trace metals are removed in a step called the RCA clean. It's named for where it was invented, the old RCA Sarnoff labs in New Jersey. That's where that name comes from. And then you execute the next process step. So typically, the RCA clean would precede any high-temperature processing step that we'll talk about in this course, such as a diffusion step in a furnace or an oxidation step. You would have to do this RCA cleaning first. By contrast, if you're in the back end, something similar happens. You do have to strip the photoresist. And you do have to do some kind of post-metal cleaning, usually before another step. But the solutions that you would use are much less caustic than the acidic and basic solutions we're going to talk about here, because you don't want to etch all the metal wiring off your wafer. So the difference with back-end cleaning is that it's generally more gentle. And it's not designed to remove metals, because metals are what you have in the back end.
You have metal wiring. So let's go on to slide number 3. This I've taken out of your text. It talks about this front-end cleaning process and the steps involved. If we start at the top, these top few steps illustrate photoresist stripping. Again, every time you do a masking level, you typically have this organic film, somewhere on the order of several microns of this material, that needs to be stripped. And it's fairly gross. It's quite thick. It's macroscopic material. So it can be stripped in a number of ways. A very common way of doing it is to use a very hot mixture of sulfuric acid and hydrogen peroxide. That's a very, very strong oxidizer. It really attacks organics, especially photoresist. Sometimes in fabs today, prior to doing the actual wet chemistry to remove the photoresist, people do a dry process. We'll talk later in this course about dry etching. But there are ways to create reactive species in the gas phase that will also attack the resist and remove it. An oxygen plasma is often what people use to remove photoresist, sometimes prior to doing the wet chemical strip. So after you do this sulfuric-peroxide step, there's typically a water rinse, which is implicit here. And then you dip the wafers in a mixture of water and HF. HF is an etchant that etches oxide, that is, all forms of silicon dioxide. The strong oxidizer in the previous step forms a thin chemical oxide on the silicon surface; this HF dip strips it off. And then you do a final DI rinse. So at that point, you have all the gross, macroscopic photoresist film removed from your wafer. You still have trace impurities, trace organics, and trace metals that got onto the wafer during the process step that you just did prior to this. So in order to remove those trace impurities, we do what's called the RCA clean. There are a lot of different variations on the RCA clean, but the one I'm outlining here is the most classic RCA clean. It typically has two major steps, one called silicon clean 1, SC-1, and one called silicon clean 2, SC-2. The major purpose of SC-1 is to strip organics and to remove particles; it's good at etching off particles, and it does attack a certain amount of metals as well. SC-1 itself consists of a 5:1:1 solution, where 5:1:1 represents the concentration ratio of water to hydrogen peroxide to ammonium hydroxide. And it's usually heated, to about 80 degrees C, for about 10 minutes. After that SC-1 treatment, the wafer goes through a deionized water rinse. Again, we talked last time about how you have to use very pure chemicals; this is a water rinse at room temperature. Then very often, not always, but very often, right in this step here between SC-1 and SC-2, there's an HF dip. Again, the purpose of that is to remove the chemical oxide that you grew in the previous step, to create a fresh silicon surface, followed by a DI rinse. It wasn't indicated in your text, so I put a box in for it. But it is very common to have an HF dip in the middle there. And then you go to SC-2. SC-1 was an alkaline solution, a basic solution. SC-2 is low pH. It's acidic. And it consists of 6:1:1, or something in that ratio, of water to peroxide to hydrochloric acid. So it's a very strong acid. And the key for SC-2, its main role, is to remove any remaining trace metals that were not effectively removed by SC-1. And there are some metals that don't come off very well in SC-1.
And also alkali ions; we'll talk about those. The mobile ions, sodium, potassium, and calcium, are very effectively removed in SC-2 and not so effectively removed in SC-1. And then there's a DI water rinse. Sometimes there's a final HF dip right at the end, where you remove the chemical oxide, so you create a pure silicon surface. Sometimes there's not, especially if the wafer is going directly into an oxidation furnace; people leave the surface passivated with a little bit of chemical oxide on it anyway. So again, this is very much a rough outline of what people do. There are a lot of variations on this theme.
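Pulling those steps together, here is a compact restatement of the sequence as quoted above. The ratios, temperatures, and times are the nominal numbers from the lecture and vary from lab to lab, and each wet step is followed by a deionized water rinse.

```python
# Nominal front-end clean sequence, as described in the lecture
front_end_clean = [
    ("Piranha strip", "H2SO4 : H2O2",         "hot",            "bulk photoresist"),
    ("HF dip",        "H2O : HF",             "room temp",      "strip chemical oxide"),
    ("SC-1",          "5:1:1 H2O:H2O2:NH4OH", "~80 C, ~10 min", "organics, particles"),
    ("HF dip",        "H2O : HF",             "room temp",      "strip chemical oxide"),
    ("SC-2",          "6:1:1 H2O:H2O2:HCl",   "~80 C",          "trace metals, alkalis"),
]
for step, chemistry, condition, removes in front_end_clean:
    print(f"{step:14s} {chemistry:24s} {condition:16s} removes: {removes}")
```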
Let's move on to slide number 4 and talk a little bit more about the chemistry. There are more details about this in your text, so if you want to read about that, it will be helpful. An important thing we need to do in these solutions is to remove metal atoms from the surface of the wafer. Again, trace metals are very bad. We'll talk more about this; we saw last time that they can reduce the lifetime. They can cause all kinds of problems for logic and memory devices. The way that we remove them from the surface is to convert them into ions so that they're no longer neutral. These ions end up being soluble in the cleaning solution. So they dissolve into the wet solution, and they stay in the solution instead of on your wafer. To do this, we need to oxidize the metal atoms. That is, we're going to use the term oxidation as referring to removing electrons from the metal atoms. The top equation here shows the oxidation process of silicon: taking silicon atoms at the surface, combining them with an oxidizer such as water, and forming SiO2, plus evolving some hydrogen ions and some electrons (Si + 2H2O -> SiO2 + 4H+ + 4e-). So this is an oxidation reaction; it removes electrons. Similarly, this M in the second equation is meant to represent a generic metal. It could be iron, it could be copper, whatever. Just say iron. By an oxidation reaction, it can have its electrons removed, or stripped, and form a positive ion, along with some electrons in the solution. And if you read through your text, this is a table I took directly from it. This is a table of oxidation-reduction reactions for various elements, including some metals. It has silicon dioxide itself; here's iron and chromium. And what's shown here is the standard oxidation potential in volts, and the oxidation-reduction reaction, the actual chemical reaction, in the third column. So for example, the second row here shows the oxidation of silicon to form SiO2. This row here with the iron shows the oxidation of iron to form Fe3+, a positive ion, by removing electrons. In general, a stronger oxidant, something that's better at stripping electrons off an atom, has a more negative oxidation potential. So as you go down in this table, you're getting to more and more negative oxidation potentials. Those at the bottom are very strong oxidants. H2O2, hydrogen peroxide, is a very strong oxidizer. Ozone, O3, is a very strong oxidizer. So the way to interpret this table is that we're going from fairly weak oxidizers at the top to very strong oxidizers at the bottom. In general, if you have a lot of these reactions taking place on the wafer or in the solution at once, the lowest reaction in the table is going to dominate, and it's going to tend to go toward the left. So for example, say I have a solution that contains hydrogen peroxide. It's quite far down in the table. Being a strong oxidizer, its reaction tends to be pushed to the left. And that tends to drive all the reactions above it to the right, basically, just as a rule of thumb. So for example, hydrogen peroxide, which is well below iron, will tend to take electrons from iron, forming Fe3+ ions, ions that are soluble in the solution. So it's an effective way of ionizing iron and removing it from the wafer. At the same time as this is happening, the silicon itself will be oxidized by the H2O2, by the peroxide. So we're forming these metallic ions on the wafer that can then dissolve. And at the same time, we're growing a very thin layer of silicon dioxide. And that silicon dioxide gets stripped in the HF dip of the next step. So that gives you some idea of the chemistry. Again, your book has a reference to chemistry texts if you want to read more about what's involved in some of this cleaning technology. Let's go on to page number 5. Now, this is some actual laboratory data on slide 5. It's taken from a wafer mapping tool made by a company called KLA-Tencor. It's a face-on picture of a wafer, a six-inch wafer. Here's a flat on top. It didn't reproduce very well, I'm sorry. So this is the flat of the wafer, and this is the six-inch diameter edge of the wafer. And this is a machine that takes a laser and scans it across the surface of the wafer. And it looks for spurious reflections. Every time you hit a particle, you no longer have specular reflection. And it notes that. And it says, aha, there must be a particle, or what they call a light point defect, LPD. And this is to prevent having to have people sit in the clean room and literally count the particles on a wafer. That would be too tedious and inaccurate. So it's all automated these days. And the computer even plots for you the points on the wafer. These little black points are places where it found light point defects, where it found nonspecular reflection, something that looks as if there's dirt or some kind of macroscopic little particle on the wafer. And in fact, it not only does that, it actually tells you roughly the size of the particle. So if you look at these bins, there's bin 1 through 8 here. It bins them for you, and these are in microns. I apologize, that wasn't written down, if you want to put it in your notes. So bin 1 is any particle that is between something like 0.2 and 0.3 microns. In that size range, it found 16 particles. In fact, the system only found 21 on the entire wafer. So there aren't very many particles on this wafer. It's pretty clean. 16 of them are in this range, reasonably small, 0.2 to 0.3 microns. I don't know exactly where they are on here, but these 21 points are the particles shown on the right. And it found five of them in the range of 0.3 to 0.4 microns. And it found zero in all the other bins: bin 3, which is 0.4 to 0.5, all the way up to bin 8, which is very large, 1 to 7 microns. So it didn't find any particles that large. So this is a pretty darn clean wafer. This actually happens to be a wafer map from a wafer that's fresh from a box. So this is a prime grade wafer taken right out of the wafer manufacturer's box. It didn't spend more than a few seconds in the clean room before it got inserted into this machine.
And it counted the particles on the wafer. So that just gives you an idea of how sophisticated these systems can be these days. And they're making ones that can go to shorter and shorter wavelengths. So that's a way of mapping particles. Again, particles are only one thing we're trying to remove, though. Remember, we talked about how they're just randomly distributed. A particle is a macroscopic object. It has some diameter or size, 0.2 to 0.3 microns. There may also be, at the atomic scale, atoms of iron, atoms of copper distributed across the surface. They're not in particulate form. They're just atomic-level contaminants. Those are the things we need to remove as well. In fact, those are more of a problem than these particles. The particles are fairly localized, and they don't hurt you for the purpose of doing a lot of experiments. Let's look at the next slide, though, slide number 6. Here's another example of a wafer map. This is actually the same wafer, ironically. I'm just trying to give you a little bit of practical education here. This is after an RCA clean. But it was a non-optimal process. Obviously, it was non-optimal, because it looks like there are a lot more dots on the wafer now than when it was fresh out of the box. And this can happen if you're in a clean room that's not that clean, or if you have particles in your DI water. The last step is a DI water rinse. Or it can happen if you put it through a dryer, which heats up the wafer and spins it around. If that spin dryer is very dirty, it can throw particles across your whole wafer at the very end. So just to make sure we understand the RCA clean: there's chemistry behind it, but there are also a lot of practical things that have to be done properly. As it turns out, instead of 21 particles, this wafer now has about 1,012, quite a few, most of them added in a very small size range. You'll notice they're quite small particles. Most of these are below 0.4 microns or so in diameter. Small particles are perhaps the most difficult things of all to remove. At the atomic level, we can remove copper, iron, and sodium chemically. But the particles have a certain mass to them, and they also tend to get stuck. They can be charged, so they're electrostatically attracted to the wafer. So it's a little bit trickier to get particles off, and people are always working on new particle reduction technologies. This is some other experimental evidence, on slide 7, just to show you that the RCA clean does work and that it can work quite effectively. This is not tracking particles. What it's looking at is how well the clean can remove certain mobile ions and metal contaminants. This is a picture of a wafer and a measurement technique called TXRF, total reflection X-ray fluorescence; next lecture, we'll talk about it. It's a metrology technique that can scan across the wafer, and in one-centimeter spots, it can measure by X-ray fluorescence the areal density of certain contaminants on the wafer. And these five points, center, top, right, left, and bottom, were taken. They correspond to these xy coordinates here. So TXRF was performed at five points on the wafer. And these elements in the left column here are the ones that were measured, sulfur through bromine. And there are two different types of wafers that were measured. Here on the right in red, these are wafers that were subjected to a fairly dirty process called chemical mechanical polishing.
CMP actually tends to be part of the back end. But it can also be used in the front end, in the shallow trench isolation, STI, process. CMP uses a slurry of particles and all kinds of various solutions, one of which contains a lot of potassium and perhaps even some calcium. So it's a fairly dirty process. After CMP, you really have to get the wafer clean. So this was a test: on the right in red, CMP plus a dilute sulfuric clean only, without the RCA clean. And you can see, look at the amount of calcium here, reasonably high quantities. These are all measured in units of 10 to the 10th atoms per square centimeter. So imagine going into a square centimeter and counting the atoms; you'd find roughly 10 to the 10th of them. Well, actually, at this position, you'd find 17 times 10 to the 10th, or roughly 1.7 times 10 to the 11th per square centimeter, sitting on that surface. So it's a way of quantifying that. So here it's in the tens to hundreds of these units on the right-hand side, for the clean that used sulfuric only. On the other hand, when the sulfuric was followed by an RCA clean, all the calcium went down below the detection limit of the measurement. The measurement technique can't measure below about 2 times 10 to the 10th atoms per square centimeter. If you compare these numbers to what's required in the ITRS roadmap, and I think some of your homework was about what's required in the ITRS roadmap, you know that the right-hand side, 17, 50, a hundred and some times 10 to the 10th, is well above what's allowed on a wafer for the starting material. So we really need to get this down into this range, and the RCA clean does that here for calcium. Another example is iron. It's not quite as obvious, but on the right-hand side, there's a little bit more iron measured. Again, it varies from point to point because there may be some particles on this wafer, and it's picking up particles, perhaps spuriously, which contain iron. This particular spot on the wafer looks very dirty. It's got a high count, about 12 times 10 to the 10th. And zinc comes down uniformly. Look at the amount of zinc that's introduced by the CMP process and not removed by sulfuric acid. It's not until the wafer goes through the RCA clean that you get that down below the background. So the RCA clean can be very effective in removing contaminants, even ones that come from a very dirty, almost sandpaper-like process, the CMP slurry. So it's certainly required.
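To put those TXRF units in perspective, here is a hedged one-liner. The Si(100) surface density of roughly 6.8 times 10 to the 14th atoms per square centimeter is an assumed textbook number, not from the lecture.

```python
# Even the dirty-looking readings above are tiny fractions of a monolayer,
# and still far too dirty for a step like gate oxidation.
si_100_surface = 6.8e14   # atoms/cm^2, approximate Si(100) areal density
ca_reading = 17e10        # the ~17 x 10^10 Ca/cm^2 point quoted above
print(f"{ca_reading / si_100_surface:.1e} of a monolayer")  # ~2.5e-4
```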
So let's go on to page number 8. The RCA clean is actually very old-fashioned. It's been around for something like 30 years, a long time. And since its invention, a lot of people have tried to improve upon it. And there are some improvements; a lot of advanced cleaning processes have been developed over the years. In fact, this is one that I took from your text, proposed by Professor Ohmi in Japan. His goals were to get a process that would operate at lower temperature, at room temperature, that would not remove so much silicon, and that also wouldn't use so many chemicals. This is an amazing thing, but there's a real concern about environmental factors. There's a lot of research today trying to improve the environmental friendliness of silicon IC processing, because the chemicals I just mentioned, all of that concentrated sulfuric acid and hydrochloric acid, have to go down the drain, be diluted somehow, and be treated before they go out into our wastewater stream. So people are trying to minimize the amount of these chemicals that are dumped into the environment by inventing new cleaning methods that are more environmentally friendly. He feels this one is more friendly, but it relies on a lot of similar principles. Notice the first step is to put the wafer in ozonized water, water that has ozone in it. If we go back a couple of slides, to slide number 4, look at ozone, O3. It's the strongest oxidizer on the list, even stronger than peroxide. So he creates ozone and puts it in the water: a very strong oxidizer to strip organics like photoresist and even some metals. And these steps are all at room temperature, which is nice, because when you boil acids, they tend to vaporize and go up into the atmosphere, and that creates a lot of atmospheric pollution. So not having to boil is nice. The next step is a little bit of HF, peroxide, and water with a surfactant. A surfactant helps remove particles from the surface. And he puts it in a megasonic bath. This is a very high-frequency vibrational technique that actually tries to vibrate the particles off. So he adds a soap-like substance, a surfactant of very high purity, to remove particles and more metals. He then goes back to the strongly oxidizing ozonized water to strip any chemicals that adhere to the wafer, and then does an HF dip to get rid of the chemical oxide and produce a passive surface. So there are a lot of variations on the RCA theme. I've included here one reference from 1999, from Marc Heyns, that I think is a nice reference. It's a little old now, but it covers alternate cleaning processes and the efficiency of cleaning processes with respect to how well they clean the surface prior to gate oxidation. Gate oxidation is one of the most sensitive high-temperature processes you have to do. We don't want to incorporate any mobile ions, or iron, or anything else in our gate oxide. So it's an interesting article if you want to know more details about surface cleaning. So that's basically cleaning. Now I'm going to talk about the last topic in contamination control, which is gettering. But before we go to gettering, I've just been indicating that certain elements are bad. Let's say a few words about the elements that create what we call deep levels in silicon. This is a chart that I took from Simon Sze's famous book on the physics of semiconductor devices; I took it right out of his book. What it shows is the silicon bandgap: at the top, the conduction band; the straight line at the bottom, the valence band. And the dashed line is the mid-gap point, or the gap center. And what he has is a series of elements, and when you put them into silicon, what kind of states do they create in the bandgap? There are some shallow levels here. For instance, if we put boron in silicon, we all know that's a p-type dopant. It's very shallow. It lies very close to the valence band. And it can accept electrons, create holes, and dope the silicon p-type. So boron is not too much of a problem.
But the things that I have put a box around here, for instance, zinc and gold, have deep levels very close to the middle of the band gap here, right near the dashed line. Copper as well has a mid-gap level. Iron has a level right next to mid-gap. It turns out when you have this deep level, this electronic state that's close to mid-gap, it creates a center where a lot of generation and recombination can happen. So it can trap carriers and reduce carrier lifetime. So the worst types of elements, the bad actors, are the ones that have deep levels right near mid-gap. And you see these are some of the things that people commonly talk about in clean rooms as things you need to avoid. For example, copper, right here, it's got a number of deep levels, one at mid-gap; iron, zinc, and gold. So deep levels are very undesirable. You should take this summary chart with a grain of salt, though. It gives you a rough idea. Don't take it too literally. It's representative, however. If you want to look at it a different way, move to the next slide. I took this chart-- this is actually a colored version of the periodic chart that I took from your textbook. And it characterizes the elements according to their position in the periodic table. And here in column 4 is where silicon is, of course. That's our semiconductor. The shallow acceptors, which we need to add in to make the semiconductor p-type-- typically boron-- are shown here in column 3. The shallow donors are in column 5. Of course, they have one extra valence electron compared to silicon. So those are fine elements. The problem elements, as you notice, tend to be the transition metals. These form the deep levels, these Shockley-Read-Hall recombination centers. And a lot of them are in the transition elements. The other problems are shown over here on the left, the alkali ions: lithium, sodium, potassium, and calcium is also included in that category. These are a problem not so much because they create deep levels in the silicon, but because they impact the quality, the electrical quality, of the gate oxide, its ability to act as a good insulator. Their presence can cause shifts in the threshold voltage, and they tend to be mobile in oxide under bias. So that's a problem. So we certainly want to get rid of these. So the idea here in gettering, which is a process we're going to talk about now, is we're going to take all these unwanted elements, the transition metals, some of the heavy metals, the alkali ions, and we want to collect them in a region of the chip. Either we want to avoid introducing them, which we'll try to do, or if we can't totally, we're going to collect them in a region of the chip where they won't be harmful. In order to do gettering-- and gettering is most effective for the transition metals-- we need to take the elements wherever they are during the process and make them mobile. So they have to get away from their position near the device and be attracted to another part of the wafer. So they have to be made mobile. They have to then mobilize and diffuse to the trapping site. And then they have to stick there. So there are three processes that you have to do in order to getter impurities. So let's go on to slide number 11. And this is a schematic illustration and discussion of the methods of gettering for alkali ions.
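Before going on, here is a minimal numeric sketch of why those mid-gap levels are the worst lifetime killers. It is not from the lecture; the carrier densities and lifetimes below are illustrative assumptions, and the formula is the standard Shockley-Read-Hall recombination rate.

```python
import math

# Minimal Shockley-Read-Hall (SRH) sketch: recombination rate U vs trap depth.
# All numbers are illustrative assumptions (room-temperature silicon,
# equal electron/hole capture lifetimes, equal excess carrier densities).
kT = 0.0259            # eV at ~300 K
ni = 1.0e10            # cm^-3, intrinsic carrier concentration of Si
tau_n = tau_p = 1e-6   # s, assumed capture lifetimes
n = p = 1.0e13         # cm^-3, carrier densities, e.g. under illumination

for Et_minus_Ei in [0.0, 0.2, 0.4]:         # trap level relative to mid-gap, eV
    n1 = ni * math.exp(+Et_minus_Ei / kT)   # SRH shorthand concentrations
    p1 = ni * math.exp(-Et_minus_Ei / kT)
    U = (n * p - ni**2) / (tau_p * (n + n1) + tau_n * (p + p1))
    print(f"Et - Ei = {Et_minus_Ei:.1f} eV -> U = {U:.1e} cm^-3 s^-1")
# U drops by orders of magnitude as the trap moves away from mid-gap (and
# symmetrically for levels below mid-gap), which is why the boxed elements
# with levels near the gap center are the worst lifetime killers.
```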
And this is a cross-section, meant to be a cross-section of your wafer starting here at the bottom in red, the back side of the wafer. And up at top is the front side of the wafer. There are some devices here shown. These little regions are supposed to be p-n junctions, perhaps. And in pink at the very top, someone has put on the wafer a layer of PSG, also called phosphosilicate glass. It's actually an SiO2 layer which has a lot of phosphorus in it, up to 5%. So it's an alloy. This is an old-fashioned technique. People used to use it because alkali ions tend to be trapped in that glass. It's not so much in use anymore because it tends to absorb water later on in the process. And if you are using aluminum metallization, it can cause corrosion. So the approach to alkali ions these days, as opposed to trying to getter them, is a little different. People just try not to get them in during the process. And then once the chip is fabbed, they don't want them to get in after fabrication. They put on a protective layer. So the alternative to putting PSG in here is to put silicon nitride. It's a relatively tough layer. It doesn't scratch very easily. And it protects the chip from alkali ion contamination after processing. So once the chip is finished, there's a scratch mask of silicon nitride put down. And then alkali ions that would come onto the chip surface later, after fabrication, can't get through it. Now, this isn't really so much gettering anymore. It's more like preventing it from happening. And this assumes, of course, that the processing steps are free of alkali ion contamination. That's why it's so important in processing, if we go back to this periodic chart, that we try to keep people out of the lab, because all of us are full of sodium and potassium. Without sodium and potassium, we'd all die immediately. Our hearts would stop beating. So we're just exuding sodium, potassium, calcium-- our bones, calcium, everything. So people are one of the worst things in the lab. And that's why you would never touch a wafer or do anything where anything from your body can get onto the wafer, because we have a lot of alkali ions in us. So trace metals are a little different. There are some pretty effective ways of gettering trace metals. In fact, there are two methods. They're both shown here schematically on slide 12. The first one is called extrinsic gettering. What does extrinsic mean? Well, extra. You externally do something to the wafer. Usually, on the back side, you put a layer of something which is created to trap metal ions down here. So it's usually extrinsic to the wafer. That is, it's extra. It's been added on. The second way is to do what they call intrinsic gettering. And that's to intentionally form, in the center of the wafer, in the middle region here that's shaded, a region full of oxygen precipitates. And these precipitates create little traps where the metal atoms can diffuse to and get stuck. And they're within the bulk of the wafer. Generally, for most ICs, the bulk of the wafer is not part of the active region. It's just a handle holding the thing. So it's OK to have the metals there. Now, of course, if you're making a power device, where the currents flow from the top of the wafer to the bottom of the wafer, this would be a problem. But for most microprocessors and memory, this would be effective. Intrinsic gettering is more popular these days than extrinsic.
For a few reasons. It has a little better control. The gettering region here is actually closer to the devices. It can be within 10 or 20 microns. So it's not so hard to get the bad elements out of the devices and into the gettering region. And the other thing about it is that these SiO2 precipitates tend to be thermally stable once they're formed. So it usually works throughout the entire process. That's not always the case with backside gettering. Let's go on to slide number 13 and think about what we're talking about: we're taking impurities out of the device regions during the processing. And we somehow have to get them to diffuse to these trap sites. So the first thing we need to know is, how rapidly at a given temperature are these elements going to diffuse? If they don't diffuse fast enough, there's no way you're ever going to get them to the traps. So there's some good news and some bad news. The good news is that most metals diffuse as interstitials, so in the interstitial spaces in silicon, very rapidly, which means you have a chance of grabbing them, of getting them to go from the device region to the back of the wafer or into the bulk of the wafer. And in fact, here are some numbers for you quantitatively on this plot. The left axis shows the diffusivity. And we measure that in units of a length squared per unit time, centimeters squared per second. We'll talk about the process of diffusion later in the course. But the bigger this number, let's put it that way, the faster the elements will move at a given temperature throughout the wafer. And this is what's called an Arrhenius plot. So it's plotted versus 1,000 over temperature on the bottom axis. If you want to read temperatures in actual units of centigrade, you can read them off the top axis, from 800 all the way up to 1,200. And the different curves are for different elements. And all the metals here are indicated in red. So here are some common metals, things that we mentioned as being very bad, lifetime killers. Here's copper. Look at copper: almost irrespective of the temperature in this range, copper is zipping through the wafer faster than any of these others. Here's gold, Au, in its interstitial form. It diffuses quite rapidly. Gold here, when it's substitutional on the lattice, so the gold atom ends up being substitutional, doesn't diffuse quite so fast. It's orders of magnitude down. These are the diffusion coefficients in blue here for the shallow impurities, like arsenic, boron, phosphorus. They diffuse much slower. And this dark, I guess, stippled region that's colored in tan is roughly the region where people believe the silicon interstitial diffusivity lies. Again, no one's been able to actually measure it exactly, because you can't see a silicon interstitial. So you can't profile how rapidly it diffuses. But there are indirect ways of inferring it. So somewhere in this band, this brown band, is where the silicon interstitial diffusivity lies. It's not bad. It's not as fast as the fastest metals. It's not as fast as copper or interstitial gold or iron. But it's in the range of some of the metals. Just to give you an example here, this bottom bullet shows, let's take copper at 900 degrees. You can read off this chart that the diffusivity, D, is about 10 to the minus 4 centimeters squared per second. We'll learn that a rough estimate of how far a profile diffuses can be the square root of D times the time.
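As a quick numeric preview of that square root of D times t rule of thumb, here is a two-line sketch; it's not from the lecture notes, and the diffusivity is just the copper value read off this Arrhenius plot.

```python
import math

# Rough diffusion-length estimate, L ~ sqrt(D*t), for interstitial copper.
D = 1e-4   # cm^2/s, copper diffusivity at ~900 C, read off the plot
t = 60.0   # s, a one-minute furnace step
L = math.sqrt(D * t)
print(f"sqrt(D*t) = {L:.4f} cm = {L * 1e4:.0f} microns")
# -> about 775 microns, i.e. more than a typical wafer thickness
```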
So if I put this wafer that accidentally has some copper on the surface in the furnace at 900 degrees for one minute, the square root of Dt is 780 microns. And the wafer is only typically 600 microns thick. So basically, that copper can go right through the wafer and be anywhere in the wafer. That's bad because, if you accidentally get copper on the back of the wafer-- let's say where you say, oh, it won't hurt anything-- and you put it in the furnace, well, it sure will. It'll be in the front real quickly. It's good if you're trying to getter, because, if the copper happens to be in your device region, you can try to get it to the back of the wafer. And the key is to hold it there. So it's both good and bad. But metals are very fast diffusers. And copper is an extreme example. So let's go on to slide 14, which talks about metal gettering a little bit more. This plot has temperature on the vertical axis, and the horizontal axis is atoms per cubic centimeter. It represents the solubilities of the fast-diffusing metals as a function of temperature. So if you look at any point on this curve, if you have a concentration higher than that at that temperature, then the metal will tend to precipitate out, will not be in solution. So metals preferentially like to reside in sites on the silicon lattice where there are imperfections. So this is key. We can use this idea. Why might this be? Well, the metals' atomic size is different from silicon's, so they don't fit into the silicon lattice. The other thing is that if you have a fault or a disordered region in the crystal, it might be able to accommodate this size difference and trap the metal atom there. And it turns out that dislocations and stacking faults-- these are crystal imperfections that can exist in the wafer-- are well known as trapping sites, sites where copper, gold, and iron tend to accumulate and precipitate. So if you have dislocations, depending on where they are, that could be a good thing. You could try to create dislocations. And they tend to be decorated, so to speak, with copper, gold, and iron. So the trick, the name of the game, is to form these kinds of defects intentionally, not in your device region, because then you'd have copper, and gold, and iron there, but away from the active region. And so the idea of metal gettering is how to form these types of things. So on slide 16, we talk a little bit about how extrinsic gettering can take place. Usually, in the extrinsic case, people try to form these sites on the back side of the wafer. And there are a number of ways of doing it, some very crude: grinding and sandpaper. Sometimes you'll look at the back of a wafer, especially an older wafer, and there's this ground pattern-- almost looks like it was put in a lathe-- that goes around on the back side. That was intentionally ground into the back of the wafer to make a lot of these gettering sites. They used to use sandpaper abrasion. That's not so popular anymore. People these days use cleaner methods. Ion implantation can be used to damage the back of the wafer, or a deposition of a polycrystalline film. It's not that unusual to find polysilicon on the back of the wafer, or a region very highly doped with phosphorus on the back of the wafer. These are all regions where metals, if they diffuse back there, will get trapped and stick.
So the whole idea is, let's make extended defects that are stable throughout the high-temperature processing. The problem is, if these extended defects heal, the metals will then be released and they'll go back to the front of the wafer. So that's extrinsic gettering. So if we go on to page or slide number 16, we have a schematic of this intrinsic gettering, which is a little more common. Remember, last time we talked, we said that oxygen is present in all Czochralski silicon. It comes from the crucible. There's nothing we can do about it. And it can be introduced in this supersaturated form. That is, oxygen can be dissolved in there because the wafer was cooled at a relatively rapid rate. But there's more oxygen there than the crystal wants to hold at that temperature in equilibrium. So if you were to heat that crystal up and give it a long enough time, the oxygen would then tend to precipitate out and form SiO2. So oxygen forms these SiO2 precipitates in the crystal. And the interesting thing is that these precipitates are usually accompanied by some kind of mechanical defect, like a stacking fault, or an extended defect, like dislocations. And that's because of the volume mismatch. These little SiO2 precipitates want to expand. They compress the silicon lattice around them. If they compress it enough, it can pop out these extended defects. So the key point of gettering is to get this precipitation to occur in the bulk of the crystal, throughout the bulk of the wafer, but not in the near surface region. So how are we going to do that? Well, we have to have a near surface region that doesn't have high oxygen, so it can't precipitate. So what we do is we create something called a denuded zone, which is low in oxygen. And there's a couple of methods of doing it. This denuded zone, which is pictured schematically here, typically might be 10 to 20 microns thick. It sits on top of the wafer, the wafer, again, being full of oxygen. So we can do that-- there's a process we can use called epitaxial growth, which we're going to talk about later in this term, where we can grow material on top that's very low in oxygen. Or if you don't want to do epitaxial growth, because it's fairly expensive, you can just take the wafer, and the very first step you can do is go to a very high temperature and cause the oxygen to outdiffuse, out the surface, out the front side of the wafer. So whatever oxygen was in the top 10 microns can go out. Oxygen buried deeper below that can't make it out at this high temperature, because it doesn't have enough time to diffuse that distance. So one of the first steps people use for doing intrinsic gettering is sometimes a high-temperature step to create this denuded zone, or else to grow an epitaxial silicon layer. So let's go to the next slide, slide 17. And the next couple of slides are going to outline the thermal processing. So just by doing these thermal processing steps, you can do intrinsic gettering. And this plot shows schematically the wafer temperature versus time. Again, time isn't really indicated here quantitatively; it's just to give you a rough idea. But the first step is to take the wafer to a very high temperature, say 1,100 degrees, and do the outdiffusion step. So we're going to take that wafer, put it in the furnace, and let the oxygen go out-- not all of it, but whatever is in the near surface region.
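As a rough sanity check on this outdiffusion step, here is a sketch of the denuded-zone depth as the square root of D times t. The oxygen diffusivity parameters below (D0 of about 0.13 cm squared per second, activation energy of about 2.53 eV) are commonly quoted literature values for interstitial oxygen in silicon, not numbers from this lecture, so treat the result as order-of-magnitude only.

```python
import math

# Order-of-magnitude denuded-zone depth from oxygen outdiffusion.
# Assumed literature values for interstitial oxygen in silicon:
# D = D0 * exp(-Ea / kT), with D0 ~ 0.13 cm^2/s and Ea ~ 2.53 eV.
k = 8.617e-5            # eV/K, Boltzmann constant
D0, Ea = 0.13, 2.53     # cm^2/s, eV

def D_oxygen(T_celsius):
    return D0 * math.exp(-Ea / (k * (T_celsius + 273.15)))

T_c, hours = 1100, 4.0  # a representative high-temperature anneal (assumed time)
depth_cm = math.sqrt(D_oxygen(T_c) * hours * 3600)
print(f"D at {T_c} C = {D_oxygen(T_c):.1e} cm^2/s")
print(f"sqrt(D*t) after {hours} h = {depth_cm * 1e4:.0f} microns")
# -> roughly 10 microns, consistent with the 10-20 micron denuded zone
```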
And this expression in the middle of the slide on slide 17 shows the diffusivity as a function of temperature. It gives you a rough idea. And the nice thing is you don't have to remove all the oxygen. You just have to get it below a certain level. So for instance, you need to get it from about 20 parts per million down to less than 10. If it gets low enough, the precipitation won't take place because, again, it'll be below the solubility limit. So you don't have to worry about precipitation. So you're just trying to reduce it down. And usually, 1,100 to 1,200 is sufficient to create a denuded zone. And that's on the order of 10 to 20 microns deep. It has a lower oxygen concentration, so it won't precipitate. So the first step is a very high-temperature step. The next step in the process, shown on slide 18, is called nucleation. So here we go from this high temperature, where we got rid of the oxygen in the surface region. And now we want to nucleate the precipitates in the bulk. So we go down to a lower temperature, say 700 degrees. And we want to cause very, very small precipitates to form. And the optimum temperature for this is somewhere around 700. And this is discussed in chapter 3 of your text. It's optimum because you need these nuclei to grow to a minimum critical size to prevent them from shrinking later when you start to ramp up the wafer temperature. So the critical size is about two nanometers-- say, one to three nanometers in diameter-- for this precipitate of SiO2. And you typically shoot for a density of about 10 to the 11th per cubic centimeter. So there aren't too many of these, but you definitely have a reasonable density of them. So we create these nuclei. And then we grow them to be a little bigger so they become stable. So once you've done this nucleation step, you then take it to a higher temperature, say 900 degrees. And you grow these nuclei to make them a little bit larger. So you're not creating more. You're just making them larger. And you typically want a minimum size, say, in the range of 50 to 100 nanometers. You have to be careful how you ramp this up so you don't cause them all to shrink. The basic idea is that you've then enabled yourself to create these SiO2 precipitates in the bulk of the wafer. So we go on to the next slide, slide 20. This I took from your text. These are some actual cross-section scanning electron micrographs. And you're actually looking at the cross-section of a full wafer here in each picture. Each picture is a full wafer. So this top here at the very top is the top of the wafer. This bottom surface here at the bottom of one little rectangular region is the bottom of the wafer. And you'll notice what you see: all these little black dots are the SiO2 precipitates. And there's a bunch of snapshots shown here, starting from the top. And there are two different directions. If you go down vertically, we're varying the nucleation time. So remember, the nucleation time is the time that we nucleate these. So starting at the top here in the upper left and moving down the first column, we're increasing the pre-anneal time. And what you're seeing is a higher and higher density of these precipitates as I move from the upper left down to the lower left. And then as I move right on this chart, I'm increasing the growth time. That was called the precipitation time, or the amount of time we're growing these. Now these are grown at about 1,000 degrees.
And you can see them growing here. And so by the time I get to this upper right, we have a certain density of these. But in general, if I move to longer nucleation times, so down here, and longer growth times, so I'm moving to the right, I get the highest density and a reasonable size of these precipitates in the wafer bulk. And you can see all of them here in the bottom right corner. So these are precipitates. And look at the very surface of the wafer, the top 10 microns. You don't see any. That's the denuded zone. And that denuded zone was formed, of course, at the top and bottom. The denuded zone was created by doing an initial 1,000 degree C, 10-hour anneal before any of this pre-annealing. So during that initial annealing, the oxygen diffused out at the top and bottom of the wafer over about a 10 micron distance. But it didn't diffuse out in here. And that's why, when you go through this processing step, you can create these precipitates. And these are the precipitates that we want there, so that metal ions can then diffuse from the devices and get stuck, keeping them away from your devices. So that's the pictorial schematic of the process. So then let's go on to slide number 21 and now talk about the way people model this process. And there are typically three steps people need to model. The first step here, shown as number one-- I'm showing a specific case for the gettering of gold, because gold is a very common element that needs to be gettered. And it has some unique properties. Step number 1: we need to mobilize the atom. So remember, we talked about gold when it was substitutional. It diffused very slowly. If it's interstitial, it diffuses fast. So to mobilize it, I need to somehow get it off substitutional sites and make it interstitial. The second step is shown schematically, step number 2 here, which is that the gold has to actually diffuse away from the device region down to the trapping site, wherever it might be. And the third is, of course, the trapping mechanism, number 3. So people have tried to model these different aspects-- the making mobile, the diffusion, the trapping-- with mixed success. The models are actually not universally accepted. There is no really good program you can sit there and run that describes gettering very quantitatively, in the way that you can for oxidation, as we'll see in this course. The models give you a physical, mechanistic picture of how the gettering operates. But they're not very quantitative. So let's look at those three steps. The first step was to make the atoms mobile. And remember, gold and a number of atoms can actually exist either on the lattice or in interstitial form. The diffusivity is much higher when it's interstitial. So to make them mobile really means to get them off the lattice into the interstitial spaces where they can diffuse. Just to show you some examples here of common metals and the different solubilities they have in interstitial or substitutional form: copper and nickel have higher solubilities in interstitial form. So they tend to exist already as interstitials. So mobilizing them is easy. They're already in the interstitial space. Gold and platinum actually have higher solubilities in substitutional form. But they can diffuse very rapidly once they become interstitial. So these have to be mobilized. Titanium and moly, molybdenum, are actually problems.
They're primarily substitutional, but they have relatively slow diffusion rates, whether they're substitutional or interstitial. So Ti and moly are going to be hard to getter, because their diffusion rates are not very large. So to get them all the way to the back of the wafer is going to take a long time. It's just not very effective. So to getter Ti and moly and these intermediate or slow-diffusing metals, we prefer to use intrinsic gettering. So instead of going 500 microns, all the metal has to diffuse through is the denuded zone, about 10 microns. So that's another reason why intrinsic gettering is more popular. You don't have to diffuse as far. So let's go on to slide 23. And we'll just do an example of how people try to model the gettering of gold in silicon. This also applies to platinum. So the first thing: we said that substitutional gold can react with a silicon interstitial-- this Si sub i is meant to represent a silicon interstitial-- and form an interstitial gold. So an interstitial silicon atom can come along, take the gold which is on the silicon lattice, knock it off, and take its place. And that's called the kickout mechanism. So that's one way of getting the gold off of a lattice site. There are other mechanisms: an interstitial gold can react with a vacancy and form substitutional gold. So basically, the idea, if you look at this first reaction, this first equation: any process that creates excess interstitials should be helpful in gettering. If I have a lot of excess silicon interstitials around, whenever there's a gold, one can just knock it off the lattice and get it moving. So any process that creates excess interstitials should be helpful in this gettering, this mobilization process. Processes that create excess vacancies will tend to hinder gettering because, if you have a lot of excess vacancies, it's going to drive the second reaction back to the left and cause gold to go back to substitutional. So interestingly, just from an empirical point of view, a lot of the things people do for gettering happen to also be known to create excess interstitials. So for example, backside phosphorus diffusion-- when we talk about phosphorus diffusion, we'll see how it tends to inject a lot of excess interstitials. Ion implantation, in which we shoot ions into the crystal, tends to knock silicon off lattice sites and create a lot of interstitials. And internal gettering, or intrinsic gettering, involves oxidizing silicon-- and as we will find out, creating SiO2 precipitates tends to punch out a lot of excess silicon interstitials. So qualitatively, we can say that the things we know work for gettering also tend to create a lot of interstitials. And that would be consistent with this type of model, the fact that some of the metals need to be able to be mobilized. So then go on to slide 24. And the second step, once it's mobile, is to get the metal to diffuse to the gettering site in this interstitial form. And so it has to diffuse to the back side or into the intrinsic gettering region. Let's say we're doing diffusion to the back side. This plot on slide number 24 is a plot of the concentration of that gold as a function of depth in the wafer, where the 0,0 point, or this point right here, is supposed to represent the wafer backside. So that's the backside of the wafer. As you move to the right, you're going through the thickness of the wafer. So you're going into the center of the wafer.
And if I moved all the way over, I'd be at the front side of the wafer. So imagine the wafer is standing up sideways. And this is a plot of different time contours of what the gold concentration would look like. Initially, at time t equals 0, it's a flat line. It's just at 10 to the 15th. So let's say, somehow, the wafer got 10 to the 15th atoms per cubic centimeter of gold in it, uniformly distributed. Now you heat it up and you increase the amount of time. And as I heat it, basically, the gold is diffusing out of the wafer. It's lowering here in the center. And at the edge, it's diffusing down. So this is what we expect the gold profiles to look like over time, just in a rough sense. If we go on to slide 25, there's a plot which shows, as a function of time at 1,000 degrees, what happens to the gold concentration profiles experimentally. And this represents the observed profiles. They're actually not quite like what was shown in slide 24. If I go back here, you see in slide 24 it's a much more gradual drop-off on this plot. In slide 25, in fact, it's very abrupt-- much more sudden than predicted by this simple model. Well, it turns out that people explain the sudden drop-off by the fact that the silicon interstitial diffusion is fast, but it's actually slower than the metal atom diffusion. So the rate-limiting process in this step is for the silicon interstitials to diffuse in from the backside into the wafer and mobilize the gold. So as soon as the silicon interstitials arrive at a particular location, the gold is converted to interstitial form. And then it's diffusing so fast, it can rapidly diffuse out of the wafer. So what's rate limiting here is the indiffusion of the silicon interstitials. So this sharp drop-off point here: you see at 12 minutes it's dropping off here, then at 20, 25, 30 minutes. That sharp drop-off point corresponds to the depth to which people believe the silicon interstitials must have diffused in a given time. In fact, in the old days, people used this as a marker for how to measure or estimate silicon interstitial diffusion. They couldn't see the interstitials, but they hypothesized they were having this effect on the gold. So they would model the gold diffusion and the silicon interstitial indiffusion that's associated with that. And so that was one method of estimating silicon interstitial diffusion. So anyway, that's just a practical issue with the way gold diffuses through wafers. And in fact, slide 26 is just to remind you of the relative magnitude of the gold diffusion-- look at the gold interstitial diffusivity up here-- compared to the diffusion of silicon self-interstitials. If you have an element like titanium, which diffuses slower than silicon interstitials, it's going to have a more classical type of profile, as shown here on the lower right, for titanium outdiffusing or diffusing to the back of the wafer. So we've made them mobile. We've got them to diffuse to the site. The final thing is to trap them. It doesn't do any good to make them diffuse there and then have them come right back out again. So they have to be trapped. And this tends to be very, very empirical. But people have experimentally observed that certain types of backside damage trap metal atoms. If you measure the concentration in that backside region after you've damaged it, it has a higher concentration of metals in it. It's hard to model this.
But one approach is to model it using the mathematics of segregation: to talk about the gold as it exists in the bulk and the gold as it exists in the trap site, and compare those two concentrations. Remember, in chapter 3, we talked about segregation-- we described the doping behavior as it segregated between the liquid phase and the solid phase during Czochralski growth. Well, the same concept really applies here. We have the segregation of this gold, or this metal atom, between the silicon bulk and the backside trapped region. So we can use similar mathematics. And that's what's shown on slide 28. This is a relatively simple way of looking at a segregation model for metal atom trapping during gettering. And what we do is we write down the solubility of gold in two states. The upper equation is the solubility of gold in bulk silicon. So this is not in the gettered region, just existing in the silicon. And it has some exponential dependence on an activation energy, Ea1 over kT. And the second equation down is the solubility of gold in the gettered site, G. So it's got this G next to it in the gettered region. And here Ng, N sub g, is the density of gettering sites. N sub silicon here is the density of silicon atomic sites. And we define the segregation coefficient, K0, just as we did before in liquid-solid segregation, to be the ratio of the gold concentration in the gettered region divided by that in the silicon. So that's our segregation coefficient. And we can simply plug in these exponentials and ratio them, as shown in the equation. So if Ea1, the activation energy a1, minus Ea2 is a positive quantity, then K0 is going to decrease as temperature increases. So this tells us something about where the metal wants to reside. In fact, empirically for phosphorus backside gettering, it's been found that the segregation coefficient can be written like this, a simple equation, where it has a positive activation energy. So what this says is we need to increase the number of gettering sites, Ng, which, of course, just means keep the amount of phosphorus in the backside region high. And we keep the temperature low-- trap it there at a relatively low temperature. So we don't want to go too high if we want the highest amount of segregation. So again, it's fairly qualitative. But it gives some estimate of how the gettering trapping works as a function of temperature. That's one example of a model. The second example is shown on page 29. And this actually goes back to work that Shockley and Moll did a very long time ago, back in the 1960s. And this work was to understand the enhanced solubility of metals in heavily-doped silicon. It was observed that certain metals are more soluble-- they tend to want to be-- in heavily-doped silicon. They have a higher concentration in heavily-doped silicon than they would in lightly-doped silicon. And this is a model that helps to explain that. So in substitutional form, we know that gold introduces deep levels in the bandgap near the mid-gap position. In fact, there's an acceptor level marked here, and a donor level marked there. So this Au minus level is going to be created whenever the gold can capture a free electron. So if there are a lot of free electrons, we can create this Au minus state, this acceptor level. So I can just write down this simple chemical equation: one gold atom plus an electron goes to an Au minus ion.
Now, if you're familiar with chemistry, you know that for any reaction you can write an equilibrium constant, associated with that reaction, that depends only on temperature. So we write this number, K sub equilibrium. It depends only on temperature. It says that the concentration of the products divided by the product of the concentrations of the reactants has to be a constant. So the right-hand side of the equation-- the concentration of Au minus divided by the gold concentration times the electron concentration-- has to be a constant. It depends only on temperature. So that's an important equilibrium relationship. If we go on to slide number 30, here at the upper left-hand side, I've just repeated that equation. This equation has to hold both when we have intrinsic silicon-- in intrinsic silicon, the electron concentration is just ni-- or when it's extrinsic. In extrinsic silicon, the electron concentration is just n, whatever you've doped it to, the donor density level. So if that's the case, I can write this equation for those two cases. And it has to hold. So basically, I can write down, as it turns out, that the ratio of the gold, Au minus, in N-type material to its concentration in intrinsic material just has to go like n over ni. So then what that means is the solubility of gold as an acceptor is going to be higher in N-type material. So as I raise n over ni-- let's say I'm at 1,000 degrees, ni is 7 times 10 to the 18th, and I create a heavily doped region, 10 to the 21st-- then n over ni is a factor of 100, roughly. So that means the gold solubility is going to be 100 times higher in this heavily doped 10 to the 21st region compared to the intrinsic region of the wafer. So again, this is looking at it from the electrochemical point of view. And it's a little bit hand-wavey. But it's consistent with the fact that when people put heavy n plus diffusion regions on the back of the wafer, like phosphorus, they can very effectively hold metals there, because the metal has a higher solubility, just because of the electrochemistry involved. So on slide 21, there's qualitatively listed a series of explanations of why n plus silicon is a good getter. It's a good trapping area. We just saw the statistics of the acceptor and donor levels versus doping from Shockley's paper, and that is consistent with that. There are also people that have an ion pairing model: a large atom like gold-- gold is very large-- might want to pair with a small atom like phosphorus and form some kind of AuP complex, because this minimizes strain. The other issue is that point defect concentrations are much higher in doped silicon than in intrinsic. Remember, last time we talked about how, as I move the Fermi level around, I can create more interstitials or vacancies. Gold diffuses primarily by an interstitial mechanism. Once it arrives in the getter region, it needs to find a lattice site to become substitutional. So you can imagine an interstitial gold might react with a vacancy in the lattice. And it gets stuck there, becoming substitutional-- because, remember, substitutional gold doesn't diffuse very fast. And n plus silicon, we saw last time, has a much higher population of vacancies than intrinsic silicon. And so that's going to tend to drive this reaction to the right.
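A one-line check of that n over ni enhancement factor, using the numbers from the lecture:

```python
# Au- solubility enhancement in n-type silicon scales as n/ni
# (the Shockley-Moll argument above). Numbers from the lecture:
ni = 7e18   # cm^-3, intrinsic carrier concentration at ~1,000 C
n = 1e21    # cm^-3, electron concentration in a heavily doped n+ region
print(f"solubility enhancement n/ni = {n / ni:.0f}x")
# -> about 140x, i.e. roughly the factor of 100 quoted in the lecture
```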
So once the gold, which diffuses by an interstitial mechanism, gets to the back side, to the n plus, there are plenty of vacancies around to give it a lattice site to get stuck onto. So that's another, again, somewhat qualitative explanation. So that's backside gettering with an n plus region, such as phosphorus, which was the classic method. How about intrinsic gettering? What kind of models do people have for gettering metal atoms near SiO2 precipitates? Well, actually, this tends to be a lot more qualitative. But if you go back to chapter 3, chapter 3 talks about how SiO2 precipitation takes place. And in fact, it involves a net volume expansion. The SiO2, when you form it, tends to expand the lattice locally, wherever it is. And it actually compresses the lattice around it. So people have written this equation, which describes the internal formation of SiO2 from silicon reacting with oxygen interstitials. It looks fairly complex, but each term has a meaning. So here's a silicon lattice site here. A certain number of lattice sites are involved, reacting with oxygen interstitials, O sub i, and forming SiO2. It could be at the surface. It could be internal. But you're forming a small SiO2 region, along with some stress that gets involved. Look at this fairly complex equation; some of the elements of it are identified. Gamma here is the number of interstitials that contribute to the precipitation process per oxygen atom that joins the SiO2 precipitate. So a certain number of silicon lattice sites participate in this reaction. Interestingly, look at the process. It consumes vacancies. So vacancies on the left have to be consumed. And it injects interstitials on the right. So the formation of SiO2-- and we're going to see this in a lot more detail when we talk about oxidation, planar oxidation-- involves the injection of excess interstitials, which is interesting, because excess interstitials, we know, are important for mobilizing certain metal atoms. So that could be one reason why SiO2 precipitation is important. We know that the stress term is generated because the precipitate compresses the crystal around it. This can cause macroscopic defects. And the metal atoms tend to be located around these stacking faults or dislocations. If you want to do it in a quantitative or semi-quantitative manner, you can use the same mathematical segregation model that was used previously. And again, this reaction creates interstitials, which will help free up substitutional gold, making it interstitial and ready for diffusion. So on page 20, or slide 23, I just want to summarize gettering. It's still fairly qualitative, I think you can tell. These gettering methods really haven't changed much in the last 10 to 15 years. There is a better understanding of how it works. But we really still need better models. In the future, intrinsic gettering-- this process of creating the denuded zone, either by epi or whatever-- is going to dominate, because the thermal budgets for IC processing will decrease. And trying to diffuse those atoms all the way to the back of the wafer, especially titanium and moly, which have lower diffusion coefficients, is not very practical. So having the gettering site close is better. Clearly, the amount of oxygen, the oxygen concentration in the wafer, is going to be critical for this whole gettering process.
So we're going to need tighter controls so that we can really control the gettering process, and also to control other things like wafer warpage. We talked earlier about how oxygen actually increases the mechanical strength of silicon. So newer techniques are needed to do gettering at lower temperatures. That's an area for work. And there is really no accurate simulation tool. In this class, for homework, you're going to learn how to use a simulation tool called SUPREM-IV to model processes like diffusion, oxidation, whatever. There is no such tool for gettering. It's still very much hand-waving, qualitative arguments, like we've given today. And I'll finish up by summarizing chapter 4. The last couple of lectures, we talked about particle control, wafer cleaning, and gettering processes. All of these are crucial for successful manufacturing, and even for doing the research that you do here in MTL. There's a three-tiered approach: to keep the air and the water clean; to continuously clean the wafers in these chemicals, using the RCA clean; and finally, to do gettering. Gettering involves releasing the impurities, diffusing them to the trapping site, and getting them trapped. Most metals diffuse relatively fast-- that's both good and bad. There are a lot of means of measuring contamination. And next time, we're going to talk about the physical, electrical means for characterizing wafer contamination. How effective is your gettering? And how contaminated are your wafers? I showed you some data today without explaining TXRF and things like that. We'll talk next time a little bit about how some of those contamination measurement techniques work. So that's all I have for this lecture. Please come up and hand in your homework in the orange folder here, homework number 1. Thanks. [SIDE CONVERSATIONS]
22_Silicides_Device_Contacts_Novel_Gate_Materials.txt

JUDY HOYT: OK, maybe we'll go ahead and get started, at least with the announcements. Yeah, hopefully everybody's recovered from their Thanksgiving feast. And it looks like people are sleeping in today. What I'm showing up here is the schedule to orient us. This is lecture 22. We'll talk about silicides, device contacts, and I've added new material this year on novel gate materials. The first two topics are covered in chapter 11. And then we have one more lecture, which is on strained silicon and silicon germanium growth and processing. And then next week we have two class periods scheduled. There'll be four speakers in each. And those will be the oral reports, the student reports, given on Tuesday and Thursday. And in fact, I have a slightly better version of this schedule, I think, in this Excel file, and this is posted up on the web, by the way. So on December 7th, we have these four speakers. We're going to hear short presentations, about 16 minutes each, from these four students, on everything from high k to diffusion in gallium arsenide. And then on Thursday the ninth, we have these four students scheduled. And I just want to remind people, if you're doing an oral report, you're expected to provide handouts, the same type of handout that I use during lecture. And if you need help in making Xerox copies, contact my assistant. Her contact information is published on the web. Make sure you get to her early enough. Don't give her the job the same day, that morning, but maybe the day before would be good. And also, if you're doing a written report, it is due on Thursday, December 9, in class. If you have any questions about the final project, just please email me or contact me after class. OK, so that's the bookkeeping, to remind people about the final project. Let's go on and start today's lecture. The notes for today's lecture are given in handout 36. And today we want to talk about a couple of areas-- the formation of silicides; how we make the structures and make good electrical contacts to the device; and, as I mentioned, a new module I've added this year on novel gate materials. I've referred to silicides a number of times in the course, but we really haven't gotten a chance to talk about what they are. Now we'll find out. And we started some of that last time. So far, we've talked about how to get these insulating and doped regions inside the device. We've talked about oxidation, implantation, diffusion, thin film deposition, and how we etch structures. But at this point in the device fabrication, the CMOS flow, we need some way to make a good electrical contact. And I put good electrical contact in quotes because it's somewhat nebulous. But we'll see in this lecture what we mean by a good electrical contact to doped regions. And so this lecture is kind of the first link to the backend technology course. The contact itself is still generally considered part of the frontend, because you're contacting the silicon. And then beyond that, everything is in 6.773, which is the backend technology course. This is just a schematic I've shown a couple of different times to point out to you what I mean by the contact. You're familiar with this by now. We have the silicon source and the silicon heavily doped drain. Here's the gate material.
And I need some way of making a contact between this electrical connection here, this metal line, and the silicon itself. And you can see that can consist of several different materials. In this particular picture, these sort of dark regions contacting the silicon are going to be made of silicide. And we'll talk today about how you fabricate those silicides. I also want to talk about local interconnect. Sometimes the contacts themselves are also used for something called local interconnect. And I should point out here, this is an example of a local interconnect, where I've got a contact region that is being used to interconnect, say, from one layer to the next, or locally from one part of the device to a neighboring device, but not across the whole chip. These up here are called global interconnects, because these wires go potentially across the entire chip. But this is a local interconnect. It only extends over a very small distance. In fact, on slide number two of your handout, I've got an example of a local interconnect and what it might consist of. This is just the circuit schematic. If you have taken electrical engineering courses, you know this is a bipolar transistor with a base and an emitter. This little arrow represents the emitter region. And you see in this circuit schematic that the emitter of this transistor is connected by a wire to a resistor, r. Well, this is the circuit schematic, and below is the actual implementation of this circuit in silicon. You recognize that this is an NPN device: this little n region right here is the emitter, this is the P-type base as shown here in red, and this is the N-type collector. And what you see here is that this emitter region has a metal contact to it. And it is connected locally to this P-type tub region, which is, in fact, a resistor. This is a resistor because the current can flow from this metal contact here through the P-type region-- it has a certain sheet resistance, or a certain resistance-- and out this other end. So this is an example of using a very short metal wire-- it might even end up being silicide or something-- locally on the chip to interconnect, say, one device, in this case a transistor, to a neighboring device, a resistor. So that's what we mean by local interconnect. Let's talk a little bit, historically, about how contacts were made. And this is shown on slide 3. In the early days, if this was the drain or the source of a transistor, a MOSFET, people simply used aluminum, which was the metal of choice, directly on silicon to make both the contacts and the interconnect material. So it was convenient. You could contact the silicon with aluminum, and then you could run wires along the chip. And that could interconnect the various devices. The advantages: aluminum has a low resistivity. It's the second lowest of all the metal candidates, copper being the lowest. And it has very good adhesion to silicon and to silicon dioxide. It's a very stable metal, and it makes good electrical contact to silicon, as long as the silicon is heavily doped. Why does it do that? Well, aluminum tends to reduce any native oxide present on silicon. You know, if you take a silicon wafer and you do an HF dip, you'll get rid of any native oxide. But if you let it sit for any period-- a few minutes, an hour or so-- you grow a thin, natural native oxide, SiO2, at room temperature on silicon. It may only be about 10 angstroms thick, but it's still there.
The nice thing about aluminum is that it tends to reduce, or eat up, that oxide on the silicon, and it forms a very thin layer of Al2O3. But this layer is quite thin, and the aluminum itself can diffuse right through it. So this enables you to form a very good electrical contact without having an intervening layer of oxide. Some metals you might put down won't eat through the silicon native oxide, the SiO2. And then it will just be sitting there, and you won't get a good electrical contact. Aluminum has this property that it eats through that little native oxide, so you get a good contact. So it was a convenient and a good material for people to use for many years. Slide 4 just shows you some of the basic properties of interconnect materials, just so you get oriented. Here is aluminum on the top. Again, one of the most popular for many years. The resistivity is listed in the second column-- somewhere around 2.7 to 3 micro ohm centimeters. Again, the resistivity is a property of the material. And the last column shows the melting point. So aluminum is a low melting point material. The only other metal of consequence that has a lower resistivity is copper, which is shown here in the third row. That's about 1.7 to 2 micro ohm centimeters. Copper has a much higher melting point. Copper has other difficulties associated with its integration. Copper, if it gets into silicon, can create a deep level. So it can cause lifetime problems. Copper oxidizes very readily, even at room temperature, so you need to protect it, as you're probably all aware. And copper is more difficult to deposit than aluminum. But copper is the material of choice for today's interconnect. I think I brought a wafer from Intel about halfway through the course, a 12-inch wafer, and showed it to you. And you noticed it was all pretty copper colored. That's because the modern interconnect material is copper. A few places are still using aluminum. Aluminum was historically what was used. Copper was introduced in research in the mid 80s and in production about 10 years past that. On slide 5, I'm showing some basic physics that you may or may not be familiar with. But you need to understand the basics of it in order to understand what we mean by a good electrical contact. Contacts to a semiconductor can be roughly divided into two classes. There's an ohmic contact and a Schottky contact. A Schottky contact is shown here. And if you just look at the current-- this little schematic shows the current that flows through the device as a function of the voltage that you apply-- a Schottky contact is a rectifier. When you apply a large voltage in one direction, the reverse direction, not much current flows. So that's not a very useful way to make a contact to a device. If you apply a small voltage in the other direction, you get a little current, and then you go above a certain threshold voltage, and you get a lot of current. It's basically a diode. And the current transport is by this sort of thermionic or thermal emission over this barrier. So this is not considered desirable for making a good contact. An ohmic contact, on the other hand, or a tunneling contact, occurs when carriers can actually tunnel right through this barrier. The barrier is so thin that you can actually tunnel right through it. The depletion region is very thin. And if you make the doping high right near the metal, you can reduce this depletion layer width. And that enables this tunneling.
And so if you look at the current voltage characteristics-- so current plotted on the y-axis, voltage on the x-axis-- you see it's very symmetrical, and you get a very, very high current for a very small voltage in either direction. So this is considered desirable. This is what we want. We want that ohmic contact. And it tends to happen, particularly with aluminum and most metals, if you make the surface doping very high. A concept that we want to talk about, shown here on slide 6, is called the specific contact resistivity. And we usually use the Greek letter rho sub C. The definition of it is given here. It's defined as the derivative of the voltage-current density characteristic at a metal-semiconductor contact. So it's partial v, the voltage, by partial j, basically. And we usually assume a structure where the current density is uniform across some contact area. Then you can calculate the contact resistance R in ohms. So R, which has units of ohms, comes from the specific contact resistivity by the following equation. The resistance of a given contact in ohms is just the voltage divided by the current flowing through that contact. And you can calculate it by rho sub C-- which typically has units of ohm centimeters squared or ohm microns squared, so it's got units of resistance times an area-- divided by the area of the contact. So if someone tells you, oh, I've made a contact, it has a certain specific contact resistivity, 10 to the -7 ohm centimeters squared, then you can design your contact area as you need to get the right contact resistance in ohms. And basically what this tells you-- and you notice it-- is that the total resistance of a contact in ohms is inversely proportional to area. So if I make a smaller contact area for a given specific contact resistivity, you're going to have a higher resistance. And this is a problem if I'm scaling devices. If I'm packing more and more devices on the chip, it means I want to make the total area occupied by every device smaller. That means I need to make the contact area smaller. But if I leave the contact resistivity the same-- I don't do anything different about how I make the contact-- then as I'm scaling these devices, of course, the area of the contact is going down to get more of these devices on a chip. That means the resistance is going up as devices are scaled. And we'll see some specific numbers on this. This is a problem as we scale, unless we scale rho sub C, unless we do something to make better contacts as we scale devices, just because of this geometric factor. As I make the area the current goes through smaller, the resistance of that contact goes up. So the second type of contact-- again, this is one that's not generally considered desirable, but we need to understand a little bit of the physics of it-- is a Schottky contact. And the energy band diagram for that is shown here, where the metal is on the left and the semiconductor is on the right. And a Schottky contact is governed by thermionic emission. So it's a thermionic process emitting carriers over this barrier. And we know from the Richardson equation we can write the thermionic emission current density J as being proportional to the temperature squared times some constant A star. But most importantly, it goes exponentially with the barrier height. So it's e to the minus q phi B over kT, where the barrier height is just the band bending. It's this barrier between the Fermi level in the semiconductor and the Fermi level in the metal.
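To put a number on the scaling problem just described, here is a minimal sketch of R equals rho sub C over A as the contact shrinks. The rho sub C value and contact sizes are illustrative assumptions, not numbers from the lecture.

```python
# Contact resistance R = rho_c / A for a square contact of side L,
# holding the specific contact resistivity fixed (illustrative value).
rho_c = 1e-7   # ohm cm^2, assumed specific contact resistivity

for L_um in [1.0, 0.25, 0.05]:     # contact edge length in microns
    A = (L_um * 1e-4) ** 2         # contact area in cm^2
    print(f"L = {L_um:4.2f} um -> R = {rho_c / A:8.1f} ohms")
# 10 ohms at 1 um, 160 ohms at 0.25 um, 4,000 ohms at 0.05 um:
# shrinking the contact without improving rho_c drives R up as 1/A.
```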
So that's going to tend to be a property of the doping in the semiconductor and what kind of metal you put on it. So if we take this current equation-- we know the current-voltage characteristic is exponential in the applied voltage-- and we just use the definition of rho sub C, you can calculate, as shown at the bottom here of slide 7, what the specific contact resistivity should look like for a Schottky barrier. And you see it goes exponentially like the barrier height. So if we change that barrier height, we can change rho sub C. So what we do in practice is we look at that equation and we try to get into a different regime. Instead of being in the regime dominated by thermionic emission over this barrier, if we make the barrier distance really narrow, really small, you know from your quantum mechanics classes that you can actually get quantum mechanical tunneling through the barrier. So here on slide 8, what I've shown is a tunneling contact. It's a Schottky contact in the limit of very, very high doping. You see we still have this barrier, phi B, but the distance, the depletion layer thickness in the semiconductor, is very, very small. And that happens when you make the doping high. So in fact, for a tunneling contact, if you go back to your quantum mechanics, you know that if you have a certain barrier height, phi B, and a certain barrier thickness XD, the tunneling current is exponentially dependent on those two quantities. It looks something like this. So how do I make the depletion layer thickness small in the semiconductor when I put a metal contact to it? Well, going back to some of your basic electrostatics, XD is inversely proportional to the square root of the doping. So all I need to do to make XD small is to pump up the doping in the semiconductor very high. When the doping is high enough, XD will become small. And when XD gets less than about, say, 2 nanometers or so in this equation, you get a very large tunneling current, basically. So what people do in practice is dope very heavily--above, say, mid 10 to the 19th--and you get quantum mechanical tunneling. And that's really what's dominating. So if you put these numbers in here and you look at the definition of the specific contact resistivity, you get an equation like what's shown at the bottom of page 8 here. And you can see, again, it depends exponentially on the barrier height, with the doping entering inversely, through its square root, in the exponent. So for low contact resistivity, to get this number down, I need a small barrier. So you would choose a particular metal that gives you a relatively small barrier height, phi B. And you need, most importantly, very high doping. So you need to pump up the doping as high as possible. And then you will get an I-V characteristic that is essentially ohmic. So here I'm just doing a simple calculation on slide 9. Here's an example using the expression for the specific contact resistivity at very high doping levels, where tunneling dominates. We can write that expression from the prior page as rho C equals rho C zero, some pre-exponential, times the exponential dependence on phi B and N sub D, where the exponential multiplier C1 is given by this number, 7 times 10 to the 10th. So now let's say I assume I have a metal-semiconductor contact where the barrier height is 0.6 electron volts.
And rho C zero, this number, is about 10 to the minus 7 ohm centimeter squared-- that's a little hard to read-- for an aluminum and silicon contact. So the question is, if I change the doping from 1e19 to 1e20, by a factor of 10, how much does rho C go down? Just to give us an example of how much you benefit. And again, this is an exponential relationship, so you expect a big change. So in the first case, I have a doping of 10 to the 19th. You plug in all these numbers--you put the square root of 10 to the 19th down here--and you get a rho C of about 6 times 10 to the -2 ohm centimeter squared. That's a pretty big number, as it turns out, if you plug in a standard size of a contact on a silicon chip. That's at 10 to the 19th. At 10 to the 20th, you do the same sort of thing, and you do the math, and now you get about 6 times 10 to the -6 ohm centimeter squared. So that's about a factor of 9,000 lower. The contact resistivity changed by almost four orders of magnitude just by upping the doping by a factor of 10. So that's a dramatic difference. So this tells you that if you're going to get reasonable contacts in silicon technology--unless you want huge devices where the whole chip is dominated by contacts, which is ridiculous; you won't be able to get enough devices on your chip--you're going to have to get the doping in the source drain regions high, at least 10 to the 20 or higher. This number itself is still considered a little on the high side for a contact resistance. But it's just interesting to look at these numbers. And in fact, I've taken on the next slide figure 11-7 from your text, and you can see that these numbers are in pretty good agreement with what people have measured. Here on slide 10-- I took this from your text-- it shows the contact resistivity in ohm centimeter squared as a function of doping. And you can see the doping up on this scale here for two different metal systems. One is the aluminum contact to N-type silicon, and the other is platinum silicide to N-type silicon. And the data are the symbols people have measured--these open squares, for example, are for aluminum contacting N-type silicon. The theoretical curve is shown here with the solid line, and you can see there's reasonable agreement. Notice this is a log-log scale, so we're getting an exponential dependence in the heavily doped regime. So again, doping is increasing to the left on this x-axis. Here's a doping of 10 to the 19th; here's a doping of 10 to the 20th. And we see a number of orders of magnitude--basically a four orders of magnitude drop as you go from 10 to the 19th to 10 to the 20th. So you say, OK, that's not a big deal. I'll just keep increasing the doping in my source drain regions over time. The problem is, we know that there are electrical solubility limits. We cannot just arbitrarily keep increasing doping. We're trying to find ways to do it. But you know if you do an implant and you do a certain anneal, you get doping up to a certain value, maybe, depending on the dopant, in the low 10 to the 20s--maybe 2 to 4 times 10 to the 20th. Beyond that, it's very hard to go much further. So the problem is that rho C is not really scaling, because of the electrical solubility limit on the doping. The contact resistivity doesn't scale as we shrink technology. And this is a major problem. People are looking for new methods, new materials, whatever--some way of getting the doping up, or a new method of making contact.
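As a cross-check of that slide-9 arithmetic, here's a minimal sketch in Python using the numbers quoted above (rho C zero = 1e-7 ohm centimeter squared, C1 = 7 times 10 to the 10th, barrier height 0.6 eV, doping in atoms per cubic centimeter); the functional form is the tunneling expression from slide 8:

import math

rho_c0 = 1e-7   # ohm.cm^2, pre-exponential for Al on Si (from the lecture)
C1 = 7e10       # exponential multiplier quoted in the lecture
phi_B = 0.6     # eV, barrier height

# rho_c = rho_c0 * exp(C1 * phi_B / sqrt(N_D))
for N_D in (1e19, 1e20):
    rho_c = rho_c0 * math.exp(C1 * phi_B / math.sqrt(N_D))
    print(f"N_D = {N_D:.0e} cm^-3 -> rho_c ~ {rho_c:.1e} ohm.cm^2")

This reproduces the roughly 6 times 10 to the -2 and 6 times 10 to the -6 ohm centimeter squared values: a drop of almost four orders of magnitude for one decade of doping.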
That's kind of a fundamental issue. In fact, here on slide 11, I show that this is really one of the concerns. This is a table that I took, table 71a, from the 2003 International Technology Roadmap for Semiconductors. I've shown this before, but we hadn't gotten to this contact stage. And just to remind you, in the upper right corner, this is what the MOSFET looks like. And these black regions are going to be silicides, either cobalt silicide or nickel silicide. We'll talk more about that. But the point is, they're a contact between a metal and a heavily doped semiconductor, which is silicon. And if you look at some of the requirements here on the ITRS roadmap--here we are, say a year ago, in 2003--look at the maximum contact resistivity. This right here represents what we call rho C. In 2003, they wanted to have about 2 times 10 to the -7 ohm centimeter squared. And you notice it's sort of a light orange color, which means, according to our key, that an interim solution for how to get to this is known. But if you go to 2004, you want to lower it, according to ITRS, to about 1.6 times 10 to the -7. That's all in yellow, which means manufacturable solutions are known; they haven't yet been integrated. And when we get to 2008, in the red, to get down to about 0.8 times 10 to the -7 ohm centimeter squared, there aren't any known solutions that are manufacturable. So we're coming up against the red brick wall here, because the device scaling and the need to make the contact size smaller mean that we need to drop rho C at this rate. And nobody yet knows how to make a contact that has 0.8 times 10 to the -7 ohm centimeter squared for the specific resistivity. So it's the contacts that are really hurting us. You might be concerned about the sheet resistance--well, how about the resistance of the metal itself? The resistance of the electrons flowing through the metal is not a real big problem. When you look at the contact silicide sheet resistance in ohms per square, it needs to be in this range of 6 to 10 ohms per square. That's not a problem. That's all in white, which means people know how to do that. The real problem is making these really good low-resistivity contacts to the silicon. I just want to show an example, so you get a feel for how the numbers work, of how one does a contact resistance calculation. This is for a MOSFET, shown on slide 12 of your handouts. Again, this is a cross-section view of a MOSFET. So on the left, you can imagine this region as being the heavily doped source. In the middle, shown in orange, is the gate. And this is the heavily doped drain. And these black regions are metal--could be silicide, but there's a metal region. And I'm showing the dimension of the contact, the region over which the metal is in contact with the silicon: in this dimension it's 0.2 microns wide. OK, now into the page, or into the board, we're assuming that it's 1 micron. This is a cross-section view of the device; if you were to look down on the device, a top view, this is what you would see. This would be your source contact region. Here's your gate in the orange. And here's the drain. And again, the region over which we have to make the metal contact is assumed to be 0.2 microns wide in this direction and 1 micron in this direction. So the area of this contact, if the current is going to flow through it, is clearly going to be 0.2 times 1 square microns, or 0.2 microns squared.
So that's the area. Going back to the ITRS, assuming we are here roughly in 2002 and the contact resistivity is about 2 times 10 to the -7 ohm centimeter squared-- OK, so I'm assuming this as the specific contact resistivity-- then we can calculate the resistance of just the drain contact itself. The resistance is just that rho C divided by the area of that contact. And you get 100 ohms. So the resistance just for the current to flow through this contact is 100 ohms on the drain side. It's going to be 100 ohms on the source side as well. So that's 200 ohms right there, just to give you a rough idea. And this is equivalent to the ITRS 2003 requirement. So we're talking about 200 ohms of the total resistance of the device just being due to the contacts, because of their size. And that's a reasonably large resistance. So we'll see that the contact resistance actually does dominate. Again, you could say, well, we'll just make the contact wider, which you can do, but that means your chip ends up being larger. You can't put as many devices on the chip. So there's a fundamental trade-off here, unless we can get this rho C number down. And that's exactly why this number is dropping with time. People want it to drop because they want to be able to continue to scale the area of the device. It's a problem, though. On slide 13, I'm showing you a structure. If you are studying contacts or if you ever make devices, you always want to know, what is my contact resistance? And this is a classic structure to actually measure contact resistance. It's called a Kelvin structure. I took this particular picture out of your textbook, figure 11-35. There's a reference in your textbook to an article that talks in much more detail about how it's actually made. The left-hand side shows the mask layout used to make one of these structures. These funny-shaped dark regions here are the N plus region that you want to contact. So that would be called the N plus diffusion; it has this particular shape. The dashed lines represent the metal. So that would be the metal level. So this is going to take at least three masks to pattern. And these little square regions with the X's going through them are the contacts. So that's the region where one layer contacts the layer above it or below it. So this is sort of a planar view. If you want more of a bird's-eye view of how it actually looks, you can look on the right-hand side. These L-shaped brackets are made of metal. So there's one here and one here. And basically you perform a four-point kind of measurement. This little region here in the center that is square, with a dimension of L in one dimension by L in the other-- so this is an L by L square-- the current comes through on this leg. So the current flows through the diffusion, then it flows up through the square contact, and then it flows out through probe number 2 on the metal. So you put an ammeter between probes 2 and 3, and you measure the current that's flowing through that square contact. And then you put a voltmeter between probes 1 and 4, and you measure the voltage drop across that face. And then you just divide V by I--the measured voltage between 1 and 4, divided by the current flowing between 2 and 3. And that is your resistance. And that's going to be equivalent to rho C divided by the area of the contact, L squared.
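Since the Kelvin extraction is just those two divisions, here's a minimal sketch; the probe readings and contact size are made-up placeholders (the lecture's own worked example with real numbers follows next):

V_14 = 100e-6                 # volts, assumed voltmeter reading (probes 1-4)
I_23 = 5e-6                   # amps, assumed forced current (probes 2-3)
L_um = 2.0                    # assumed contact opening, L x L microns

R = V_14 / I_23               # contact resistance in ohms
rho_c = R * (L_um**2 * 1e-8)  # ohm.cm^2; 1 um^2 = 1e-8 cm^2
print(R, rho_c)               # -> 20.0 ohms, 8.0e-07 ohm.cm^2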
So this is a very common way to do it. You might say, well, why do you go to the effort of having separate probes? Why don't you just measure the current and the voltage on the same probe points? And the reason you do this is because you don't want to include the extra contact resistance of your probes going down--there's always a voltage drop there. So the probes through which you force the current are separate from the probes through which you measure the voltage, so that you're only measuring, really, the voltage drop across the contact face, the square face. So it's a very common structure, the Kelvin structure. You'll find it on a lot of test masks and test circuits that people use to measure the contact resistivity. Here's an example, shown on slide 14, using a cross-bridge Kelvin structure. You have a 1 micron by 1 micron opening. You find that you get a current of 10 microamps through the contact--so that's I23--when you measure a voltage drop of about 320 microvolts. What is the specific contact resistivity? You just divide the voltage drop by the current, and that gives you a resistance number; in fact, that's 32 ohms. And by definition, that's equal to rho C divided by the area of the contact. So you can solve for rho C here just by multiplying 32 ohms by the area of the contact. And you get 3.2 times 10 to the -7 ohm centimeter squared. It's pretty close to the ITRS requirements--still a little bit high, about a factor of 2 too high. But it just gives you an idea. Typically you use several different dimensions. So you use, say, a 1 by 1, a 5 by 5, and a 10 by 10 square. And you make sure you get the same specific contact resistivity number over all of those, because sometimes you can have current crowding effects. When I drew it in this picture here on page 13, I said the current is uniformly going through that face. So you can imagine a flow of water uniformly flowing with the same flux all across the face. That may or may not be true, depending on the series resistance of this N plus diffusion and things like that. So you want to use different areas. And for different size areas, you should get the same number. If you don't, then you probably have current crowding effects, and you need a more sophisticated two-dimensional model. So that's a typical way that people would measure their contact resistance. So that sort of introduces the whole idea of scaling and why we want to do this. I want to bring up some other requirements. We said it's important to make a low-resistance contact. But there are other requirements besides that. Clearly we need to have a high doping concentration at the interface to allow the electrical tunneling to take place. The interface also has to be free of contaminants, as I mentioned. You want to get rid of the native oxide, so you preferably use a metal that will eat away any little native oxide that is there. And you clearly don't want residues. You don't want a lot of excess carbon or other things at the interface, because that's going to cause an increase in your contact resistance. You don't want nitride or oxide there. That's the first thing, to get low contact resistance. The second thing you want is good thermal stability. You don't want the contact structure to change or degrade in some way during the process. After all, there will be subsequent thermal processing. Once you make those contacts, you still have to make all that multi-level metal.
And those multi-level metal schemes involve annealing steps or deposition steps that could be in the range of 400 to 500 degrees. So you don't want anything weird happening down in your contact when you heat the thing. In particular, you're also very concerned about junction leakage. Remember, this metal is making electrical contact to a PN junction--this is an N plus P junction. So ultimately you want this N plus P junction to have good junction characteristics and low leakage in the silicon underneath the contact. So here on slide 15, I'm showing an example of a figure I took out of a text where something very bad has happened, which is called spiking of the metal. And you can see what's happened. This is the metal--aluminum is shown here--and this is my N plus drain or source. And this chip has been heated up, and the aluminum has actually spiked through the junction. It's actually touching the P-type silicon now. So it's essentially shorted out this PN junction. It's created a lot of voids. And this is a big problem. And that's because aluminum has a finite solubility for silicon. The silicon actually gets sucked into the aluminum and causes this spiking effect. And this is one reason why--if you have a very deep junction that's many microns deep, well, it would never spike because it never gets that deep, but now that junctions are very shallow, 0.1 micron or less--people never put aluminum directly in contact with silicon. In the old days, this is the way people made contacts. Today, if you do this with any kind of reasonably shallow junction, when you go to heat the thing up to do the final forming gas anneal, you're going to spike the junction and destroy the device. Here's an example on slide 16 of what happens in some extreme cases of junction spiking. What I showed you on the prior slide was a cross-section view--you can see the way the aluminum has spiked into the silicon. Here, if you look in plan view at a tilt angle of about 45 degrees, this is a contact region, and around it is oxide. These are 5 by 5 micron holes, so they're fairly large. These were annealed by rapid thermal processing, and then the aluminum layer was removed just to see what happened. In fact, this was annealed for 10 minutes at 425--aluminum directly in contact with silicon, 10 minutes at 425. You can see it's created all these voids, because there's a solubility of silicon in aluminum. Silicon from the substrate has actually gone up into the aluminum, and it leaves behind voids, which end up being filled by the overlying aluminum. And when you etch the aluminum off, you see all these holes. And if you do RTP at a lower temperature--here, 350 for ten seconds--you don't see nearly as many spikes, nearly as many holes. But 350 for ten seconds is really too low a thermal budget for any backend processing. So this is why you never put aluminum directly in contact with silicon, unless you're making a really deep junction, like a micron-deep junction, in some kind of device. But in MOSFETs today, the typical source drain junctions are 0.1 microns. So this is sort of a classic case of junction spiking. There are a couple of different solutions people have come up with over the years. One of them is shown here on slide 17, although it's not perfect. People had the idea of, well, don't use pure aluminum. Use an alloy of aluminum with 1% to 2% silicon.
Since we said that the silicon is soluble in the aluminum, you put silicon in the metal itself. So when you sputter, you don't sputter pure aluminum; you sputter an alloy of aluminum and silicon. Then you think, well, OK, it won't suck up any silicon from the substrate because it's already in equilibrium. It's already got all the silicon it needs, the aluminum eutectic. So it's not a problem. But there's still a bit of an issue: the silicon itself can actually precipitate out of the aluminum. Here's an example--it's been heated, and then they removed the aluminum film. And you get these little silicon precipitates, which can increase the specific contact resistance, especially on N-type silicon. So this aluminum 1% silicon contact to silicon was, after pure aluminum, the solution people used, maybe in the 1980s or so. But again, it's still not something that people typically do today, because you're going to get a higher contact resistance, and you can still get spiking. It's not a perfect solution. But that was one solution people did use at one point. The way people go today is to form what they call a barrier layer. And a barrier layer does exactly what the name says. It forms a barrier between this aluminum and the silicon. The barrier has a very low specific contact resistivity, so it still makes a good contact, but it doesn't allow the aluminum or the silicon to talk to each other. So you cannot get this void formation. You can't get the spiking. Typical barrier layers might be a thin layer of titanium--say 1,000 angstroms of titanium is often used--or sputtered Ti-tungsten, or maybe Ti-silicide. So in between the heavily doped silicon and the aluminum, you now have an interlayer of this material that forms a barrier. This prevents the chemical interdiffusion between the silicon and the aluminum. It generally has low stress. You pick a material that has good adhesion so it doesn't peel off, and of course good electrical conductivity and low contact resistance to the silicon and the aluminum. So that's to satisfy all of these things. And these materials, titanium and tungsten, have reasonable contact resistivity. They are still not low enough to meet a lot of the ITRS requirements in the future, but today they're good enough. Slide 19 I took from a different textbook. This is a figure from Mayer and Lau--I had mentioned the Mayer and Lau textbook at the very beginning of this course, so you have a reference to it. And this is just an example from that book of how titanium may work as a sacrificial barrier. It may or may not work exactly this way, but it's one potential methodology. You can imagine having a thin layer of titanium with aluminum on top. When you heat the structure, you may get formation of TiAl3 for some time over a portion of the layer. Then you heat it a little bit longer and all the titanium is consumed. And at this point, in D, you can start to get the aluminum incursion. So the idea is you put down enough titanium that you're not going to end up with a spiking problem. And typically, if you want to do a typical forming gas anneal, 450 or so, 1,000 angstroms of titanium seems to be an adequate barrier layer. But again, it depends on your backend thermal budget. Slide 20 is just another example saying that depending on the barrier layer you choose, you have to be careful. A lot of these barrier layer materials, like Ti-tungsten or Ti-nitride, are polycrystalline, basically.
And what can happen if you're not careful is the aluminum can actually diffuse along the grain boundaries and still make its way to the silicon and end up causing problems. So when people deposit these materials, they sometimes sputter them in an ambient that has some impurity--nitrogen is one of the most common. Sometimes people actually sputter Ti-tungsten and then expose it to air for a period, a few minutes or half an hour, and then put the aluminum on top. The idea is that these nitrogen or oxygen impurities end up at very high concentration in the grain boundaries. This is called a stuffed barrier. By stuffing the grain boundaries, it helps prevent the aluminum from diffusing down in and getting to the silicon. So again, we're going to do a heat treatment. You're going to put aluminum on top here. You need to try to prevent the incursion of the aluminum. So how you do the sputtering of the barrier layer is actually very important. What ambient you use will determine whether it's a good barrier layer or a poor barrier layer, and how long it's going to stand up. A lot of it's quasi-empirical, just testing of what works. On page 21, I'm showing a classic process that was developed a number of years ago called the salicide process. And this is something you need to be familiar with. The 'sal' in salicide comes from self-aligned silicide. And you'll see when we go through it what we mean by self-aligned. So we start here with our naked device, ready to be contacted. We don't have any contacts. We just have N plus source and drain, and you have a heavily doped polysilicon layer. We then form our oxide spacers. You know how to form sidewall spacers now: you deposit a conformal layer of SiO2, say low temperature oxide, and then you etch it back anisotropically. So you end up with these little stringers on the edges, which we call sidewalls. OK, those are critical. They're going to form sort of a blocking region, which will be critical in the salicide process. We then deposit metal, M, over everything. So you can do some kind of PVD deposition. And here's your titanium layer that goes everywhere, obviously across the whole wafer. And then you do a magic anneal at the right temperature. And this anneal is such that wherever the metal, the titanium, is contacting the silicon, you form a metal silicide, say TiSi2. So it only forms where it's in contact with silicon, which is right here, this dark region, and on top of the gate. So on the source, gate, and drain, where it's in contact with silicon--remember, the polysilicon gate will also react--you have titanium disilicide. Everywhere else you still have some unreacted metal sitting there. Now, a key to the process is that you dip it in some solution--sometimes it's HF, sometimes it's sulfuric--that removes unreacted metal but does not etch the silicide itself. So we've created a material, a titanium disilicide or a metal disilicide, that by virtue of its chemical structure stands up to the etch that removes the unreacted metal. So the reason it's called self-aligned: you notice I did not have to do any photolithography to pattern this metal. The metal was deposited over the entire chip, it was reacted with the silicon, and then it was just etched off in a blanket etch. So there's no photoresist step here to pattern this metal.
That's why it's self-aligned: wherever there's exposed silicon, you will end up with a metal contact aligned to that exposed silicon, without having to do another lithography step. So this was a really great invention. You save yourself an alignment tolerance, so you get very good alignment. You form a low resistance contact to the source, drain, and gate simultaneously. So this kind of salicide process was a big breakthrough in CMOS technology, say around the 80s or so. Slide 22 I just took from a different text. It has a slightly different type of drawing--maybe it's easier to understand. They have a little different notation, but it's the exact same process, the basic salicide process. You have an N plus source and drain. You form your spacers. You react the metal. Hopefully it does not creep up the sidewall, which is a problem. You then selectively remove the unreacted metal, put your dielectric down everywhere else, and put in your metal contacts. So that's a basic metal contact scheme that people use today. Which materials are commonly used? Well, some of them are shown here on slide 23. The first one that was used historically was Ti-silicide. Ti-silicide was very good because it has a low resistivity phase. And we're talking about maybe 15 micro ohm centimeter for a particular phase that's formed with a particular type of annealing. There's a little difficulty, though. As people make the polysilicon gate length shorter, it's harder and harder to get this particular phase to form. This is called the narrow line effect or the narrow width effect--usually the narrow line effect. Ti-silicide also has a tendency to agglomerate when it's very thin at higher annealing temperatures. So it's not that thermally stable. So for a number of reasons, particularly the narrow line effect, industry moved a number of years ago primarily from Ti-silicide, although some people may still use it, to cobalt disilicide. It does not have that narrow line effect, but it gives you a slightly higher resistivity. It's a little more sensitive to surface contaminants. The beauty of Ti is, again, the titanium eats its way through things, through oxides--it sort of reduces oxides. With cobalt, you have to have a really clean interface. Cobalt has a little less lateral encroachment over the oxide spacer, and I'll show pictures of that. So cobalt was used and is still used in some processes. The latest silicide that people are exploring in research and development--and at some point it will probably be in production--is nickel silicide. Nickel silicide has the lowest silicon consumption. To form a silicide, you need to react with and consume some of the silicon, and you don't want to consume very much of that shallow junction. So nickel is good because of that. It can be formed at very low temperatures. And it doesn't have a very bad narrow line effect for the silicide on the gate. The big problem with nickel is you have to be careful of your thermal budget. And watch out for nickel: nickel is a very fast diffuser in silicon, remember. Nickel is also a deep level. So nickel contamination of equipment and of the wafers is an issue, and people need to deal with this. But nickel silicide is becoming more and more prevalent in research and development. I mentioned there was a problem with this encroachment over the spacer. And this is illustrated on slide 24.
On the left-hand side, I'm showing the case of cobalt disilicide formation, where it's the metal that diffuses. So you can imagine that you have this metal that's deposited everywhere, and you're going to form a reaction between the metal and the silicon. The question is, how does the reaction occur? Does the metal diffuse in and meet the silicon down at this interface, or vice versa? Well, it depends on the particular type of silicide. In the case of cobalt disilicide, the metal diffuses through the silicide to this interface here and then reacts. So you get silicide formation at the bottom here, and you're less likely to have creep-up. In the case of titanium disilicide, in fact, what happens is the silicon is the fast diffuser through the Ti-silicide. It diffuses up and meets the metal. So if you react it long enough and you have a short enough spacer, you can see the disilicide can creep and grow--up from the source and drain, down from the gate. And eventually you may actually bridge. And then you have a big problem, because then your gate is electrically shorted to your source and drain, and your device is dead. So titanium disilicide has a little more tendency to do this than cobalt disilicide. This is part of the reason people went to cobalt, despite the fact that it has a slightly higher resistivity. Slide 25 is just an illustration of some of the kinetics of what's happening when you have titanium on silicon that's reacting. Here we have a certain thickness of Ti-silicide that's already formed. People do model this--there are silicide models in SUPREM-IV, not necessarily perfectly accurate. What people model is the diffusion of the silicon from the bulk through the titanium disilicide up to this top interface, where it forms a new layer of titanium disilicide. So this is what might happen in an inert ambient. Now, if you do the anneal in a nitriding ambient, an ambient that has nitrogen, then at the same time you're forming Ti-silicide at this interface, you can be reacting--titanium is fairly reactive--to form Ti-nitride up here at the top interface. So in a nitrogen ambient, you can get simultaneous formation of Ti-silicide down here and Ti-nitride on top of the titanium. Titanium is very reactive, so it's not unusual to try to form a Ti-nitride to prevent it from oxidizing. Slide 26 shows you some of the different uses of silicides in silicon technology. And I've actually updated it--I added a fourth one. So silicides are used, as you can see, to strap the poly. And the reason you do that is you're trying to reduce the resistance of the gate. So you form a little silicide during the salicide process. Strap the junctions--well, what does that mean? Again, you're going to reduce the sheet resistance of this junction by forming a thin layer. It also forms a barrier layer, as we mentioned. It can be used as a local interconnect. Here the silicide has actually been formed above this oxide, so some silicon must have been deposited prior to that. And finally, it can be used as a gate material. At the end, I've added a few slides that show you that people are actually using silicides as metal gates. This is a table here, shown on slide 26, of some of the common silicides and some of their important properties. So here you notice for Ti-silicide there are two phases. There is a phase called the C49 phase that forms at low temperatures.
So if you anneal the titanium and react it with silicon, say between 500 and 700, around 600, you get a resistivity of about 60 to 70 micro ohm centimeter. That's too high a film resistivity for most applications. So people typically do a low temperature anneal to form the C49 phase, then etch off all the unreacted titanium, then put it back in the RTA and pop it up to 800 or 900 to do a high temperature anneal to form the C54 phase, which has a much lower sheet resistance. The nice thing about Ti-silicide is it's reasonably stable, maybe up to about 800, 900 degrees, something like that. It does consume a fair amount of silicon--it consumes 2.2 or 2.3 or so nanometers of silicon for every nanometer of metal that's reacted. Depending on how thick a silicide you're trying to form, you may eat into your junction. If you have a very shallow junction, you have to watch out--you don't want to eat in too far. Cobalt disilicide is another example here. You can see it's got a somewhat higher resistivity. This is the temperature at which it's formed. It consumes a little bit less than the case of titanium. The modern silicide that I mentioned here is nickel silicide, NiSi. It's formed at very low temperatures, 450 or 500. And it has a low consumption, only 1.8 nanometers of silicon consumed per nanometer of metal. One thing about this, though: if you look at the stability temperature--look at this column here called stable on silicon--be very careful. People have listed nickel silicide as being stable up to about 650. It may not even be stable that high--maybe 600 or so. So unlike some of these others, where you can heat up Ti-silicide or platinum silicide to 800 or so, you cannot do that with nickel silicide. If you're using a nickel silicide process, you're going to be limited to a lower backend temperature. But you can get reasonably low sheet resistance. On slide 27, I just want to show you an example. We'll go through these calculations at the end of the lecture--I want to go on and do the gate material work first. But what this is is a cross section of a MOSFET. And we've seen this cross-section a number of times now in our class. We now have enough ammunition that we can actually sit down and calculate all these resistances, or estimate them, given the geometry of a particular device. When we scale the channel length, what we're really trying to scale is the channel resistance, R chan. OK, that's fine. We make the channel shorter, we can reduce that resistance. But the problem is, if we don't do something about all these other parasitic resistances, the net resistance of the device is really not going to go down by very much. So there are three resistances we really want to think about. One is this little resistor right here, which is the resistance associated with the source drain extension. It has a certain sheet resistance. It's doped to a certain level. It has a certain junction depth. It's usually very shallow. So this resistance of the source drain extension is one resistance we have to calculate. The second one is the resistance of this region here, R, which generally has been silicided. You notice the source drain extension is under the spacer by definition, so it's not silicided. So there the current has to flow strictly through the silicon, which has a higher resistivity. In this case, this region is blue. It's been silicided.
So it's going to have a lower resistivity--it's got a metal on top of it. So that's the second resistor we need to calculate. The third one is the resistance of the contact itself: the contact resistance for the current flowing through the silicon up into the metal. And that we can calculate if we know the area and the specific contact resistivity. So all three of these--the source drain extension resistance, the silicide resistance, and the contact resistance--are going to add up and give us different contributions depending on the geometry. So I want to go through a calculation of this. I'm going to hold off for now because I want to make sure we cover the novel gate material aspects, but we'll come back to this. And you should be able to sit down at this point and calculate all three of these things and see what they look like for modern technology. Before we go on--by the way, I sort of didn't tell you the whole truth. I said there are three resistors: this one, this one, and this one. It's not quite that simple. These back-of-the-envelope calculations are terrific because you can do them in five minutes in class or in 10 minutes in your office or whatever. But there is another resistance called the spreading resistance. It cannot be calculated by hand, but it's pictured on page 28 here, just to give you an idea. This was first discussed by Ng and Lynch back in the late 80s. What it is: you have this channel region that's very, very thin. Typically the channel, where the electrons or the holes are traversing across the gate length, is maybe only 30 angstroms thick. So that's where the current is all flowing in the channel--it's a very, very high density of carriers. The current then spreads out as it goes into the source drain extension. This is such an old paper, it's before they had source drain extensions, but you can imagine it spreads out here into this region over a certain distance. And how far it spreads out and exactly how it spreads out depend on the doping and the geometry of the structure. It's a very two-dimensional problem. You can't calculate it by hand. So that's what's called R-SP here in this little diagram, where they wrote down the different resistances. So the spreading resistance is one that generally has to be computed either by a two-dimensional simulator, or you have to get it out of test devices or some kind of test structures. It's not something you can simply calculate by hand. But it can be a major contribution to the total series resistance. So it's something we need to be concerned about. But I just want to point out: those three resistances we can calculate by hand give you the minimum series resistance of the device. There will always be a little extra, which is the spreading resistance, which you have to simulate in a two-dimensional simulator. That's how we make contacts. I just wanted to go on and spend most of the remaining time of the lecture talking about something else that's come up in the last three or four years, which is related directly to the contacts. It turns out we can use silicides not only for the contacts; we can also fully silicide the gate and use it as the gate. And I think I showed you this slide last time, page 29--it was taken directly from the last lecture. I just wanted to remind you that there are some new gate materials that are coming along.
This is the more classical, older, traditional technology. This is a photo from an Intel device. It's a somewhat older photo now--it was published about five years ago, in 2000. And what it shows is exactly what we've just been talking about. Here's the polysilicon gate. These are the sidewall spacers--you know how to form them. And this is the silicide in the source--you can see it looks very dark because it's been silicided--and the silicide in the drain, and the silicide that's been formed on the gate. So that was made by the salicide process, exactly the process we just talked about. That's the classical type of structure. We talked last time, though, about how polysilicon can only be doped so high. Maybe you can get it to 10 to the 20 or mid 10 to the 20. But that's not enough carriers to prevent depletion of the polysilicon at very high gate biases, and you get a polysilicon depletion effect. So people are concerned. They want to get rid of poly as the gate material. It doesn't have enough carriers--polysilicon may only have maybe 5 times 10 to the 20 electrons per cubic centimeter. They would like to replace the poly with a metal gate. A metal has 10 to the 23rd, three orders of magnitude more carriers, so it's not going to get depleted. So they want to remove the semiconductor from the gate. But poly is an extremely easy material to integrate. People know how to etch it. It's not reactive with SiO2. That's not the case for metals. So here's an example. Last time, in fact, I went through a process called the replacement gate process, where they actually used poly and dug it out at the very end of the process. They etched out the poly and replaced it with a high-K material like hafnium dioxide and a Ti-nitride gate. So this was an example of something published about six months ago. So people are thinking of replacing poly, but it involves a lot of complex process integration. An easier process integration that people are considering is, rather than digging the poly out--whenever you etch out the poly, you expose the very sensitive oxide-silicon interface, so from an integration point of view, digging out the poly is not that desirable--they said, well, why don't we just take the polysilicon and fully react it with a metal, and convert it from polysilicon into a silicide? In the salicide case, they only put down a very thin amount of metal, so it only reacts to form maybe several hundred angstroms of silicide. Here people are saying, all right, we'll put enough metal on top--make it really thick metal--and react it at a high enough temperature for a long enough time that the metal reaction takes place throughout the entire poly. And you get a silicide all the way down to the gate interface. And here's an example of a nickel silicide, fully silicided gate that was published about six months ago, formed by annealing at 450 degrees and using a thick enough layer of nickel. So not only are silicides being used in the source and drain, they're also being used for the full gate. Now, this is different from the salicide process. One reason is the gate's usually reasonably tall--the gate may be 1,000 angstroms tall, something like that. So you need to put enough metal down that you can silicide all the way through 1,000 angstroms.
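To put rough numbers on that thickness budget, here's a short sketch using the 1.8-to-1 silicon consumption ratio for NiSi quoted in the silicide table earlier; the metal and junction thicknesses below are assumptions for illustration:

si_per_ni = 1.8          # nm of Si consumed per nm of Ni reacted (NiSi)
gate_poly_nm = 100.0     # ~1,000 angstrom poly gate, assumed
print(gate_poly_nm / si_per_ni)   # -> ~56 nm of Ni to fully silicide the gate

junction_nm = 100.0      # ~1,000 angstrom source/drain junction, assumed
thin_ni_nm = 10.0        # a thin source/drain silicidation, assumed
print(thin_ni_nm * si_per_ni)     # -> ~18 nm of the junction consumed

So the nickel thickness that fully converts the gate would eat most of the way through a junction of the same depth, which is exactly why the two silicidations have to be done separately.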
Now, on the source and drain, you don't want to be siliciding down 1,000 angstroms, because I said the junction depth is about 1,000 angstroms. So if you're going to do a FUSI process for the gate, you actually need two different siliciding steps. You need a step where you silicide the source and drain with a very thin amount of metal. You need to cover and protect them, and then open up the gate and use a thick amount of metal to try to fully silicide the gate. So it's still tricky. It's not as simple as the old fashioned salicide process, where the thickness you silicide on the gate is the same as the thickness you form in the source and drain. It's different from that. But it's not quite as complicated as completely replacing the gate in the etch-out process. In fact, I'm showing here on slide 30 some fairly recent results from the last couple of years on how people have made nickel silicide FUSI--FUSI is the acronym for fully silicided gate. This particular article came out two years ago from AMD. And what they're showing here is a method to form this. So they do the standard transistor fabrication flow here. This is their poly gate. And here they've actually formed a little bit of silicide in the source and drain and a little bit on top of the gate--they did the standard salicide process at this point. Now they have to do a planarizing step. To get from here to here, what they had to do is put down contact material, and then a dielectric over the entire thing, and then CMP it down so it's planarized--because, you know, they're going to get deposition everywhere. So they planarized it, and in the planarizing process, they expose the polysilicon gate. They can then put down metal here--a thick layer of metal--and react it all the way to the interface. So here they have this very thick region where the entire gate has been silicided. And you notice during this last process here on the right, there's no siliciding happening down in the source and drain, because that's all been protected during this process. So it's certainly easier than the replacement gate scheme I showed last time. And it also avoids the PVD damage to the gate oxide. If I'm going to etch this gate out and I have to deposit a metal by PVD, those PVD processes can be very energetic--you often use some kind of sputtering where you have ions, and you can cause damage to the gate oxide. This doesn't have that at all, because when you deposit the nickel, the nickel is going on up here, and then it diffuses in a siliciding process to get down to the bottom. Here's an example of an electron micrograph of one of those devices. This gate has been fully silicided, so this is all nickel silicide. This very thin white layer is the gate oxide, very thin. And this layer here is the silicon channel. This is an SOI device--so this is the silicon channel, and this is the buried insulator, the buried oxide. And these are the sidewall spacer layers. This is just another image here on slide 31, another image of that gate. They've made a very short channel device, only something between 35 and 40 nanometers. It's an ultra-thin body device, or a reasonably thin body device--the SOI is only 25 nanometers thick. And they have oxide and nitride spacers on either side.
The point they were trying to make with this was that the nickel did not diffuse into the silicon. Of course, it's a little hard to tell from this. The nice thing is, if you zoom in at this point right here, on the gate oxide--this is the gate dielectric, shown here by this amorphous material, about 2.1 nanometers thick. That's their gate dielectric. This is the metal gate, now nickel silicide. It went down and stopped reacting at the gate dielectric. And this is the silicon channel region. It's reasonably smooth. People were concerned that the nickel might diffuse in and react with the oxide, but at this particular temperature and time, they get a reasonably smooth interface. Remember, you care about that because it's going to be the channel. The carriers are going to be flowing right in the silicon underneath this, so you don't want to get any nickel into that channel. On slide 32, from that same paper, they also did some Auger analysis. Remember, we talked about Auger spectroscopy as a means of measuring the composition going down through a device. So this is a vertical scan through the device. What they're actually doing in the Auger experiment is sputtering right down through here, through the center of the gate, and looking at what Auger electrons they see as they sputter. So here we see atomic concentration in percent as a function of depth, essentially. This is the surface over here. And you can see that the red line is the nickel, the blue is the silicon. This is the thickness of the gate, from the surface here to this point here, at the gate oxide. And what you see is there's quite a bit of silicidation--this is mostly nickel silicide. It looks like here you have the silicon in the SiO2, and then this is the oxide beneath. So you just get some idea. This looks like it's fairly stoichiometric, almost 1 to 1 silicon to nickel. Maybe a little bit nickel-rich. Again, they're using the types of techniques we talked about in our characterization lectures. This is a different paper--that last paper was from IEDM 2002, published by AMD. IBM the same year also published, at the same conference, nickel silicide FUSI gates. This time they did them not only on fully depleted SOI--I just showed you a silicon-on-insulator device--they also did it on a device called a FinFET. We haven't had any time, really, to go into these new types of devices, but there are silicon MOSFETs these days that are not being made in a planar structure. There's a very non-planar device called the FinFET, where in fact people form a fin. And the channel of the device is actually into the page--into the board. The electrons actually flow along the sidewalls, right along here. And you have a gate on the right side and a gate on the left side--in fact, the gate kind of wraps around. So this whole thing ends up being a channel, and the source and drain are into the board. This is a device that has some advantages, although it's a little tricky to make. It's a double gate device--you end up getting two channels in this thin silicon film. So they made a FinFET and demonstrated they could make a nickel silicide gate going all the way around that fin.
In fact, this is the gate dielectric shown here, this amorphous-looking region, with nickel silicide on the outside. Again, they used a FUSI process where they put poly everywhere, put down the appropriate amount of nickel, and then reacted it. And the reaction stopped right when it hit the gate dielectric. An interesting thing that was also done in this paper--relevant if you end up wanting to do research in this area--is shown here on slide 34. This is something called gate work function engineering. We hadn't really talked about it, though I think we may have talked about threshold voltage control. The work function, which is a property of the gate material to a certain extent, together with the silicon, determines the threshold voltage of the transistor--the voltage at which the transistor turns on. So that's a very important property. People have been using N plus poly and P plus poly on N-FETs and P-FETs for years because they have the right work functions. The problem with poly, as we've mentioned, is it doesn't have enough carriers; it tends to deplete. So people want to use metals. The problem with metals is, what work function do they have? Well, they have the work function of the metal, which is a property of the material. And there are only so many metals in the world; there are only so many silicides in the world. But what IBM has shown in this particular paper is that they can slightly adjust the effective work function of that gate stack--the effective work function between this metal gate and the silicon. They said they can adjust it to a certain extent by how much they dope the polysilicon prior to the silicide reaction. And this is just a table that I took right out of that paper. For the case of nickel silicide, what they found is, depending on exactly how much doping they put into the gate to begin with--say they don't dope it, or they put in 1e20, 2e20, or 4e20--the effective work function that they measure with respect to the silicon conduction band is shown here. It can be varied maybe by about 100 millivolts, maybe a little more. I guess the undoped nickel silicide was the zero reference, so it's a little more--maybe a couple hundred millivolts it can be varied, by whether you put doping in or not and by how much doping. And this is important because you need this flexibility in being able to adjust your threshold voltage. So FUSI gates are interesting not just because the processing is a little easier to integrate--the frontend processing is what we're all familiar with; everyone knows how to etch a poly gate and make a good poly gate, and you can convert it to silicide--but also because, depending on how you dope it, you may be able to control or adjust the VT a little bit. It's still a very tricky process. The reaction temperature, or the thermal stability temperature, is reasonably low, so you cannot take these into a backend process that is too hot. So they're very much a research topic. It was only published by IBM a couple of years ago. It's certainly not necessarily ready for manufacturing right now--perhaps in the near future--but it gives you an idea of the types of things people are concerned about with siliciding. Before I go through the summary, let's take a few minutes, because we have a couple of minutes right now. I just want to go through with you very briefly, backing up a little bit here to slide number 27.
You can sit down and do this back-of-the-envelope calculation yourself over the next couple of days. It's not a homework assignment, but I think it's something you should do--how to calculate these three resistances. And then we can talk about it and go through it next time. You're given everything you need to calculate the contact resistance: you're given the specific contact resistivity rho C, and you're given the area of the contact. So you know exactly how to calculate that--you just do a simple division, and you can calculate this resistor, the contact resistance. You're given the resistivity of the silicide, this blue region--15 times 10 to the -6 ohm centimeter--and you know its thickness. So resistivity divided by thickness, you can get sheet resistance. And then you can figure out how many squares the current has to flow through. So you should be able to calculate the resistance of the silicided regions. And then for the little resistor here of the source drain extension, we are given its resistivity and, again, its thickness. So we should be able to calculate a sheet resistance, and from that--you get the ohms per square and the number of squares--you get a resistor. So one, two, and three resistors, corresponding to the contact resistance, the resistance of the current flowing through the silicide sheet, and the resistance of the source drain extension. You'll get three numbers, and you'll get an idea of the order of magnitude of those numbers and how they compare. So if you have time between now and next lecture, do that. We'll go through it at the beginning of the last lecture next time. It's just an interesting comparison between those three numbers. OK, so let me finish up by summarizing this topic. What we talked about today: we need good device contacts. They have certain requirements--a low specific contact resistivity, rho C, and very good thermal stability, so when you heat them up, they don't spike or do anything unusual. In general, you want to get a high doping concentration in the silicon right at the interface. That will tend to lower the contact resistance by inducing quantum mechanical tunneling. However, there is a fundamental limit on how much doping we can activate in silicon. This puts a limit on the contact resistance, on the sheet resistance, and on the parasitic resistance. In fact, the contact resistance is really a big problem. If you look in the ITRS, there's a lot of concern about how to lower it. The so-called self-aligned silicide process, abbreviated salicide, has been the mainstay of technology for many, many years now. It has a lot of advantages. It's self-aligned. It reduces the sheet resistance of the deep source drain region. It reduces the gate sheet resistance, because you put silicide on the gate. And it also provides a local interconnect layer. So it's been a very good workhorse. Silicides can also function as a barrier layer to prevent spiking, potentially. Silicides do consume silicon--that's one problem with them. And as you move the metal closer to the junction depletion region, it's a big problem because you tend to get more leakage. In fact, this is why people use deep source drain regions: to allow a thick enough region in between the depletion region and the point where the silicide is formed, so you can get a reasonable silicide with low junction leakage.
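If you want to check your numbers for that slide-27 exercise, here is one way to set up the arithmetic. The geometry and the source drain extension resistivity below are assumptions for illustration (the actual values are on the slide); only the 15 times 10 to the -6 ohm centimeter silicide resistivity is taken from the lecture:

rho_c   = 1e-7       # ohm.cm^2, assumed specific contact resistivity
A_cont  = 0.2e-8     # cm^2, assumed 0.2 um x 1 um contact

rho_sil = 15e-6      # ohm.cm, silicide resistivity (from the lecture)
t_sil   = 30e-7      # cm, assumed 30 nm silicide thickness
sq_sil  = 2.0        # assumed number of squares in the silicided region

rho_sde = 1e-3       # ohm.cm, assumed source drain extension resistivity
t_sde   = 30e-7      # cm, assumed 30 nm extension depth
sq_sde  = 0.5        # assumed squares under the spacer

R_contact = rho_c / A_cont                 # ohms
R_sil     = (rho_sil / t_sil) * sq_sil     # sheet resistance times squares
R_sde     = (rho_sde / t_sde) * sq_sde
print(R_contact, R_sil, R_sde)             # -> 50.0, 10.0, ~167 ohms

With numbers in this range, the contact and the extension dominate and the silicided region is almost negligible, which is the kind of comparison the exercise is after.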
Because junctions are scaling to be thinner and thinner, especially in fully depleted SOI, people are moving towards nickel silicide because it consumes a lot less silicon per thickness of silicide than, say, titanium silicide or cobalt silicide. So for fully depleted SOI, nickel silicide has a lot of advantages. And the last thing, which isn't in the summary-- we just talked about it-- is that silicides are even being considered for fully siliciding the gate, the FUSI process. And that's something that's in research right now that you may see in production over the next three or four years or so. So that's all I have for this lecture. Take a look at that simple calculation example by hand. And then next time we'll meet for the final lecture. And again, I announced it at the beginning-- the order of the people who are going to be speaking is also posted on the website. But if you have any questions about your oral report or whatever, please get back to me. OK, thanks. |
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 15_Transient_Enhanced_Diffusion_TED_Simulation_Examples_TED_Calculations_RSCE_in_detail.txt | Started with a couple of announcements. There are two handouts for today. There are the lecture notes for today, which are this handout 25. And you also have an additional handout 26, which we're going to go through. It's a little calculation example I was hoping we could go through during today's lecture and have you do some calculations. So it's not a homework problem. We're going to do that during the lecture. Let's see. As far as homeworks, I don't have any to pass out quite yet. The TA is still working on those. And she's out today. But I do have the clipboard, which I will have up front here if you want to sign up for your final project. We started circulating that last time, where I'm asking people to put your name down. And even if you don't know your topic, if you can check whether you want to do a written report or an oral presentation to the class, that would be helpful. And then once you know your topic, it would be good to fill that in, as well. All right. So I'll leave that up here for now. And you can come up towards the end and sign up. OK, so today's lecture is actually going to be the final lecture on ion implantation and transient enhanced diffusion. Let me review here on handout 25, the first page-- review what we've talked about so far. We've talked about ion implanted profiles. And we said you can model them very simply as a Gaussian, or more accurately as a Pearson IV or dual Pearson distribution. I'll show you some examples of real implants today. Or they can be simulated by numerical techniques such as Monte Carlo. Last time we talked a lot about the damage modeling. And we introduced something called the "plus n" model, where n is a small number, about 1. And this is the model for residual damage that says that there are roughly n excess silicon interstitials injected per primary ion, where n is a small number on the order of 1. Then last time we talked about how these excess interstitials cluster very, very quickly into defects called 311 defects. And later, these defects then dissolve during annealing. And their evaporation rate is really what determines the kinetics of transient enhanced diffusion-- determines how long TED lasts, and also determines the magnitude of TED. In fact, we had a clustering slash evaporation model that we talked about last time. And that was used to explain the time and temperature dependence, and to a certain extent, the dose and energy dependence of TED in a rough way. So what I wanted to cover this time is to actually give you some real examples of ion implantation profiles, data, and a little bit of simulations. I want to spend a few minutes-- maybe 10, 15 minutes if we have time in class. I know a lot of people are sleeping in because the Red Sox won the World Series last night. But for those of you who are here, that's what handout number 26 is all about. We'll calculate together a little simple calculation on transient enhanced diffusion. And then I want to finish the lecture and spend the rest of the time on the effect of TED on devices. Now that we have these models for TED, we can go back and revisit the reverse short channel effect and talk about that in more detail. OK, so let's go on to slide number 2. 
And what I'm showing here-- these are some actual SUPREM output files, modeling ion implant profiles, just to show you a practical example of how you might use SUPREM-IV along with a dual Pearson. A dual Pearson means you take two Pearson IV distributions and you add them together. And you can use this in a quasi-empirical way to simulate the ion channeling tail. So there are four different plots here, four different panels. Look at the first one marked A. These are all for the same condition in terms of energy. These are all boron, BF2, implants. And the energy, the primary energy, is 65 keV. And then what we're looking at in each panel is a different dose. So before we get started with looking at those, let me just make a note. The total atomic mass of a molecule of BF2 is 49. You just add up two fluorines and a boron. You get mass 49. The mass of boron is 11. So as I mentioned last time, people sometimes do a BF2 implant to get a lower effective boron energy, or because it more easily amorphizes the silicon. For example, look at this little simple calculation. 65 kilovolt BF2 is equivalent to 65 times 11 over 49. So you assume that the energy is partitioned according to the ratio of the masses for a boron implant. So if I do 65 keV BF2, it's the same as if I implanted the boron atom itself at about 15 keV. So especially in the old days, it was very, very difficult to build ion implanters that could implant at low energies. Now they have implanters that can get you down to 1, 2, 3 keV. But the beam current tends to suffer when you go that low. So instead of doing very low energy boron, people would do a higher energy molecule, BF2. So you'll often see BF2 implants. And so now let's go ahead and look at these BF2 implants. And a couple of things are being shown here-- the plus signs, or the symbols, are the actual data. These data were taken from Tasch's work back in 1989. These are actual SIMS profiles. And the lines, the smooth lines, are SUPREM simulations. So for example, in part A, or plot number A, we have a dose that's very high, 5e15. And so what that means is, with such a high dose of BF2, throughout most of the time of the implant, you're really going to be implanting into amorphous material, because the BF2 will amorphize the silicon. So this is the profile that you get. Notice it looks pretty much like a standard Pearson IV. And there's this small exponential tail. But that tail doesn't really become obvious or evident until you get about three orders of magnitude down from the peak. And then you start to see the tail. So the way this profile is constructed is what dual Pearson means. It's two Pearson IVs added together. So there's one Pearson IV here, which is this dashed line that's labeled as the amorphous profile. And then there's a second Pearson IV here, which has also, unfortunately, a dashed line, but you can see it has a slightly different slope. And that's labeled the channel profile. Those two Pearson IVs are added up with a certain ratio. This ratio here, in this case, is 0.969. And you get the total profile. And you can see the simulation-- obviously, if you pick the right number for the ratio, the simulation then fits the entire profile, including this region here at the near surface and the tail, the channel tail. Let's go to a lower dose, say 1.5e15. 
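To make that energy-partition estimate explicit, here it is written out, using only the masses quoted in the lecture:

```latex
E_{\mathrm{B}} \;\approx\; E_{\mathrm{BF_2}} \cdot \frac{m_{\mathrm{B}}}{m_{\mathrm{BF_2}}}
\;=\; 65~\mathrm{keV} \times \frac{11}{49} \;\approx\; 14.6~\mathrm{keV} \;\approx\; 15~\mathrm{keV}
```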
Now you do see-- again, we see two Pearsons, this one here and the one that's associated with the channel profile. But now the channel profile is a higher fraction of the total. And you see this exponential tail coming in a little bit more obviously. And if you go down to panel number C, or plot number C, the dose is even lower, 5e14. Not that much amorphization going on. So you have a large, very prominent channel tail. So the second Pearson is really dominating. And finally, at a low dose of 2e13, there is no amorphization. So the part that really dominates in the dual Pearson is primarily that associated with the channel profile. And the ratio is actually 0, so it's just the channel one. So that's how dual Pearson works. You can see it is two Pearson IVs added together, one of which takes care of the channel profile, and the other of which takes care of the amorphous part. And you add them together depending on a certain number called the ratio. So it's quasi-empirical. So that was for BF2. Let me just show you some simulations. The different symbols here are not data, so I apologize. The different symbols represent different SUPREM simulations. Just to give you an idea-- when you do a simulation, depending on which model you choose, you'll get slightly different profiles. And it's up to you to figure out, by comparing to data or by looking at the literature, which one of these is closest to the truth. For example, just as a comparison, if you look at the open circles here, this profile marked Gaussian-- it's a little bit hard because there are a lot of symbols. But there's an open circle profile, and it's quite symmetric, as it has to be, because it's Gaussian. It has no channel tail. If you look at the open triangles, that's the Pearson IV. A single Pearson looks a lot like the Gaussian, not much difference. A little bit skewed. The dual Pearson is these open boxes. And it does a little bit better job, you would think-- well, we don't know. Actually, we haven't seen the real data here. It has the channel tail incorporated in it. And look at the Monte Carlo. Now presumably, the Monte Carlo has the most physics built into the simulation. And it looks a little noisy because, of course, we only follow a certain number of ions. But in fact, the dual Pearson looks reasonably close to the Monte Carlo. The Monte Carlo is attempting here to include a little bit of the ion channeling. So different models give you different answers in SUPREM, and you have to figure out which one suits your particular application best. So now what I want to do is take a few minutes here in class, if we go on to slide number 4 in the handout. And this is also the first page of your handout number 26, if you want to look at that. They're identical. And what I was hoping you could do is get together as a group-- there are not enough people here to really split up into individual groups because everyone, again, is sleeping in because Boston won-- and go through this and make some simple calculations together for the next 5 or 10 minutes. Let me just read through it before we start. So we have an engineer who wants to form a shallow boron-doped source drain for an advanced technology. And the question is, the manager is wondering whether to use a batch furnace or a rapid thermal anneal. And the furnace anneal they are considering for this implant would be 800 degrees for one hour. So that's a relatively long time at low temperature. 
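As a compact way to write down the dual Pearson construction just described (this is one common form; treating each piece as a normalized distribution weighted by the dose is an assumption about the bookkeeping, but the idea matches the plots):

```latex
C(x) \;=\; Q \left[\, r\, f_{\mathrm{amorphous}}(x) \;+\; (1 - r)\, f_{\mathrm{channel}}(x) \,\right]
```

where Q is the implanted dose, each f is a Pearson IV, and r is the ratio-- 0.969 for plot A, falling toward 0 as the dose drops and the channel profile takes over.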
Or should they use a rapid thermal processing machine, or RTA, at 1,050 for one second? And the implant that they want to activate is boron at 20 keV, relatively low energy, and a 5e14 dose. So what we're asked to do here in A is to make a rough estimate, using a square root of Dt estimate, of how far the dopants move during an 800 degree, one-hour anneal versus 1,050 for one second. Again, it's not necessarily Gaussian diffusion, but you can calculate the root Dt. And this is without any TED effects. And this is pretty simple because I've printed at the bottom of the slide the intrinsic diffusivities under equilibrium conditions. D at 800 is this number. And the diffusivity of boron at 1,050 is this number. So we have those numbers written right there. So part A is pretty easy. Now, part B-- what we want to try to do is include the effects of TED. So we want to use the charts that we handed out last time. And in fact, they're in handout number 26. If you didn't bring your old handouts, I've included them again in handout number 26. Use these charts to figure out the expected enhancement of the diffusivity. Remember we said the diffusivity is now the equilibrium value, D star, times CI over CI star. That's the effective diffusivity. And use how long TED lasts to calculate the real square root of Dt, including TED effects. So these are the two things I'd like you to try to do. And I'm hoping you can work as a team on this. Maybe get together with a couple of folks who are near you. Somebody here hopefully has a calculator if you need one to do this. I'm not so interested in exact numbers. I just want to get an idea of how the answers play out. So why don't you-- I'm going to give you 5 or 10 minutes to get together and work on that. And we'll stop the lecture now, and then we'll come back and we'll see who has an answer that looks reasonable, and we'll talk it through. OK? So we can stop the recording at this point, as well, because it won't be that interesting to record you punching your calculators. 1,300. What can you say about the first 1,300 seconds? How high is the diffusivity during the first 1,300 seconds? It's going to be 7,000 times higher than it is in the next 2,000 seconds or so. So who cares, in some sense, about the next 2,000? Now, if this, instead of being 3,600 seconds, were orders of magnitude more seconds, yeah, then at some point you actually have to take into account the normal diffusion. So you don't completely ignore the actual time. The actual time that the thing is in the furnace is 3,600 seconds. It's just that for on the order of almost a third of that, the diffusivity is enhanced by 7,000. So we can ignore the rest of the time. But you want to keep that in the back of your mind. So then were you able to calculate either a Dt enhanced or the square root of Dt enhanced? Either one. What'd you get for that? AUDIENCE: Dt 1.06. Is that the unit? JUDY HOYT: The Dt you got-- OK, I ended up somehow with 3.5 times 10 to the -10 centimeters squared. So I multiplied this-- if we multiply the ordinary diffusivity times that, I got 7,000 times 3.7 times 10 to the -17. So the actual diffusivity I got was 2.6 times 10 to the -13 centimeters squared per second during the transient, during the time when the diffusivity was enhanced. And then multiply that by 1,333. Oh, my numbers-- I had 1,333 here. Maybe I was carrying a few more significant digits. AUDIENCE: Well, the numbers should be in the root. JUDY HOYT: Oh, root Dt. OK. 
So for the square root you got what? 1.8? Right. Times 10 to the -5 centimeters. So in angstroms, to put it on the same scale-- that's about 1,860 angstroms, just to put it in the same units. So the intrinsic diffusion isn't even close. So really, the TED clearly dominates in that case. And that's quite a bit of motion. Now that, you can easily see on a SIMS profile, and a difference of that much could definitely affect your device performance and junction depth. So how about 1,050? The exact same formulation applies. But the nice thing is-- what can we say about CI over CI star at 1,050? It's a lot lower, right? I got about 550. So the enhancement of the diffusivity is only 550 times. That's the diffusivity-- and how long does it last? And the TED at 1,050-- well, if you go back to your magic curve, the phosphorus one, it doesn't look like it's going to last very long. Again, you have to multiply by 5 and by 0.08 over 0.06, the RP ratio. So the amount of time it looks like TED lasts under this dose condition-- I got about 0.67 seconds. 2/3. 2/3 of a second. Basically, 2/3 of the anneal. Remember, the total anneal time was one second. So again, the last one third of the anneal I'm going to ignore, just because this is still a large enhancement factor. It's 550. So I'll ignore it. If it had been 1,050 for hours, then obviously we can't ignore the normal diffusion. OK. So then in that case, the root Dt at 1,050-- I ended up with something about 420 angstroms. Is that close to what you guys got? 4.2 times 10 to the -6 centimeters-- again, putting it back into angstroms. So again, we see this kind of interesting, anomalous, and maybe initially somewhat non-intuitive result. Here we have a higher temperature-- admittedly a much shorter time; it's only a second-- and we get a lot less root Dt, a lot less motion or broadening, if we do a 1,050 rapid thermal anneal compared to putting it in the furnace. You might say, well, 800 is really low. It should be a safe temperature. It's only a small amount of intrinsic motion. But in fact, for this implant, according to this calculation of TED, you're much better off doing a 1,050 anneal for a second. So that's just to give you a feel for where the numbers come from and how the whole thing works. I think once you work through an example like that, you have a much better feel for TED. OK. Good. Well, it seemed like everybody had a good handle on that. So let's go back to the regular handout, handout 25. And I think you've got a good feel for these simple calculations and how powerful it is to use those two or three charts. With two or three charts, you can do a rough back-of-the-envelope calculation and say a lot about TED. For anything more sophisticated than what we just did in the last 10 minutes, you probably should be using SUPREM at that point-- or some simulator, I should say. All right, let's go on. So just to remind ourselves where those charts came from-- I'm on slide number 5 now in the handout-- they came from this observation that was made at Bell Labs and confirmed by a lot of different workers: that the time scale of TED was the same as the time scale of the shrinkage of these now-famous 311 defects, and that at low temperatures, the 311s hang around a lot longer than at high temperatures. And of course, that's why TED lasts a lot longer at low temperatures. OK, so just to remind ourselves-- the little hand calculation we just did is exactly representative of this picture. 
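Here is a minimal Python sketch of the back-of-the-envelope calculation we just worked through. The 800-degree D star, the 7,000x and 550x enhancements, and the durations are the numbers quoted in class; the 1,050-degree D star is not quoted in the transcript, so the value below is an assumed placeholder chosen to reproduce the roughly 420-angstrom in-class answer.

```python
import math

CM_TO_ANG = 1e8  # angstroms per centimeter

def root_Dt_ang(D_star, enhancement, t_enh):
    """sqrt(D_eff * t) with D_eff = D* x (CI/CI*), ignoring the small
    contribution from normal diffusion after the transient ends."""
    return math.sqrt(D_star * enhancement * t_enh) * CM_TO_ANG

# 800 C furnace, 1 hour: TED enhancement ~7,000x lasting ~1,333 s
D_800 = 3.7e-17   # cm^2/s, intrinsic boron diffusivity quoted in class
print(root_Dt_ang(D_800, 7000, 1333))          # ~1,860 angstroms with TED
print(math.sqrt(D_800 * 3600) * CM_TO_ANG)     # a few tens of angstroms intrinsic

# 1,050 C RTA, 1 s: enhancement ~550x lasting ~0.67 s
D_1050 = 4.8e-14  # cm^2/s (assumed placeholder; read the real value off the chart)
print(root_Dt_ang(D_1050, 550, 0.67))          # ~420 angstroms with TED
```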
The general picture of TED that we showed last time: we have a certain enhancement factor-- CI over CI star max. Again, that's a ballpark. That's the maximum enhancement. So we assume it's just a constant enhancement throughout the entire period of the steady state while the 311s are decaying. And then we suddenly say there's a rapid exponential decay right after some period of time called tau enhanced, which we just calculated in our example. So that's the simple model. The critical parameters that determine TED? Well, the amount of TED, we just said, is set by the supersaturation level, which we write as I over I star or CI over CI star. And we know that's a function of temperature. And this is the functional dependence, or you can read it right off that plot. And then there's the duration of the steady-state condition, which is the so-called tau enhanced time. Now, that tau enhanced depends linearly on dose. And doses have a wide range. So this is the key. It depends linearly on Q. So if you implant 20 times or 100 times more dose, you get 20 or 100 times more interstitials. So it'll last that much longer. So notice, somewhat unintuitively, the dose doesn't determine CI over CI star. The dose determines how long the transient lasts. And there is a certain small dependence on RP. The energy dependence in the equation is represented through the dependence on RP. So that's the general picture. And I think that example helps us understand it. Now, for anything a little more complicated, I said we should be using a simulator. So in the next couple of slides, starting with slide 7, I'm going to show you some SUPREM-IV examples using more and more sophisticated models. So for the first model, I'm going to use SUPREM-IV with a Gaussian implant assumption, which of course is not very sophisticated. And I'm going to be annealing this arsenic implant at 1,000 degrees C for different times. And if you look at the open boxes, that's my initial ion implanted arsenic. And what's being shown here is a series of different curves, different symbols for different times. And the model that was used in these particular simulations is called the Fermi model in SUPREM. When you do your next homework, there'll be different models you have to invoke; the one here is called Fermi. As the name suggests, it takes into account the Fermi level, and therefore the concentration dependence of the diffusivity. It does not take into account any TED. So when you tell it to use PD method equals Fermi, you're not taking into account TED. So this is just normal, ordinary diffusion. And the junction depth proceeds according to a square root Dt type of behavior, as you would expect. It does take into account the concentration dependence. And you see the box-like profile as a result. Now we go on to a little more sophisticated model. This is that same implant, 34 keV, 4e14. But now, on slide number 8, what we're doing is using a Monte Carlo implant model that changes the as-implanted profile slightly. But the reason we use that is because the Monte Carlo model in SUPREM keeps track of the damage-- not only does it track the incoming ions, but every time a silicon atom gets displaced, it can keep track of that. So it can keep track of the damage. And then we're using a model in SUPREM called the fully coupled model. It takes into account the following factors-- the impact of the Fermi level. 
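As a compact summary of the simple model just described, in the notation used in class (the proportionality for tau is the rough scaling stated here, not an exact formula):

```latex
D_{\mathrm{eff}} \;=\; D^{*}\,\frac{C_I}{C_I^{*}},
\qquad
\tau_{\mathrm{enh}} \;\propto\; Q \ \ (\text{with a weaker dependence on } R_p),
\qquad
\sqrt{Dt}\,\big|_{\mathrm{TED}} \;\approx\; \sqrt{D_{\mathrm{eff}}\,\tau_{\mathrm{enh}}}
```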
It also takes into account point defect injection and 311 cluster dissolution. It has a model there for 311s and their kinetics. And also, because it's fully coupled, defect gradients can drive diffusion. So if there's a two-dimensional situation, like in the reverse short channel effect, the gradient in the defects can actually drive diffusion. But this is what you would see if you use this fully coupled model. Look at these first four curves-- they all lie straight on top of each other, 30 seconds to 120 seconds. They're all the same. Now, why is that? Well, essentially, it's because of TED. Whether you're at 30 seconds or 120 seconds, there's some portion of tau enhanced, a certain enhancement time. And during that period, CI over CI star is so large, because of the amount of 311s that end up being formed, that it doesn't matter whether you're doing 30 seconds or 120. The diffusion is dominated by whatever happens during the transient when the 311s are evaporating. And then after that, if you go to longer times-- say 600 seconds, 1,200, and 1,800 seconds-- you start to see normal diffusion being large enough. Because the time is large enough, normal diffusion now overtakes the TED, and you can actually see that. And so this is an example of why people didn't discover TED until they had rapid thermal annealers-- because in a furnace, you can't anneal for very short times. So you could never access anneal times that were this short. And you never would discover that, in fact, TED was going on. So a signature of TED, if you do an experiment on annealing an implant-- a very clear signature, if you do SIMS at different times, is that at very short times, the profiles all look the same. That's an immediate signature. Aha, there's some kind of TED going on. Some very short transient is dominating all the diffusion. And then you'll go to long enough times that you'll start to see normal diffusion taking place. And these are SUPREM simulations. There's no actual data here-- the different symbols are not data, I apologize. The symbols are for the different simulations. OK, so that was a couple of examples. Let's go on to slide 9. We're going to go back now-- now that we've talked about TED and we understand the models in much more detail, I want to remind ourselves why we want to limit diffusion, and then talk about some particular device impacts of TED. Well, we already know we want to keep junctions shallow. These junctions here, XJ, either in the deep source drain, and particularly in the shallow source drain extension, have to be kept shallow in order to do gate length scaling. We need steeper lateral junctions. So in this direction, in the L direction, we need the junctions to be very steep in order to lower the series resistance at a given effective channel length. We need very small underdiffusion under the gate to reduce the overlap capacitance. If this region diffuses too far under the gate, then you'll have a lot of overlap capacitance. And that's going to slow down the circuit speed. We talked about needing a retrograde well in order to get better mobility while controlling short channel effects. We need this retrograde well in depth-- we need to retrograde the profile and decrease it towards the surface. And there are these fancy little angled halo implants. These halos here are shown in the bright red. 
These are implants that are done at an angle to the gate in order to put the dopant just right where we want it. And if we're using this angled implant to obtain the right profile, we'd like to not have that diffuse all over the device, because then we don't get the shape that we want. So for all of these reasons, we need to limit diffusion. So now that we understand TED, I want to go back to this reverse short channel effect that I introduced three or four lectures ago, and I think we'll have a better understanding this time. If you want to learn a little bit more about it, there's an article by Rafferty in IEDM of '93, or Crowder at IBM in '95, where this was first explained. And what people were trying to explain is-- the device physicists in the early 90s, and the circuit guys, were all finding that if they plotted their VT as a function of the gate length of the transistors on a chip or on a wafer-- what they expect is the normal short channel effect. VT is fairly constant, and as you get to short channels, the VT is supposed to roll off due to well-known electrostatics. What they found instead is the so-called reverse short channel effect. It's the reverse of one's expectations. In fact, the VT, the threshold voltage on these devices, was going up as you made them shorter. And it was going up and peaking quite a bit. And then eventually, the normal short channel effect would take place. So this so-called reverse short channel effect was quite bothersome in the early days, because people didn't understand why, when they scaled to these channel lengths, the VT would be going up. How could it be? Well, we talked qualitatively about the reasoning for this several lectures ago. Let me review the qualitative picture, and then we'll do the more quantitative one. This is an actual simulation from SUPREM of what's going on in the reverse short channel effect. So we have our gate. You see the sidewall spacers. This little region under here is the source extension. This is the drain extension, and the deep source and the deep drain. And what's happened is that we have implanted these source drain regions and their extensions. We've implanted, say, with arsenic. And we've generated a flux. In fact, these arrows are supposed to indicate the interstitial fluxes. So these are silicon interstitial atoms that are diffusing. And they're then recombining. They can recombine in the bulk or they can recombine at the surface. And because of the way the recombination goes at the surface, they end up creating a flux that goes like this. And that flux and the gradient of the interstitials is so sharp that it actually can drive uphill diffusion of the boron. And you can see that uphill diffusion. You can't see it in this plot, but I can take a cut right through the center here at X equals 0 and plot it versus depth-- so boron concentration versus depth at the center of the channel. If you have a 1-micron device on the chip, it looks like this. If you have a 0.18-micron device on the chip, it looks like the green curve. And what do you see? In the green curve, for one thing, the surface concentration of boron is a lot higher than it is in the 1-micron device. So that explains it to the circuit designer: oh, I have a higher concentration of boron at the surface. That makes my VT higher. So that explains the VT effect. And look at even the peak of the boron profile in the 0.18-micron device. Even the peak moved towards the surface. 
Again, that's totally non-Fickian diffusion. If I give you a Gaussian profile and you diffuse it in your calculator, the peak stays put. It just goes down. The peak doesn't suddenly move over to the left or the right. That's not Gaussian. Well, what's pulling this peak over is the fact that there are these interstitials that have a gradient. And they drag the boron peak with them, because remember, boron likes to diffuse with interstitials. It's diffusing as a pair because of the fully coupled diffusion. So this is the qualitative explanation, at least, of what's going on in the reverse short channel effect. And now let's go through the articles by Rafferty and Crowder a little more carefully, and you can see how they explain this in a little more detail. These are some of the key process steps that influence the channel profile. So the reverse short channel effect ends up being all about what determines the channel profile. So step number one, we know we do an ion implant, and it has some shape. This is supposed to represent, if you turn your head sideways, the shape of an implant. So we do an implant at a couple of energies. It peaks here, and then there's another peak down here. That's what we expect it to look like in the as-implanted case. Step number two in making the device: well, we grow a gate oxide. All right, there's a certain amount of thermal budget associated with the gate oxide-- 800 degrees, whatever. It'll diffuse a little. OK, we can understand that. That's normal diffusion. Step number three, we put down a gate and we pattern it. So it has a pattern like that. Usually polysilicon gates are put down at 600, 500-- very low temperatures. Not a whole lot of motion going on. All right, now step number four is called the lightly doped drain. Actually, we don't use that terminology anymore in devices. We call it the source drain extension. It's a shallow source drain right here. That gets ion implanted, and it gets masked by the gate. So when you're doing step number four, the sidewall is not there. Step number five hasn't been done yet. So you have this shallow source drain extension. So there you're injecting a certain number of point defects. OK? Now I take that-- I've injected these point defects, and now I have to form the sidewall spacer. It depends on what material forms the spacer. If I form it of silicon nitride, that goes down at 800 for about an hour. Uh oh. Bad temperature. 800 for about an hour-- we just did that calculation. At 800 for about an hour, you can get a lot of TED. You can have a 7,000 times, perhaps, enhancement in your diffusivity. So you've got to watch out for that. That's why nitride spacers are a little tricky. If you do a low-temperature oxide deposited spacer, you can do that at 400, so it's not so bad. You don't have to worry so much. All right, so there's a possible indication of a problem. And now, number six, we do the deep source drain implant. Again, that's probably going to be arsenic. Pretty high dose, 10 to the 15. Not a great thing in terms of TED. I'm introducing a lot of point defects that can then come in here and enhance the diffusion of the boron profile. So I just wanted to give you the order of the steps so you can understand how all these things fit together to determine the channel profile. And if you want to go a little deeper into Rafferty's IEDM article, that's being shown here on slide number 13 of your handout. What he calculated here is-- so this is the edge of the gate. 
This is a two-dimensional plot out of a two-dimensional simulator. And these are contours. And what he's done is he's ion implanted a certain amount of damage or dose into the drain region. And he's calculated something called the time integral of the supersaturation ratio. So it is essentially-- he calls it the enhanced time, but it's the integral of CI over CI star, integrated over the anneal. And he's plotted it in terms of contours. So these are profiles of damage caused by this shallow source drain implant. And he represents this damage by this integral of CI over CI star, which he calls the enhanced time, so to speak. And a number here that corresponds to 10 to the 6 has the units of seconds. So it's as if you did a 10 to the 6 second anneal, essentially. And that corresponds to an average supersaturation of about a factor of 1,000, because the real time is about 20 minutes-- a 20-minute real time is equivalent to 1,200 seconds. So there's a huge amount of enhancement. But look at these contours-- look how they go down. You're going from 10 to the 6 here down to 10 to the 5 in this range. So again, I have a large gradient in this CI over CI star. And the gradient is pointing me, pushing me, towards the gate and towards the gate oxide. Don't forget, he's saying that the oxide interface under here acts as a sink for recombination of interstitials. OK, so this is the origin of the reverse short channel effect as he explained it on slide 14. The implant damage from the shallow source drains sets up a retrograde CI over CI star profile under the gate. And the gradient in this profile, this grad of I over I star-- because again, the boron is diffusing as a pair with interstitials-- results in an extra flux in the boron diffusion that wouldn't be there if it were diffusing alone. So that retrograde profile causes a boron pileup at the interface. And the shorter the channel, the higher the pileup, because as you make the channel shorter, you bring those source drains closer and closer to the center of the channel. So that explains the reverse short channel effect-- why the boron pileup gets bigger and bigger, and therefore the VT goes up and up, for a shorter device. And in fact, here's an example. If we go on to slide 15, from his article, this is simulated boron concentration versus depth. Now, he's doing this at the center of the channel. Say a long channel device looks like this. Here's the boron he simulated. It basically has the as-implanted shape. There are two implants. There's a low energy implant-- you can see its peak-- and a slightly higher one. So this boron at the center looks sort of like this. Now, that was the boron at the center of a 2-micron long channel. If you go to a 0.45-micron device, it looks like this one. There's a huge pileup at the surface. And the peak is close to the surface. So the surface doping could be three to five times different when you have a very short channel device compared to a long channel device, because of the influence of these point defect gradients that get set up. And these very different surface dopings can then be used to calculate the VT difference between these devices. And that's exactly what Rafferty did. And this is taken, again, from his article, 1993. 
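Written out, Rafferty's contour quantity-- the "enhanced time"-- is just the supersaturation integrated over the anneal, which is why a 10 to the 6 second contour over a 1,200-second real anneal implies an average supersaturation of roughly a thousand:

```latex
t_{\mathrm{enh}} \;=\; \int_{0}^{t_{\mathrm{anneal}}} \frac{C_I(t')}{C_I^{*}}\, dt',
\qquad
\left\langle \frac{C_I}{C_I^{*}} \right\rangle
\;=\; \frac{t_{\mathrm{enh}}}{t_{\mathrm{anneal}}}
\;\approx\; \frac{10^{6}~\mathrm{s}}{1.2\times 10^{3}~\mathrm{s}} \;\sim\; 10^{3}
```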
He calculated here, based on those profiles, what he thought the threshold voltage should be as a function of 1 over the channel length-- or, if you want, you can read the channel length on the top axis-- for different biases. And if you look at his calculations here, this smooth, solid line includes the TED effect. He predicted that the VT would go up like this as you go to shorter channel lengths. And the data-- these are the experimental data they obtained from devices at Bell Labs-- indeed, you can see the VT going up. So look, the VT for a 2-micron device is over here. The threshold voltage is about a volt. And for a 0.45-micron device, which is, I think, right about here, the VT is about 1.3 volts. And that agrees very well. So that roll-up of the VT agreed very well with the simulation when he included the TED. The dashed line is a simulation without the TED. And indeed, you see the normal short channel effect going down like this. So the only way people could really understand the reverse short channel effect was to really understand in detail what was happening with the boron TED, and the influence of the damage on each side of the channel on what was going on inside, underneath the gate. So that's kind of a famous paper on how understanding these process models ends up influencing the device model, which ends up influencing the circuit model. So in simulators, there's been a certain amount of work since the early 90s on how to model the reverse short channel effect and what the important parameters are. Well, obviously, the magnitude of the initial implanted damage. You know the dose of the source and drain extensions and the deep source drain. So that's important. But you need to figure out exactly how much I over I star there is. We did it in our textbook using a very simple analytic calculation. We calculate the maximum CI over CI star. Remember, that was a ratio of k reverse to k forward in that equation. And we put some estimates down for that, but it's not clear exactly how accurate they are. So depending on your simulator, the way it's calculated will be slightly different. So these are different clustering factors in this simulator. They have a parameter called a cluster factor that will adjust, essentially, the CI over CI star max. And depending on that CI over CI star max, you'll get different amounts of reverse short channel effect. So I apologize here-- the vertical axis here is VT. That got cut off somehow. So this is the threshold voltage. And this is the gate length, calculated. And you can see, in this particular simulator-- this is not SUPREM. This is a Silvaco simulator. But for a cluster factor of 0, they didn't predict any reverse short channel effect. When they allow clustering, and they allow these 311s to come in, you do see a roll-up, a reverse short channel effect. And depending on the magnitude of the cluster factor, the reverse short channel effect can be more or less prominent. So usually, depending on the simulator you use, there'll be a couple of parameters one can tweak to affect both the initial implant damage and the recombination rate. This is a two-dimensional effect. So it's not only how much damage and how many 311s I end up with-- what the interstitial concentration is-- that's important; how those fluxes diffuse and how they recombine at that oxide interface will determine the actual gradient of the interstitials. 
So the k sub s factor is also important-- the recombination rate of interstitials at the gate oxide interface in the channel. And that's a parameter that one can adjust. Hopefully it's fairly well-known, but by adjusting that parameter, you can change these curves, as well. Annealing temperature is important, as you know. Again, this is from that Silvaco simulator, as well. Again, VT versus gate length. And the different color curves here are for different annealing temperatures. And as you might imagine, low temperatures give you more reverse short channel effect, because we know that at low temperatures, TED is more prominent. The CI over CI star hangs around longer. The time of the enhancement is worse. So here at 850, in the red, you see a little bit more of a roll-up than if you were to anneal at very high temperatures. In this particular anneal-- I don't know what the dose was-- there wasn't much reverse short channel effect. So lower temperatures tend to change that. And that's just because the CI over CI star term, as well as the tau enhanced term, depends on temperature. And in fact, the next slide in your handout, slide number 19, is just that same plot that we used to do the example a few minutes ago, to remind you that CI over CI star is increasing as we go lower in temperature. And that's why you see more of a reverse short channel effect as you lower the temperature. So not only does the VT change, but-- just to give you an idea of other device parameters that can change depending on the channel profile-- that boron pileup at the surface underneath the gate can decrease the channel mobility. At very high concentrations of channel dopant, you get more scattering of the carriers. So the mobility can go down. And in fact, this is a plot of a calculation of the mobility that Rafferty made in that article, here on slide 20. So he's calculated the mobility as a function of the channel length for two different doses. This EXT dose-- EXT stands for the source drain extension. So that's the dose that gets implanted right next to the gate. Right after you cut and define the gate poly, you implant the source drain extensions. And here, this is for 3e12, a relatively low dose. Not that many interstitials are injected. The boron profile looks pretty much as implanted, and you don't get much pileup. So the mobility stays pretty high until you get to very short channels. If you do an extension dose here that's about three times that-- say about 8e12-- what happens is you get this reverse short channel effect. You get the boron being drawn to the surface. A very high amount of boron in the channel means you have a lot of ionized acceptors. And the electrons feel those ionized acceptors. They get scattered by Coulombic scattering. And in fact, the mobility then can go down. So for a higher extension dose, a different profile causes pileup, and you get a lower mobility. Lower mobility can mean you end up lowering your current drive compared to what it should have been. And that affects the overall circuit speed. So again, subtle changes in the channel profile-- which have nothing to do with how I implanted the channel; it's how I implanted the neighboring regions, the source and drain-- end up impacting not only the VT, but the mobility of the device, and therefore the current drive. Well, this was a neat paper. Now, several years later, in 1995, Scott Crowder came up with an interesting idea. 
Knowing what he knew about 311s-- what we now know in our class-- he said, OK, well, 311s anneal out a lot faster at high temperature. So let's say I have to do an 800-degree step because I'm going to have to put down those nitride spacers. Maybe I can still do that if, before I do it, I do a short 1,000-degree anneal. So let's do a very short anneal, one second at 1,000, evaporate all those 311s, and get rid of them. Then I can put the wafer in the furnace and put down my 800-degree nitride. All the TED will be over with. So I use a high temperature to cause the 311s to evaporate, to get rid of all those excess interstitials in a very short time. I should have less of the reverse short channel effect-- fewer total interstitials. And that's what, in fact, he showed. The starred symbols are when he did the high temperature rapid thermal anneal first, before he did the 800-degree C, longer time anneal. And he saw he had less roll-up, less reverse short channel effect. The open squares represent the case where he did the 800-degree C anneal in the furnace first, and then did the 1,000. And you see a lot of reverse short channel effect. Interesting. It's the exact same amount of time in the furnace and in the RTA in both wafers. He just changed the order of the operations. So this is kind of interesting: given what we know about 311 defects and how they anneal, it's important for us to think about the order of the steps in which we make a MOSFET-- where we insert the anneals. Now, you don't always have a choice. You have to cut the gate before you do the source drain extension so it's self-aligned and all that. But this was kind of a clever experiment, just to show that the process order can have a big effect because of things like 311s. And again, slide 22 is just a reminder-- we've already seen this plot of the enhancement duration several times. And what he did was this 1,000-degree, 1-second anneal. He popped the temperature up, and so basically within a second or so, he can get rid of all the 311s. The tau enhanced is only a few seconds. And then he could go on later and, without any 311s around-- or with very few-- put it in the furnace and go down to 800. So that was his proposal. Another thing that he showed in that paper-- this is from that same paper by Scott Crowder, IEDM of 1995-- he did an interesting comparison. He also compared devices made by very similar processes on bulk, so on regular Czochralski wafers, to the similar device made on an SOI wafer. Remember, we said there's this technology called SOI where you can have a single-crystal silicon layer on insulator. Very high quality material. And you can make MOSFETs in that material. And what he found was, when he did the source and drain implants-- of course, even in silicon-on-insulator, you do inject interstitials up here. And remember, the interstitials tend to get injected down, and then they come back up. But the ones that went down are now running up against an interface between oxide and silicon. And it turns out that interface is a very good sink. There's a lot of recombination that can take place at this interface. K sub s is fairly large between oxide and single-crystal silicon. So a lot of interstitials can be sunk, or can be absorbed, there. Whereas in the bulk, we get these interstitial fluxes that come in and they don't get absorbed, because there's no oxide down there. You can end up getting fluxes that look like these arrows, driving the boron to the surface. 
So what he showed was, in fact, on an SOI wafer, you don't get as much pileup of boron underneath the gate. And on the SOI wafer, in this plot-- the y-axis label didn't show up here, but the y-axis is the threshold voltage, VT-- again, the roll-up of the VT is a lot less on an SOI wafer, subject to the same kind of annealing, compared to this bulk wafer shown by the solid line, which had a lot more reverse short channel effect. Interesting idea. Use SOI. Use the fact that there's a sink here for interstitials to sink out a lot of the interstitials that you implanted, and get less motion of the channel doping. This tells us right away, though, that if we're doing a process in SOI, we can do the exact same process and get quite a different result in SOI compared to bulk, because the channel doping profiles will not be the same. And so the simulators need to be able to simulate these effects. Another thing I want to do is just remind you of the usual order in which we form the channel, the gate, and the source drain in a MOSFET, since we are doing clever things with the order of steps. This is sort of a cartoon in PowerPoint. So this is supposed to be my wafer. These green regions here on the left and right are going to represent the shallow trench isolation. So green is my STI, which is my isolation region. And then typically, after you do shallow trench, you implant what they call SSR, super steep retrograde. If you're making a MOSFET, you usually have a shallow implant up here, or more lightly doped material near the surface, and you have the peak of the implant a little bit deeper. So this could be a boron implant. So that boron implant usually goes in pretty early in the process, right about here. And it's that boron implant that's going to determine your channel profile, and therefore your VT and things like that. After that, it sees the thermal budget of the gate oxide growth. That could be typically 800 degrees. So it's got to go through that diffusion. And then you make the gate. The gate is usually a very low-temperature process, and it's just etching. So there's no motion of the boron there. Then you implant the shallow source drains, and you use the gate as a mask. So now I'm doing implants of arsenic. And you're introducing a certain number of point defects here on the left and the right of the channel. Now I put it in the furnace and put on spacers, these green spacers. If they're oxide, I do it at a low temperature. LTO goes down at 400. It's not usually a problem. If they're silicon nitride spacers-- typically nitride LPCVD goes down at 800. So I could get a fair amount of TED at 800, especially because I have the implant damage introduced from the shallow source and drain. So watch out for nitride spacers. 800 for an hour to make these spacers, as we saw in our example, can really cause a lot of motion of that boron. Then we do the deep source drains using the spacers as the mask-- again, self-aligned. And then usually right after that, there's a final thermal anneal. So one idea people have had is, well, don't put the P type SSR implant in at the beginning. Why don't you put it in at the end of the process? And a very radical idea is to put it in even after the gate has already been formed. This is not being done in production, but it was a neat idea that people had in research. 
OK, you have to deal with all these point defects. Well, don't put the boron in until you've already annealed out a lot of those point defects. So here is an alternate process, on slide 25, to give you an idea of how device engineers were trying to get around TED to a certain extent-- an alternate process integration scheme for forming a MOSFET. And this was published back in 1998 by Philips, called channel profile engineering for 0.1-micron MOSFETs by doing through-the-gate ion implantation. So they were proposing a flow that goes like this. The conventional flow is on the right-- the conventional, usual sequence to make a MOSFET. As you form the P well, you implant the channel boron profile all the way up here. Then, like we just said, we do gate ox. We form the gate. We make spacers. We form the source drain extensions. Oh, I'm sorry-- that's some oxide deposition. Here's the sidewall spacer. Here's one of the killers, the nitride dep at 800. A lot of TED can happen there. Then the deep source drains. And then we do the RTA. So what they were saying is, instead of putting the channel implants in here, where they can diffuse during this nitride spacer step, take the channel implant and put it in towards the end, after you already have the nitride spacers in. The thing that's weird about that, though-- think about it. Now I have a topography that looks like this. Now I'm going to implant the channel. I have to make sure I give it enough energy so the boron can get through the gate. So you have to calculate that energy. And in fact, your boron profile now is going to look sort of like this. It's going to have this shape to it. It's going to be a little deeper here in the source and drain, where the gate doesn't exist. It'll be a little shallower here under the gate. That might be OK from an electrical point of view, but it is kind of strange. But the advantage they have is that the only thermal step it sees is the last high-temperature step. It never sees all the TED that would happen during the sidewall spacer step at 800. So that was an idea they had. One thing you might think, though-- if you're an electrical engineer, what do you think about ion implanting through a gate oxide? It's a little scary, because that gate oxide might only be 20 angstroms thick, right? Are you going to implant a high energy ion through it? It's not clear what damage takes place right at the interface between the oxide and the silicon-- interface states and things. So although this was a neat process, I don't think it was ever accepted in production. I'm not sure people thought it was reliable enough. But they did show, on the next page in their IEDM article, that they could achieve a dramatic improvement in the boron control. This is boron concentration versus depth. And the black line is the reference device. So that's the device that went through the ordinary flow, where they put the boron in at the start of the process. And it goes through everything. It sees the sidewall spacer, the nitride dep, all that. Lots of TED. The boron is essentially almost flat. You don't even see much of the initial implant. Whereas when they did through-the-gate-- as implanted and after processing-- so look at after processing here-- you can see there wasn't that much diffusion at all. So they were really able to control it much better, because it didn't see any of the TED from the nitride step. All it saw was the last high-temperature, 1,000-degree step. 
But as I say, for reliability reasons, I don't know if it was ever really accepted. They did show that they could reduce the reverse short channel effect. This is now on slide 27: VT as a function of gate length. And they have different things here. Here's the reference process, the black. That's when you put the boron in at the very beginning, the standard process flow. You see the roll-up, the reverse short channel effect. And through-the-gate is TGI. And they had several different doses. With TGI, you see no roll-up in VT. The VT is very flat until the conventional short channel effect takes over. So they were able to eliminate the roll-up because they had better control over the profile. They essentially eliminated a lot of the TED in the boron profile. I just want to mention, before we finish chapter 8, some new diffusion modeling issues that are in the literature right now, that people are working on today. Conditions where we have very high doses being implanted: the 311 model that we talked about is good in the intermediate dose regime, but doesn't really work for very high doses. So we need to model the type of damage that takes place at very high doses. That's being investigated by people today. Very wide energy ranges: there are people doing implants at less than a kilovolt today. There are some people trying to do implants very shallow, and some very deep ones, even greater than a megavolt. The physics of stopping and the physics of how to model those implants are not really all that well-known in these two ranges. We talked about pre-amorphization of the substrate. Prior to introducing boron, we said, well, hit it with silicon at a high enough dose that you can amorphize it, and then you can avoid channeling. But what kind of damage is produced by a pre-amorphizing implant, and what effect exactly it has, is not all that well understood. There's a whole bunch of new annealing techniques that have come out-- something called a spike anneal, where you take an RTA machine, you ramp up the lamps really fast, and you ramp them down immediately. And the wafer never even spends time at any one temperature. It just sort of goes up and down. You're accessing the sub-1-second time regime. Exactly what happens during those ramps is not completely understood in spike annealing. Laser annealing, where we take a laser and we scan it across, and we only heat the area for a nanosecond-- again, the kinetics of defect evolution in the nanosecond regime have not really been very well understood. So that's a very hot topic these days. So process conditions are changing from what they used to be. New mechanisms? Well, people already know 311s. That's kind of been beaten to death, I would say, at this point. A lot of work has been done. But end-of-range dislocation loops, which we never completely get rid of-- these are very important. If you do an amorphizing implant, you can't avoid them. Their effect on diffusion has never been completely understood. The clustering of dopants with interstitials, which may form stable defects and may affect the electrical solubility of the dopant-- how many electrons or holes you end up getting-- has not really been understood. So that's a new topic. And at interfaces, what the recombination rates are. As we put in more and different materials-- we're putting high-k into the problem now-- what happens when you have a high-k interface? How do point defects recombine at an interface between a high-k material and silicon, as opposed to SiO2? 
There are a lot of interesting new research topics that we haven't covered. And I won't go through the summary in any great detail. I think we've gone through all this. I encourage you to read through chapter 8 carefully. And I think we're at a stage now-- next time I'm going to talk about SUPREM-IV in detail-- where we have enough tools that we can really understand, to a reasonably high level, how people put processes together to make devices and to minimize dopant diffusion. OK, that's it. If you haven't signed up yet on the clipboard, I've got it up front. I'd be happy for you to sign up. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_I_KMeans_GMM_non_EM_Expectation_Maximization_I_2022_I_Lecture_12.txt | Hello, so welcome to the next section of CS 229. So what we're going to talk about in the next series of about four lectures is the start of unsupervised and kind of less supervised learning, and I'll make that concrete. And the plan is you're going to see a bunch of different ways of dealing with this fundamental problem of what to do when we don't have labels. So we're first going to look at an algorithm called k-means, and then we're going to look at this algorithm called GMM, the Gaussian Mixture Model. Those will both happen today. And we'll try and put those algorithms on kind of solid footing in the maximum likelihood framework. We'll then see a couple more algorithms that have to do with this unsupervised way of viewing the world. One of them you're going to use in your homework, called ICA. The reason I teach it is that it's kind of a fun algorithm. It's a weird bit of setup, but it's probably people's favorite homework problem when we look through the feedback. It's a fun problem. If you remember the cocktail problem from the first day, where you have people talking in a party, and you have microphones scattered around the edge, and you want to know who said what. You want to do what's sometimes called source separation and understand the sources. So we'll do that in the ICA section. And then we'll talk about a more advanced topic called weak supervision. Weak supervision is used industrially to create large training sets for big, deep learning models, which you've just come out of. But these are labels that are lower quality than the traditional supervised labels. And then finally we'll talk about self-supervised learning, which is really exciting and is one of the big revolutions in machine learning, where we train on something simple like predicting the next word. And if we train on enough data, we can use it for all of these other-- sometimes called downstream-- tasks. So that's the program for the next four or five lectures. And today we're going to start with those basics. So we're going to start with the basics of unsupervised and work our way up to pretty modern stuff. And I will say historically-- maybe five, or even 10, years ago-- I kind of didn't like teaching unsupervised because it felt kind of squishy and weird, to be honest with you. It didn't have the beautiful theory of supervised learning, where you could say this is the right model, and we go find it, in a lot of cases. But almost all of the research activity, not only in my group but kind of across AI, over the last three or four years has been in this unsupervised realm. So we'll start with some of the classics and get our way to something really, really exciting. So the classic, and probably an algorithm you already know or can probably guess how it works, is this algorithm called k-means. And then we'll get to these other ones, but this is the one we're going to start with first. So what's the difference? So in the supervised setting, we get points. And the key issue is that the points come with labels. You have labeled these plus and these minus, as we were doing. And we would draw a line or a boundary in between them. That was the way we formalized a wide variety of learning tasks.
And the point is those pluses and minuses, those are labels. And unsupervised starts in this setting: I give you the points, but I don't tell you any labels underneath the covers. Now there are many things on the spectrum between I give you the labels perfectly and I don't show you any information about the labels, I just show you the data points. And we're going to talk about that spectrum, but now we're going to start in the cleanest setting in unsupervised, where we just see those points. Now the thing that you should calibrate here is, what can we expect as an answer? So in the supervised setting, as I said, it was super clear what we wanted. We could talk about what this line is, and we can go deeper and deeper into why that's the right line, why that's the right separating hyperplane. In the unsupervised case, it's necessarily squishy. And we'll talk about the ways in which it's squishy. And what I mean by that is it's harder in some sense. So unsupervised learning is harder than supervised. And what I really mean by that is we have to allow stronger assumptions. Stronger assumptions. So one of the things that kind of passed by you without thinking about it too much in the supervised setting is you're like, there's just some data, and there exists a separator. There exists some class. That's pretty weak for what we had to assume about the data. For unsupervised, we're going to have to assume there is some kind of latent or hidden structure. And a lot of the conversation is, hey, if that structure is there, can I provably recover it? That's the kind of thing that we're going to do. And we're going to trade off assuming more about our data against trying to find it robustly. And now compared to supervised, we're also going to accept weaker guarantees. So weaker guarantees. In supervised learning, and in many cases that we cared about, we would say, we got exactly the right answer. This was the right model. We could say there was some theta star out there that was just the right thing we were looking for. As we'll see-- and I'll show some examples in a second-- in this unsupervised world, it's not super clear what the right answer should be all the time. If I just show you this picture and say, how many clusters are there? Some of you say, well, there's two clusters. Some of you say, well, why aren't there six? There are six points. Why isn't each one in its own cluster? And as we'll see in some examples, it's not really clear what the right answer is. We have to make some assumption or do some modeling to understand and kind of get to the bottom of our data. And so that's what I mean by weaker guarantees and stronger assumptions. And we'll see those mathematically, but just keep them in the back of your head. We're going to be more comfortable saying, what if our data were generated this way? That's going to be a stronger assumption. And then we're going to say, we can recover something that's pretty good some of the time. All right, this is less jarring to you than it was historically, honestly, because historically we would teach the first part of supervised learning without things like neural nets and all the rest. And with neural nets, you kind of got comfortable with the fact that, hey, maybe it's computing something useful. Maybe it's not. We don't know. We just run it for long enough. This used to be a little bit more disturbing. So if you look at the literature, you'll see that in it.
So let's start with our first version of this. All right, so we're going to be given some data. It doesn't look very nice. This is our given data-- given. And we have some points. And this is basically me just introducing notation and copying what we had before, all right? We're also-- here's that stronger structure I mentioned-- going to be given a parameter which says how many clusters we think there are. So the game is going to be-- here I'm going to write down k is equal to 2. And our goal is to find a good clustering. And so our do is, we want to use two-- I'll use colors here. Apologies. Hopefully it'll be obvious. You don't need to know what the colors are. But we'll kind of find two colors like this. Say, in what sense is this kind of intuitively good? This is the same thing we were doing in supervised learning. We're like, this is an intuitively good thing. We talked about different loss functions. And you're kind of an old hat at that, so you can kind of imagine what we're going to do. Like, oh, the loss says in some way points shouldn't be too far away from some object. Now the name k-means is hinting that that object is going to be the mean, or the center, of that cluster. You kind of want to minimize the distance to that cluster center. If that's not obvious, we'll come back to it. So this is what we're given. We're given data. We're given this k parameter. And our goal is to create some clustering that we're going to talk about. So we need a little bit more notation. Now this guy we'll call, say, point x1. We'll call this point, say, x3. And that's just the notation that's there. I'm going to write the notation formally in a second. But before I do that, is it clear what the setup is? You give me some points. You don't tell me any labels. You also tell me how many clusters you're looking for. And then it's my job to find those clusters, and we'll talk about how to make that precise and formal in a second. Awesome. So I'll just write down some notation. And if there are any questions, please pop in there. There is one question. Why are you going [INAUDIBLE] I'll show you that notation in just one second. So yeah, let me write the notation down, then I'll come back to what I'm calling there. Great question. Also on the ed thread-- sorry that I forgot to do it earlier-- someone's asking, what about the unknown k case. We'll come back to how to select k later. Some of our algorithms, as we'll see-- just to look ahead-- will actually have a natural test for k; we'll be able to basically try all the different ks and pick one. Some of them will not, and it will be a modeling decision. Let's come back to that question of how to handle unknown k. By the way, it's not super unrealistic that k is known. You may know by looking at your data that roughly you think there are about three or four products, or various different topics, that people are talking about, and those are the kinds of situations you will need k-means in. But let's come back to it. Great question on the ed thread. All right, so let me give you some proper notation so we can actually talk about this maybe a little bit more mathematically. We won't be super heavy on that today. So what we're given in general is some x1 to xn that live in some vector space. And here we're given some k, which is the number of clusters. Number of clusters. We need one bit of notation to do this.
So how am I going to make these colors into notation? Well, I'm going to introduce this thing. We're going to find an assign-- oops-- find an assignment of points to clusters. Of points to clusters. So how do we do that, to the k clusters? What we're going to have is a map where Ci equals j means point i goes to cluster j. So here i is going to run from 1 to n. And j is going to be from 1 to k. These are the same. This k and this k are the same. So back to this I was drawing. I got a little ahead of myself. I started to draw the notation before giving it. This is just saying we can call this point x1, this point x3. I'm just noting down what the points themselves are. And then our goal is to find this map that I drew by kind of this highlighted color. And that's going to be the Ci character here. So this is-- I'll highlight it in yellow for reasons-- not really sure why I picked yellow. So for example, C3 equals 2 encodes the fact that point 3 is in cluster 2. And I made the green cluster 2. So point 3 is in cluster 2. OK, this will sound kind of opaque at the moment. But this is a hard assignment. We're saying every point belongs to exactly one cluster. And the reason I'm signaling that is at the end of the lecture, we'll talk about a soft assignment, where we have a different version of this. But that's what this is. Anything that's obtuse about the setup? All right, let's see an algorithm. All right, so let me copy this piece. All right, drag it over there. All right, and get rid of these characters because we don't need them. And this is our input. So the question we want to ask is, how do we find the clusters? Find the clusters. Now there are many ways to do this. And if you're a complexity kind of nerd, then you would say, is there a polynomial time algorithm for this? And the answer is no. It actually turns out it's an NP-hard problem. So we're going to have to use a heuristic-based algorithm to do this. And let's see the most natural kind of iterative approach here. So how does it work? OK, so here's the idea. We're going to start by randomly picking cluster centers. And then the way we're thinking about k-means is we're going to try and find our clusters as described by two cluster centers. So intuitively-- let me just jump back-- sorry about this-- jump back. We'll kind of try and find a point, say, here and a point here that are good cluster centers. And they're cluster centers in the sense that all the points in their cluster are closer to that center than they are to any other center. And that's how we're going to construct our Ci. So we'll start by picking these means. The means-- where are we going to pick them? Well, if we knew where to pick them optimally, we'd be done. So we just pick them randomly to start with. So I'll pick the cluster center for 2 up here, and I'll pick the cluster center for 1 down-- oops. I'm going to use a different color for deep-seated reasons. No, for no reason. I just like different colors. Sound good? So I just picked those points. Maybe I want to move this one just to make my life a little bit easier. And delete that, and drag it over. You say, why are you dragging it over? Because it's like a cooking show, and I want to make it just look a little nicer when I draw the pictures. [INAUDIBLE] list of points that we have, and they can be-- No, they're not from the list of points. These are just two randomly initialized points in the plane. I just randomly picked them in RD.
We'll talk about a smarter way to initialize them in a second. But just for the moment, they're just random points. They are not points-- they do not have to be points-- that you are given. We'll talk about how to initialize them later. And you may not exactly want to do that, but you want to do something similar. So let's see how the algorithm runs. So the first thing is, what would the clustering be if these were the centroids, these were the center points? Oh, go ahead. Oh, sorry, what is the name of the algorithm? k-means. Sorry, yeah, I wrote it above. This is called k-means. Great question. Please. How do you-- I guess this is a simple [INAUDIBLE] So for the moment, k is going to be given to us as input. We'll come back, as I talked about, to how to select k in a minute. But let's run through the algorithm once and see how it works. Then we'll talk about what goes wrong and how to extend it. These are wonderful questions, really great questions. Are there others? Awesome. We'll come back to that. And please, both questions, please come back to those, as I said. All right, so the first thing is we have these randomly centered cluster points. So the first step of the algorithm is that we're randomly initializing cluster points. Randomly initialize mu 1, mu 2. Oops, got more than I bargained for. And this one. Paste. So we just randomly initialize where those characters live. Two, we then assign each point to a cluster. And the way we do it is, which center are you closest to? So I think, if my drawing's OK, those are all closer to mu 1. And maybe this one and these couple are closer to mu 2. So is it clear what we've done? This is the cluster assignment, and I can write it mathematically in one second. If you'd like, I'll actually vamp a little bit and say how I did this more precisely. Max over j, 1 to k. Oh, sorry, min. Why did I write max? Argmin. Ci equals argmin over j, from 1 to k, of the norm of mu j minus xi, squared. And we put a square there just because I like squares; with or without the square, the argmin is the same. Not really too problematic. So this says Ci, where I'm mapping the i-th point-- well, I just find the closest mu j to it, my current mu j. That's the way I'm doing the assignment. Every point goes to its closest center. And I'm arbitrarily saying this guy is closer to mu 1 than mu 2. All right, now I have to improve my clustering, because remember, it's iterative. So what do I do? I copy it, so I don't have to overwrite what happens. Well, what do you think I should do? I should probably take the points that I have and compute a new center, my new guess at the center of the clusters. So what I do is I compute new cluster centers. What does that mean? Cluster centers. So I'm going to move. I look at mu here. What's the new cluster center for that? Well, it looks like it should be somewhere here. Right, this is where mu 1 goes. And where should mu 2 go? Well, probably somewhere right here. That's kind of the mean of them. Now you can compute these things precisely, but I'm just drawing them so you get an intuition. And I'll write this notation because there's more notation. So now I have new cluster centers. Now what's the next step? I repeat. So I go back, and I assign every point to whichever of mu 1 or mu 2 it's closer to. So now I have these characters, and I have these characters there. OK, that's now doing step two again. I repeat step three. What's going to happen? They're going to jump to the spot in the middle, in the spot in the middle.
Mu 1's going to be there, mu 2 there. And then there's going to be no more change if I were to continue running the algorithm forever. So let me write that out mathematically while you digest that statement. But that's the entire algorithm. So what does this mean? Mu j is going to equal 1 over the size of S j, times the sum over all the points in S j of xi, where S j is just the set of points i such that Ci equals j. This is just some notation that says, these are all the points that are closest to the center j. And then I average over all of them. I just compute their mean-- hence, k-means. I iterate this thing, and I repeat until no assignments change, till nothing changes, basically. Repeat until nothing changes. Notice that if the labels don't change in step two, then the mean is not going to change, because it's a function of the labels. It's a function of what I guessed was in each cluster. It's always their mean. Any questions about that, nothing changes? Please. [INAUDIBLE] Yes, in [INAUDIBLE] and in many, many different ways. So let's talk about a couple of things that go right, and then we'll talk about what goes wrong. So the first way it could go wrong is, does it terminate? It's not obvious. I'm doing this kind of jumping around. It could oscillate wildly. There's nothing that obviously prevents it, at least at a high level when you're thinking about this, from going like, oh, point 1 switched from red to blue, then it switched back from blue to red. And then the cluster centers are jumping around, and they're in some unstable kind of equilibrium. So that's something that potentially could be bad that happens. So that's the first question. Does it terminate? Does it terminate? All right, and it turns out this is great. Yes, it does. Now why does it? The reason is that this functional underneath the covers-- that is, the total distance between the points and their cluster centers-- is actually monotonically decreasing. It's not increasing. So that oscillation can't happen, basically. And you can basically view the algorithm underneath this, k-means, if you really want to be kind of super abstract, as doing gradient descent on this objective in a particular way. See the notes for the actual convergence argument. So it converges to something. OK? Now in unsupervised learning, "it converges to something" is about as much as we can hope for. In particular, you may also ask, OK, it converged to something. Does it converge to a global minimizer? Does it converge to a global minimizer? And if I did my job setting this up, your intuition should say, no. No, not necessarily. Now you may look at this and say, well, not necessarily-- maybe there's a better algorithm out there, given this setup of the problem. Why did you pick this algorithm? It's a bad algorithm. You should show me a better one. There isn't a better one, because it's actually NP-hard. If you don't know what that means, don't worry. There are no hardness proofs in this class; it does not really matter. It just means we don't know-- humanity does not know-- a better algorithm for this that runs in polynomial time. OK? And we have reason to suspect there isn't one. All right, so this algorithm, as I said, is going to run. But sometimes it can end up doing things that are quite bad. In particular, it gets stuck by getting a cluster over in a region where it should never have been. And if it had searched more carefully, potentially it could have gotten a lower cost solution.
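[To make the two steps concrete, here is a minimal sketch of the loop just described, in NumPy. This is an illustration, not code from the lecture: the function name, the bounding-box random initialization, and the empty-cluster handling are all choices made for the sketch.]

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Minimal k-means sketch. X has shape (n, d); k is the number of clusters."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Randomly initialize the centers inside the bounding box of the data.
    mu = rng.uniform(X.min(axis=0), X.max(axis=0), size=(k, d))
    C = np.full(n, -1)
    for _ in range(max_iters):
        # Assignment step: C_i = argmin_j ||mu_j - x_i||^2.
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        C_new = dists.argmin(axis=1)
        if np.array_equal(C_new, C):
            break  # no assignment changed, so the means won't change either
        C = C_new
        # Update step: mu_j = mean of the points assigned to cluster j.
        for j in range(k):
            if np.any(C == j):  # an empty cluster just keeps its old center
                mu[j] = X[C == j].mean(axis=0)
    return mu, C
```

[The break condition is exactly the "repeat until nothing changes" rule: if no assignment changed, the means can't change either, so the algorithm has converged.]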
All right, clear enough? Clear enough on its properties? Now this is going to be a hallmark of what we do over the next couple of lectures. Almost everything we do, with the exception of one algorithm that we'll talk about, has this property that it doesn't necessarily converge to a globally optimal solution. And as I said, this used to be much more disturbing to people. But now most of AI is in that mode, so people don't seem to care as much. It used to be quite disturbing. When I started, it was quite disturbing. Now no one cares. You folks are old hats at this. Go ahead. [INAUDIBLE] converge to the global minimizer? What happens in that situation? So you can imagine situations-- for example, imagine points that are really far away, and then some other cluster of points here, and that really there are enough things in the middle that the optimum is actually closer to the line. Then what you'll see is that you could end up with some kind of local minimum where you pick a center way, way over here and a center in the middle of this. But there was kind of a better one closer. So that's very hand-wavy. I can post some examples where you can actually run through and convince yourself. All that I really care to get across in this lecture-- and I couldn't think of how to get good examples to look at there-- is that this is a possibility. You shouldn't expect it to get you a global minimizer, and we have some theoretical reason why we don't believe such an algorithm could exist, even in the simple case. Yeah, wonderful. OK, now a couple of side notes. So these questions already came up. So these are really great. So they're not even side notes. I'm just being responsive, but I was going to talk about them no matter what. The question was asked, how did you initialize? How did you pick these points? And from my argument, as I just talked about, even very intuitively and informally, the initialization matters quite a bit, right? If you initialize in way, way crazy ways, then you force k-means to kind of jump back. And maybe there's only one point. Imagine there's a cluster of points, and you throw one of your centers way across the room-- it's going to potentially pick off nothing. And then it's going to have to kind of crawl its way back in some sense. So the point that I care about you getting is, how does initialization matter? And I won't go into too much detail. But I want to tell you that there's one algorithm that was developed by smart Stanford students a while ago-- maybe 12 years ago, I don't know, 12, 15 years ago, maybe even-- called k-means-plus-plus. And I'll just tell you what they did, these great Stanford students. Basically, what they decided to do is they compute kind of a density estimate, and then they place their centers with respect to that density, to kind of spread them out in a nice way, in k-means-plus-plus. And without going too much into the details, what ends up happening is they get an improved approximation ratio. So if you don't care about this kind of theory stuff, don't worry. But they're able to show that if you initialize in this way, even though it's NP-hard to find the exact solution, they can provably find kind of a low-cost approximate solution. So this is actually-- when we're talking about what could go wrong-- this is something that can go wrong. And what people used to do in k-means is just kind of run it many times.
But if you run with this initialization-- it's still a random initialization-- but if you run it a couple of times, you're going to get a pretty good solution. And because you're already kind of looking for an ill-defined kind of objective, that turns out to be about OK. Now weirdly enough-- as I said, I didn't really like this stuff-- I wrote a paper about using some of it inside of making these machine learning models robust, maybe a year or two ago. So it is something that I've actually used and care about. It does work, and you often use it as a way to inspect your data. You look at your data set, and you kind of do your k-means clustering. Then you can use that to figure out what are the different groups that are inside. All right, this thing here became the default in scikit-learn, in sklearn. So these folks wrote this beautiful paper. They proved this nice result. And now if you run scikit-learn and run the k-means algorithm, it defaults to using the-- excuse me-- defaults to using the k-means-plus-plus initialization. OK, so pretty fun. Sergei and folks did that. Anyway, all right, so now there was another question, which was, OK, how do you choose k? Right, this came up both in the ed thread, and it came up in the lecture hall itself. How do you choose k? And here's the problem with choosing k. There's no one right answer. We'll see other unsupervised algorithms which do give a right answer, but let me just illustrate this for you for one second. Suppose I give you this data set. OK, there's two copies of it. One reasonable clustering is this: two clusters. Equally reasonable-- excuse me-- is this one: four clusters. Now if you think about the algorithm as giving you the right answer of what structure is inside your data, then you really don't like this answer. You say, well, what's the right k? Well, that's not the right way to think about it, maybe. The right way to think about it is, given that you have a k, can you use this as a tool to find the clusters that are in there of different sizes? So if you're looking for five clusters, and you really only found-- you look at the five-cluster and the four-cluster map, you have to make some judgment about which one is better underneath the covers. And that's why unsupervised learning is kind of squishy, but it also turns out to be tremendously powerful. So we can automate the loop of finding the clusters. But which one is good or bad, you're going to have to use domain knowledge for that in most situations. So this is really another way of saying this is a modeling question. So the one part-- I don't know why I'm telling you the history of how I feel about lecture 11, but you're getting it-- one of the things that I did like about this lecture always, or this part of the course, is that it forced machine learning folks to move out of the comfort zone of there's a right answer and optimization will find it. Which was very disturbing to me, because that's not at all how machine learning works. What you're doing-- you're constantly in this regime where you're checking if your model works or breaks or does things. I don't know how much Tengyu subjected you to my crazy slides. But that is the way that you live when you're building these kinds of ML systems. You build a model. You have no idea if it worked. You have to check it. You have to look at it. You have to inspect it. You have to measure it. So this is kind of natural, and at least it forces you to do that.
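[Since scikit-learn came up, here is roughly what using it looks like. These calls are the standard scikit-learn API; init="k-means++" is indeed the default, so it is spelled out below only for emphasis, and the data is a random stand-in.]

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(100, 2)  # stand-in data: 100 points in R^2

# init="k-means++" is the default; n_init reruns the (still random)
# initialization several times and keeps the lowest-cost clustering.
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # the learned mu_j
print(km.labels_[:10])      # the hard assignments C_i
print(km.inertia_)          # sum of squared distances to the closest centers
```

[The n_init restarts are the "just run it many times" fix mentioned above, layered on top of the k-means-plus-plus seeding.]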
OK, that's all I wanted to say about k-means. I want to jump to the next algorithm, which is building towards this larger theory of what we did. And if you squint, you'll see that it parallels how we taught the supervised learning case. But any questions before I move on? Awesome. Please. Is this something you might use if you're doing supervised learning but with labels that you're not sure about? Oh, awesome. Yeah, great question. So the way we were actually using it is exactly in that way in this paper. What we were looking for-- just to be really concrete-- we were looking for what are called hidden stratifications. So these are times when a machine learning model-- let's say that it's classifying birds that are on the land versus birds that are in the water. I have more elaborate examples of this. And it turns out that the model sometimes gets confused by the background. And so what you want to do is you want to cluster some notion of those feature descriptors underneath the covers-- like background descriptors that the neural net has learned-- to try and find a group which kind of looks like an oddball from the rest. So it's like, oh, here's all the land birds on land backgrounds. I got them all right. And there's this other weird cluster there that I labeled as land birds but actually are water birds that happen to be walking on land, right? And so that's the kind of thing-- you look into your labels, or your pseudo labels, or your predictions, and try and find those. They can either be labels that are given to you by a human or by a neural net. And that was the use case that I had-- exactly what you thought about. Yeah, so that's one of them. You'll typically also run this when you have-- honestly, when you have data you don't really understand. So you run an experiment with search traffic or ad traffic or something, and you're like, what the heck are people asking about in this segment? And then you run kind of a clustering to figure it out and say, oh, this looks like a topic where people are talking about x, y, or z, right? So those are the other times that you use it. But noisy labels are a good use case, and that will be a theme for next week's lectures. Great question. Please. [INAUDIBLE] Yeah, so the question is, if you pick a larger number of clusters, then-- if we go to this example here, the problem is, how descriptive does it get? So with 4 and 2, you could kind of eyeball the 2 and the 4 and link them up. But if you start to really jack k up to 1,000, then it is kind of at the brink of, what does it mean to find meaningful concepts? So if you're looking for 10 things, and you know there are 10 things, you want to find those 10 best groups. If you start to jack it up to 1,000, 10,000, it no longer becomes human-digestible. So that's where the modeling thing comes in. It's kind of like the bandwidth of what you can look at and digest. And traditionally k-means was not used in an automated loop. It was traditionally used as a way to browse your data. If the consumer of that is some neural net, in the discussion we were talking about, then it kind of doesn't matter. You can jack it up a little bit more. The advantage of doing that-- well, if you put too many centers, they may be too close together. And sometimes what people do is they'll just do kind of a follow-on pass where they merge some clusters that look like they're not informative, a pruning heuristic at the end.
So your intuition is dead on. And that's the side you want to err on. If you err on the side of too few clusters, what happens? You lump two things together that you could have distinguished. And then you could get a weird cluster in the center. So clearly you're right. The overprovisioned setting is much nicer than the underprovisioned setting. Awesome. Sounds like folks got it. Please. [INAUDIBLE] combined clusters [INAUDIBLE] to do the [INAUDIBLE] is there an algorithm for doing that? Yeah, so I wish there was one algorithm. So this is something that people are writing, still writing, research papers on. We had an ICLR paper on this. There's another one coming out from another group, which is great, that we're reading right now. Basically, what people try to do in these things uses that data point. So this is kind of an impoverished setting, right? We have data points that are just in RD. They're in a vector space; that's it. But those data points usually correspond to something real, like there's an image or text. And so the more modern ways that people do this diagnosis-- like what Sabri and folks did, and Maya did, and Domino-- was they said, oh, we took those points, and we embedded the text that's associated with an image, and the image itself. And we start to do kind of search queries on top to find the slices that are underneath the covers. So this is the purest setting, where you're just like, I have data in a vector space. But of course, with modern methods, there's an underlying object. It's a transaction, or an image and a caption. And then you start to use some of the more modern techniques to search through it. And then, of course, you visualize it or monitor it. And there are a number of tools that are out there to do this. I wouldn't say it's solved, like, oh, you just use x and you're done. But there are a number of tools. But yeah, you do have to look at your data. There's nothing that really obviates that. Awesome questions. All right, so let's talk about what appears at first to be a slight generalization. Hopefully it feels like we just relax one little thing, but it gets us closer to a more fancy model that will allow us to do some really fun stuff. And this is actually pretty fun. So this is a toy example from astronomy. And if you know about astronomy, please don't humiliate me too badly. Feel free to correct me. I have no idea what I'm talking about. But I read the paper, and I understood the math, so who cares? This is a toy example, and it's from a paper from the University of Washington. So I can find the paper again if people want. So here's the general setup of what they're doing. They have this light detector, this photon-counting thing. And they're looking in the sky at various different things. And what they want to do-- in this simplified model, there are kind of two celestial objects that could be out there. It could be a regular star or a quasar, OK? And one of them is going to take the light and spread it kind of evenly in all directions. And the other is going to send out relatively concentrated pulses, OK? And what happens is, unfortunately, when those photons come to Earth, they don't come with labels. No one gets to affix a metadata packet to every photon that hits us. We have to see the photons and figure out where the hell they came from, OK? So what happens is we get some data that looks like this, maybe like this. And these are all our little photons that are there, so quasars and stars.
Now, both of these things emit light, but all we get to observe are these photon hits. All right, so now what we're going to do in our setup-- oh, please go ahead. [INAUDIBLE] No, we don't have to assume a linear decision boundary. We just have to assume that it's close in distance. You can get nonlinear regions. There's no notion of a decision boundary in particular anyway. You can draw a little Voronoi diagram, but we didn't talk about that. Yeah, very nice question. Awesome, awesome. So we get these photon hits, OK? So what we want to do is kind of figure out what these different sources look like. And so this is what we're given. What we're going to do is try and find a soft assignment of these things into Gaussians. So remember what Gaussians look like. Their high-probability regions look like ovals. They can be circles. That's what a two-dimensional Gaussian looks like, right? And the variance tells you how stretched it is in each dimension, OK? So maybe there's a cluster here, a cluster here. And this is a tight cluster. All right, so we'll talk about that in more detail in one second. But what we have to do is we want to assign each photon to a light source. But of course, we don't-- actually, in this case-- we don't have perfect information on the light sources either. And so we're going to settle for what we call a soft assignment, OK? So this is the probability that point i goes to cluster-- goes to source-- j, OK? And this is our soft assignment. This is called a soft assignment. Why is it soft? Because we didn't-- remember, in k-means, we said, you must be in this cluster. Here we have some probability distribution over it. So just compare that with k-means. All right, so what are the challenges here? The challenges that we're going to deal with here are, potentially there are many sources. If there were just one source, life would be easy. We would kind of fit the Gaussian. You know how to do that from GDA. You take the mean. You find the variance. You're done. There are many sources here. But for now, we're going to assume we know the number of sources, k. We know k, which is the number of sources. OK, let's solve the problem in that setting. And also, the other problem is the sources have different intensities and shapes. Intensities and shapes. So when I was wildly drawing those ovals, the reason was, like, oh, maybe there's a Gaussian that looks like this. Maybe there's a tight Gaussian. I don't assume I know that ahead of time. I just know that they're somehow well-described by Gaussians. Please. I guess k is the [INAUDIBLE] Oh, no, great question. So those are the two types of things. I need the number of celestial bodies. So one reasonable thing would be three. There's one here, one here, and one here. But k is still going to be picked by a modeling assumption. So this curve is hard to generate with a Gaussian in our current setup. So there's probably more than one on that side, but that's just intuition. There could be four there, five there. I don't know. So let me talk about the assumptions. So we're going to assume it's well-modeled by a Gaussian. And here-- most of what I'm going to do is going to be in one dimension, so I'm just going to introduce the notation in one dimension. And you remember that a Gaussian means N of mu j, sigma j squared. We're in just one dimension to make our lives simpler in a second. But we do not assume we know how many points come from each source, or that there's an equal number of points.
Number of points is equal-- that's the assumption we're not making. Now the reason is, in our setup, the physical reason is that some sources are really concentrated and strong, and they shoot out tons of photons. And in other cases, they'll shoot out diffuse photons over a region. They may be farther or closer away, different energy levels, whatever. Mathematically, what I mean is, when we're going to talk about these Gaussians, the shape may be a Gaussian, but it's only sampled infrequently for source one and sampled 90% of the time for source two. This is formally known as an unknown mixture, OK? Now one thing that's nice about this problem in the physics setting is, once I get the values out-- so what's the do here? I have to get out these cluster centers that we talked about before, and these probabilities, which I haven't given you a notation for. But they're going to be called phi in a second. I can check how physically plausible that clustering is, right? So the reason I like this example is, in k-means, you're going to kind of have to eyeball it. But here, if I have a physics model, I can compute how likely one clustering is versus another, given the data sources that I'm seeing. Maybe I have auxiliary information about what celestial objects are in the sky in that particular region, and so I can check this information, OK? And the physicists in this example could actually check the information. Please. What's the range of j? The range of what? Of j. Oh, j is going to range over the number of sources. So we're going to assume that we have this k number of sources here. So j will range over k, just like it did in k-means. So that's fixed. That's a parameter of the problem. Basically, what someone's going to tell me is, there are k sources. I don't tell you what their means are. I don't tell you what their variances are. So that's for each one of them. And I don't tell you how often they emit light-- that's this mixture. So what percentage of the points go to source A versus source B? Please. [INAUDIBLE] the assumption [INAUDIBLE] So for example, let's imagine that we had a situation. Let me just draw it. So let's say that we had three points here. And then-- I'm not going to draw them all-- like 1,000 points here. So there are three points here and 1,000 points here, right? So the clustering that I would want out is one cluster here and one cluster here. But I don't get to see the 3 and the 1,000. I'm allowing that they're of unequal sizes. So you could, for example, assume that all the clusters are of equal sizes-- and that would be wrong in this case. It's not the way we'd have the model. That's a fair assumption if you knew it, right, and it would make your life a lot easier. But here you have to solve both of these two things. Awesome question. All right, please, yeah. [INAUDIBLE] Yeah, we make that assumption kind of implicitly, because we don't say how often-- there's no probabilistic model. But we do allow cluster 1 to have 5 points and another one to have 500, right? Cluster 1 could have 5 neighbors, and another one could have 500. So it does parallel k-means. So in that sense, it's the same assumption. I'm calling it out here as an extra parameter that we didn't have to think about last time, right? Awesome questions. OK, let's see some 1D mixtures of Gaussians. The mixture of Gaussians is a very fun and famous problem, by the way. People still work on which ones you can solve in theory.
It's actually a fun and interesting problem. We won't do that. All right, so I'm going to be in 1D for simplicity here, because it just makes my notation and drawing easier, because I have to draw distributions. All right, so just to make sure it's super clear what we're doing-- I think folks sound like they have it. So we have 1D. This is going to be the place where our points live. And then imagine there were two Gaussians that were generating data. Now when I say this is generating data-- this is the piece I meant when I said we're going to allow ourselves to talk about models-- we're going to imagine that there's really a model underneath our data that's generating it. This is quite a powerful technique. So how does it work? What we're going to do, or our idea for the model, is we're going to pick cluster 1 with probability-- let's call it P-- or cluster 2 with probability 1 minus P. If we pick cluster 1, we then sample a point. Oops, let me get a thicker thing. Oops, that's probably too thick. And we get a series of points, OK? That's the thought process that we're going through. If we only knew those parameters, we could sample from our model. If we knew the probability to pick cluster 1 versus cluster 2, then we would sample from it. And there are many different parameters, given that model, that could have generated our data. And we're interested in recovering the ones that are most likely, that most likely generated our data. Now what we see, unfortunately, is just this-- just the points. So this is the model over here. This is what we observe. Oops, I've got to switch back. Just the points in R. These are the xis. Now one thing-- and this is going to seem like a trivial piece, but it actually makes our life kind of interesting. What if we knew these points all came from cluster 1, and these points came from cluster 2? This is our first observation. Well, if we knew the cluster labels, what algorithm would we run? If we just ran something like GDA, we would just fit Gaussians to this. We'd be done. So if we knew this, we could just solve this instantly-- solve and fit Gaussians. So we compute mu 1 and mu 2. How do we do that? Well, we would just average up all the things in here, average up all the things in here. Those would be our mus. And we could even compute the sigmas and be done. The challenge is we don't get to see them. We don't get those labels. Challenge: we only see this thing. This is our first example of a latent model. There's something hidden that we don't observe. And we're trying to estimate the parameters of that latent model. And that will be a theme for the next couple of lectures. Please. [INAUDIBLE] Sure, yeah. [INAUDIBLE] should the data be more than 2 for this [INAUDIBLE] Oh, so this is, yeah, so this was an example to motivate-- like, hey, this is a nice picture to look at and kind of get you thinking empirically about what happens. I'm going to walk through most of these things in 1D because it makes my notation a lot easier. And so there's really no relationship between these points and the points I showed you before. Yeah, great question. Awesome. All right, so why am I doing it in this kind of piecemeal way? It's because when I show you the definition, it has a bunch of notation in it. And I want to show you the simplest version, so that then you can generalize it and do all that fun stuff on your own. OK? All right, but is it clear what goes on here?
There's this notion I've introduced and kind of sneaked in on you, which is a model. There's some parameter P here that picks this side or this side, right-- which is going to be called theta, theta 1 and theta 2, hinting at later. I pick something, and then I sample from it. If I knew where the points came from, I could just solve. But the problem is, I have this hidden-- how often is everything sampled? I don't know how many points should be in each cluster. And I don't know which points are in the cluster. I don't know either of those facts. So we have to estimate them somehow from data. That's what we're going to do. OK, so let's see some notation. We can come back to this example if there are more questions about it. OK. So let me be a little bit more precise. So we're given x1 through xn, elements of R, and k, a positive integer, the number of sources. And our do-- what we have to do is we need to find that probability function. Find P such that, for each point i and each cluster j-- this is the data-- we have an estimate of P of zi equals j. This is our notion of soft assignment. OK, so that's what we're responsible for. And although it's a function, it's discrete. So I can write it down. I can just write down the probabilities in a table. We'll later worry about cases where that's not the case, but for now, we can. Make sense? [INAUDIBLE] No, so j is a cluster index, and k is the number of clusters. So this says the ith point belongs to cluster j with some probability. j ranges over 1 to k, so there are k clusters. [INAUDIBLE] No, we never actually were precise about what k is. We had rough intuition that k should be at least 3. And I don't think we actually used j. Or if I did, I misspoke and scribbled it incorrectly, and I apologize. It probably was a wayward i. I don't see it in my notes, but apologies for that. Sorry for the confusion. Please. [INAUDIBLE] of z of i means that the sample i belongs to cluster z? Belongs or comes from, yeah, so I'll copy from here. It's this sentence. Probability that point i belongs to-- I think it should be "comes from." I think that's better. Point i comes from source j-- that makes more sense in English. Yeah, source j is always the particular source. So i is always the data. j is always the label for the cluster. And k is always the number of clusters. Please. [INAUDIBLE] That and x. Oh, you mean from the-- Is that an x on-- [INAUDIBLE] No, so, yeah, zi is the assignment. So zi is the soft assignment itself. xi is the data point. Yeah, sorry for overloading the notation. So the zis will always be the hidden piece-- so let me write the next sentence, the next piece, and then we'll come right back to your question. I can see why you'd be confused by that. So let me write the GMM model, and you'll see why these two things are being used, according to the GMM model. So zi is the random variable that this probability is about-- which cluster the point belongs to. It's not the actual point itself. So then, for example, we can talk about the probability of xi and zi. This is the joint probability of the point and which cluster it belongs to. And we'll write down the model in one second. So let me write down the model, and then it'll be clear what this notation means. OK. All right, this is just Bayes' rule, nothing fancy here. I said nothing. There's no content, OK? But I'll use it.
Now zi is going to be distributed as a multinomial according to some parameter that we'll call phi. And phi as a multinomial says that the sum of phi j, for j going from 1 to k, equals 1, and phi j is greater than or equal to 0, OK? So phi j, if you like, is that sampling probability. It's the likelihood that this particular point came from that place. And zi is going to be picked with this background probability that we were calling P before. So in our earlier example, P and 1 minus P were the chances that I sampled from cluster 1 or cluster 2 up above. Those are phi 1 and phi 2-- that's what I was hinting at. OK, and remember, in our model, I picked zi. That told me where I was sampling. And then once I knew zi: xi, given that zi equals j-- that means given that it came from cluster j-- is going to be distributed like a Gaussian, N of mu j, sigma j squared. OK? So these are all Gaussians. OK? Now let me highlight for you the parameters we have to estimate. Highlighted parameters. OK, for this thing. OK, all right, these are the same color. So what's going on here? This is formalizing the model that we talked about earlier. It's our first kind of hidden model. We never see zi. We don't observe it in data. We don't get to see the labels. The deity, or whoever is generating this data, generates a zi and picks one of the clusters. That's its sample for this point. And then it samples from this normal distribution to generate xi. And if you like, we think about the data as being the remnants of this process. We don't get to see this process run. It gets run ahead of time, and then the data is dropped there. And the reason we're thinking about this model is, we say, OK, assuming that the model were like this, could we recover the parameters? And so that's the sense in which we're assuming there's some structure. And it's pretty reasonable. What it's saying is, look, I don't know how many points come from every cluster-- that's because I have a multinomial. And two, I know that the clusters that I have are Gaussian-shaped. But I don't know if they're circles or ovals, and I don't know where their centers are. And I'm assuming nothing else about my structure. Can you find it for me? That's your job. Does that make sense? We have to come up with these parameters. We have to find, from observing the data, the cluster centers and the probabilities with which each is sampled. Clearly, if I pick one clustering, the data is going to be very unlikely to have been generated with those centers. If I go back here, and I picked a center over here and a center over here, that's less likely than if I put the center here and the center here. It would just explain the data less well. We'll be able to formalize that with maximum likelihood, but I hope that intuition is clear. If it's not, please go ahead, ask me a question. I'm super happy to answer. [INAUDIBLE] Ah, so phi is basically just a bunch of numbers that sum to 1. It's a multinomial. So this is a probability-- so let's say, going back here, this is cluster 1, cluster 2, cluster 3-- or source 1, 2, 3. And lots of points are in source 1. So then maybe phi 1 is large, because like 70% of the data is there. And 10% of the data is here. And 20% of the data is here. Then the numbers would be 0.7, 0.1, 0.2, roughly. Does that make sense? Yes. Awesome. All right, so let's do one example. I just want to-- I'm trying-- let me see.
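[Collecting the pieces of the model in one place-- this is just the board notation restated, plus the Bayes' rule step mentioned above:]

```latex
% The GMM generative model:
z_i \sim \mathrm{Multinomial}(\phi), \qquad \phi_j \ge 0, \qquad \sum_{j=1}^{k} \phi_j = 1
x_i \mid z_i = j \;\sim\; \mathcal{N}(\mu_j, \sigma_j^2)
% So the marginal density of a point is a weighted sum of Gaussians,
p(x_i) = \sum_{j=1}^{k} \phi_j \, \mathcal{N}(x_i;\, \mu_j, \sigma_j^2)
% and the soft assignment is just Bayes' rule:
P(z_i = j \mid x_i) = \frac{\phi_j \, \mathcal{N}(x_i;\, \mu_j, \sigma_j^2)}
                           {\sum_{l=1}^{k} \phi_l \, \mathcal{N}(x_i;\, \mu_l, \sigma_l^2)}
```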
We call the zi latent because we don't observe it. We didn't get to see it. We just got to see the points. We didn't get to see which was assigned to which cluster, because it's not directly observable. This concept, which seems at this point kind of strange, I would think about a bunch. And we'll see it again and again. This is what we mean by structure. We're like, well, there's some wild collection of points, but there exists a small number of clusters that are generating them. This thing is the mathematical embodiment of that intuition. That's all it is. And we say, if that's the case, then we should be able to recover it. We should be able to recover those clusters in those situations. And in the physical situation of light sources, that seems pretty plausible, right? They have different intensities, different shapes, and so on. All right, so let me give you one more example, just to make sure that all these terms, all this notation, is there. And I want you to think in sampling. And let me just walk through-- so hopefully these things make sense. So phi 1 is going to be 0.7. Phi 2 is going to be 0.3. Why those numbers? Because I'm making them up, that's why. I don't have another reason. Mu 1 is going to be 1. Mu 2 is going to be 2. And I'm going to set sigma to 1/3. So what does that picture look like? Here we go. There's 1. There's 2. There's one thing here, if I draw it pretty well. Maybe my drawing is off, but hopefully you get the point. This thing should be about 1/3. This distance should be about 1/3. The fact that they're uneven is because I'm a bad artist, not because that's the intention of the thing. This is mu 1. This is mu 2, OK? And this distance here should be about 1/3-- the 68% thing. It's the standard deviation, OK? How do I sample from it? First I pick a cluster: I pick the first cluster with probability 0.7, the second with 0.3. Then I pick the relevant mean. So if I picked 2, I will use mu 2. I go over here. Then I will sample from a Gaussian and generate myself a point. It'll be here. Two, use the appropriate Gaussian. OK? So imagine that this were the process that was generating your data. You just get to see an instance of the data. Your goal is to say what's the most likely process that generated it. OK, and that corresponds to this intuition: there exist some clusters. So far, so good? Oh, and then you repeat. Repeat. Yeah, please go ahead and ask questions. [INAUDIBLE] No, yeah, so that would be if I drew their probability distributions entirely. These are still normalized Gaussians. This is a great question. I drew it as two Gaussians. I didn't draw the phi 1 and phi 2. I incorporated them just in this first step. So I would pick phi 1-- I would pick one with probability 0.7. Pick one: 0.7. And you could imagine reweighting them and normalizing them. But then it gets a little confusing visually, and I'm not that good of an artist. Please. [INAUDIBLE] Because we don't know how many points come from every source. So remember, going back to this one up here-- oops, I went way too far back. Here we had lots of points from this one source. And at this point, there are 70% coming from here. And so we're just not assuming that they're the same. If we forced them all to be equally-sized clusters, we would probably put two cluster centers here if this were 70% of the points. And we would use that to explain our data. And that would be, in this case, suboptimal.
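[Here is that sampling process as a short sketch, using the made-up numbers from the board-- phi = (0.7, 0.3), mu = (1, 2), and sigma = 1/3 for both components. The function name and the sample size are illustrative choices, and the clusters are 0-indexed in code.]

```python
import numpy as np

rng = np.random.default_rng(0)

phi = np.array([0.7, 0.3])    # mixture weights: P(z = 0) = 0.7, P(z = 1) = 0.3
mu = np.array([1.0, 2.0])     # the two cluster means
sigma = np.array([1/3, 1/3])  # standard deviations (the "about 1/3" distance)

def sample_gmm(n):
    """Sample n points from the 1D mixture: pick a cluster, then its Gaussian."""
    z = rng.choice(len(phi), size=n, p=phi)  # step 1: pick a cluster
    x = rng.normal(mu[z], sigma[z])          # step 2: sample from that Gaussian
    return x, z

x, z = sample_gmm(1000)
# In the real problem we'd only get handed x; z is the latent variable
# we never observe.
print(x[:5], z[:5])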
Yeah, please go ahead and ask questions. [INAUDIBLE] No, yeah — that would be if I drew their probability distributions entirely. These are still the individual, unweighted Gaussians. This is a great question. I drew it as two Gaussians. I didn't draw in the phi 1 and phi 2; I incorporated them just in that first step — I pick cluster one with probability 0.7. You could imagine reweighting them and normalizing them, but then it gets a little confusing visually, and I'm not that good of an artist. Please. [INAUDIBLE] Because we don't know how many points come from every source. So remember, going back to this one up here — oops, I went way too far back — here we had lots of points from this one source, 70% coming from here. So we're just not assuming that they're all the same. If we forced them all to have equally-sized clusters, we would probably put two cluster centers here if this were 70% of the points, and we would use that to explain our data. And that would be, in this case, suboptimal. That's just our modeling choice. We know that the clusters have different numbers of points in our application. So that's why we fit them. We can't assume they're the same. If we knew them perfectly, we could sneak them in there — and we'll talk about where we sneak them in later. So I guess, following up on that, the phi represents the cluster sizes, and mu 1 and mu 2 are-- and then we get this standard deviation [INAUDIBLE] The shape. That's the exact distribution of [INAUDIBLE] The shape, really. The shape. Yeah. Awesome. Sure. Oh, so basically we have a problem. Mu 1 is greater [INAUDIBLE] Because I'm a bad artist. Yeah. That's kind of good, man. I mean-- Yeah, but should [INAUDIBLE]. Sorry about that. No, that's fine. [LAUGHTER] I was messing around. Sorry about that. But should the second peak be actually shorter than the first peak because-- Yeah, that's a great question. So you could imagine folding the phis in, to make the whole thing a single probability density function — then the peaks would have different heights. I've drawn them crudely with the same height, putting the probabilities up front, to say: these are the Gaussians, and then there are the weights. But you've got it perfectly. Yeah, that's another way to visualize it; it's just beyond my artistic abilities. So yeah, wonderful question. No, no, [LAUGHS] that's fine. I do not have a lot of my ego tied up in that. Yeah. I'm just kind of confused on the process. So at this point, this is after we modeled our Gaussians. So now we have the Gaussians, and we're evaluating? Yeah, awesome. So what's going on here — the trick we're going to do — is this. Every time you set one of these values, you give me a distribution, right? That gives me a distribution. So now I have an infinite number of these models out there, with different settings of phi, different cluster centers, different samplings. Now imagine I grabbed one of them and ran the sampling process, and it generates some data. Great, so we understand that piece. Now, what happens in our problem is we're going to invert that process. We see some data — just the xis themselves. And now we want to select, among all of those infinite models out there, parameterized by these phis and mus and sigmas, which one most likely generated our data. And intuitively we know, as I said: if the cluster centers are super far apart and our data is all in the middle, that one's probably less likely than one that has the cluster centers closer. We're going to be precise in a second about how we fit that, but you can kind of see where it's going to come from: this model. So in the same way we used maximum likelihood before — we'll get there; we're going to build a little bit more intuition first — we're going to say that the parameters that generated our data are the most likely ones after we've seen the data. So we'll condition on the data and try to invert the process to find them. So this thinking about the latent forward process is super weird, right? And so yeah, you're exactly right. Does that make sense, though? We think about the process, and then we're like, oh, if we inverted it, that's kind of what it must have been. Yeah, go ahead. [INAUDIBLE] Yeah, please.
So we kind of almost guess parameters — we try various different sets of parameters, like sigma 1 [INAUDIBLE] — and see which fits most closely. Exactly right. The thing is, we want to be-- exactly right. We could just guess all the parameters, or try all of them. The problem is there are infinitely many. So the question is, can we solve for them and find them faster than that? In theory — well, "in theory" is a weird statement, because there are uncountably many of them — we could conceptually try all of the parameters and see which one was closest in probability, which had the highest likelihood score. And that's going to be our gold standard of what we want to pull out. You got it perfectly. Please. But there's only one value for z. [INAUDIBLE] I'm sorry, I didn't catch the last piece. z only has one value. Right. Always 100%. [INAUDIBLE] Yeah, in that case, if there's only one value, then you know everything came from one source, and you're just fitting a single Gaussian. And the most likely estimate for a Gaussian is, as you saw from GDA and everything else: average your data to compute the mean, and then compute the variance from that. And then you have it perfectly. Wonderful. Awesome, we've got this. All right, so let's see the algorithm that does this, because it mirrors our friend, k-means. We're also studying this, by the way, because this same pattern will repeat itself in our next lecture. It's a famous algorithm, so it's fun to know about, and it's important pedagogically. I don't know if you actually have to do anything with it in the class. And it mirrors k-means. So, like, why am I teaching you these two things? They seem to differ in small ways. And k-means seems a lot more intuitive than this whole infinite number of models that we're selecting among. But we want to get to that world view, so we want to relate the two, OK? So one, there's an E-step. This EM algorithm is very famous, and we'll come back to that in a second. Here we guess — just as you put it perfectly — the latent values, which in this setting are the values of the zis. We guess their distributions by hook or by crook. We just figure it out. That's our first piece. We'll see how we do that in a more intuitive way. And then in the M-step, we update the other parameters. So if we knew the distributions — if, as in my observation one, we knew where every point came from — then we could just run GDA on it. We could just find which points are in each set and fit all of the Gaussians. There's a slight twist: we're not going to know precisely where every point came from; we're going to know a distribution over that. So we have to do something a little bit more complicated, but not too much more complicated. OK, this is our first example of a very famous algorithm — the first example of EM. EM is a very, very famous algorithm that people have used for decades. I don't want to oversell it. I have a colleague and friend who says, you only run it when you don't know what you're doing. And he's right in the sense — he's kind of a curmudgeon; he's a good dude — but he's right in the sense that if you knew exactly what you were looking for, you wouldn't run this algorithm. But that's precisely where it's interesting: you run these kinds of EM algorithms when you know something about the setup but not everything. All right, so let's see mathematically what this looks like. And in your head, you should be thinking: what did I do in k-means?
The E-step. Here we're given the data and current values — guesses — for all the parameters: phi, mu 1 through mu k, sigma 1 through sigma k, all the stuff. And what we have to do is predict zi, for i equals 1 to n. Now I'm going to introduce some notation here, following our notes: w_ij = P(zi = j | xi; phi, mu, sigma). So what's going on? This is our goal. What do we want? We want to compute these weights — I'm giving them a new notation because the z's will be changing as we run, and all the rest. We're given the data point and we condition on it, so we know the point we're looking at. We also have our current guess of all the rest of the parameters: the frequencies with which we're sampling from each source, where the center of every cluster source is, and its variance. And now what we want to do is compute how likely a particular point is to belong to a cluster. Intuitively, if it's really close to the cluster center, the probability should be high. If it's really far away, it should be close to zero — but it's continuous, and it's not going to be 0 itself, because the Gaussian doesn't go to 0 anywhere. So how do we do this? It's nothing more than Bayes' rule:

w_ij = P(zi = j | xi) = P(xi | zi = j) P(zi = j) / sum over l from 1 to k of P(xi | zi = l) P(zi = l).

All I've done here is something really simple. I've taken xi, which I've conditioned on, and divided the joint up: the likelihood of the data being generated from a given source, times the probability of that source — and then the same thing summed over all the sources in the denominator. I have to use l in the sum instead of j, because I don't want to confuse it with the j we're asking about. And everything here, by the way, is evaluated at our current parameter estimates — our guess. That's everywhere. So we know all of these functions. What is this probability P(xi | zi = j)? Well, if we know the point came from cluster j, it's just nothing more than our friend, the Gaussian. We know that from the model, right? So I can be really explicit about it: it's exp(-(xi - mu j)^2 / (2 sigma j^2)), times some normalizing constant. So we know what this is. And what's P(zi = j)? Those are our phis: this one is phi j, and in the denominator it's phi l. So I've decomposed the problem into estimating all of these quantities, which I already mechanically know how to compute. The key point is: we can compute all the terms. Now, this isn't super surprising. Maybe you'll look at this and think, wow, that's just a bunch of weird notation — what the heck is he talking about? Look, this is just k-means, in some kind of generalized sense. It's saying I need to compute a probability distribution — OK, k-means doesn't need to do that. But how does it do it? It says: I consider the probability that this xi was generated from this j. I know how likely it is that the point came from it.
I know, given the source, what the probability of this data point is. And then what I'm going to do is compare it to the likelihood from every other cluster. And that ratio is the probability. Right? That's all it's encoding. Instead of the hard assignment — in k-means, this was just a hard max: look at all the clusters, and pick the closest one — now I'm averaging over them, and I'm averaging with Bayes' rule. And it looks like a bunch of notation, but it's not super scary. It's just things you know how to compute. Go ahead. If the point is very clearly [INAUDIBLE] and has no [INAUDIBLE] with other cluster functions, then would the probability [INAUDIBLE] They wouldn't be zero, but they'd be very close to zero. So here's what happens in that scenario. Say xi is really close to this particular j. Then this density term is going to be some high number — it's not going to be 1, because the height of the density isn't exactly 1, but say 0.5 or something, some big number. Then over here there's the other term, phi j. If phi j were equal across all the clusters — we'll come back to what happens when it's not in a second — we get 0.5 times 1/5, say there are five clusters. And then we compare that likelihood to the rest of the denominator. And because you said the other clusters are really, really far away, what happens? You get exactly that one term, plus a bunch of terms which are super close to zero. And so as a result, this weight will be very, very close to 1. Yeah, does that make sense? Now, what happens in that same inference if phi j were very, very close to 0? So now there's a trade-off. The cluster center is close to the point, but I think that cluster is extremely unlikely: my phi j is tiny, and I've only seen 1,000 points. Then I still don't consider it very likely that this source actually generated that point. And that's all Bayes' rule does in general: trade off those two things. Does that make sense? Yeah, awesome. OK. All right, now back to the-- oh, please. I just have a question, because I'm a bit confused by the fact that phi j and phi l are actual probabilities, whereas the probability of xi is a probability density. Yeah — so yes, you can do this, because you can use the likelihood ratios in this way. And this is the correct thing to do from Bayes' rule, but it's a little bit sticky, because this is just a PDF. The fact that it gets normalized in the ratio is what allows this to go through. But yeah, you're exactly right. It's a little squishy, but it works. Wonderful question. Awesome. So I hope what you got from this — and I also wanted to come back to it — is that that whole weird rant about how the data are generated is now, mechanically, the way the math works. That's why I keep ranting about it. You pick a source; you generate the data. That's like assigning the likelihood score. When you wrote the generative model, you told me how likely it was that, given you were in a particular cluster, you would generate this point — that's the normal distribution. And then you compare and bake them off against each other. And that gave you a probability distribution. That's it. Awesome.
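To make the E-step we just walked through concrete, here is a hedged sketch in the 1-D setting — it's literally Bayes' rule with Gaussian densities; names like `w` and `e_step` are mine, not from the lecture:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x.
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def e_step(x, phi, mu, sigma):
    """Return w where w[i, j] = P(z_i = j | x_i; phi, mu, sigma)."""
    # Bayes' rule numerator: P(x_i | z_i = j) * phi_j, for every point i and cluster j.
    w = gaussian_pdf(x[:, None], mu[None, :], sigma[None, :]) * phi[None, :]
    # Denominator: sum over all clusters l, so each row becomes a distribution.
    return w / w.sum(axis=1, keepdims=True)
```

Each row of `w` is exactly the soft version of k-means' hard assignment: a distribution over sources rather than a single closest center.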
Now there's the other stuff — the M-step — which is much less interesting. Here we're given all those w_ij's, which are our current estimates of P(zi = j | xi), for i from 1 to n and j from 1 to k. And what we have to do is estimate the observed parameters, the non-latent parameters. Now, this is conceptually interesting because we split our problem into the latent things, which are not observed — the zis, the probabilities that a point is in a particular class — and the things we can estimate directly: the frequencies, how many points are in each class, the cluster sizes, and all the rest. You can measure those things. And we'll do that using MLE, because that's the tool we use. By the way, we use MLE a lot in this course. It's very powerful, but it's not everything; I'll come back to that later. But it's a principle. So for example, what is phi j, the estimated frequency? Well, we sum over all the points: phi j = (1/n) times the sum, for i from 1 to n, of w_ij. That's our guess of the fraction of elements from cluster j. We can make this a little bit more rigorous, and we'll do it next time. The point is, you just do MLE. I don't want to go through these calculations because they're kind of boring, but you should go through them at least once. And we'll make that rigorous when we do the MLE derivation — I'll do it in class, so don't worry if it doesn't stick now. We'll do it on Monday in a somewhat more generalized setting. But what I hope you get here is: if I know the w_ij's, I can compute the estimate of phi j by averaging over the probabilities. We'll do all the math to expand that out, but you already know how to do this. This is just the MLE stuff that you've been doing for the last k weeks. So far so good? Please. So you're going to do this, I believe, in next class? Yeah — well, this is already the right answer; on Monday we're going to derive it in more generality, so you can solve a larger class of models. Oh, OK. So it's the fraction of what, from clusters? Oh, fraction of points. Sorry, that's really terrible — I said elements, and I don't know why. These are the points. And this should be source: the fraction of points from source j. I was trying to keep "source" and "cluster" independent in my head, and I did not use them carefully in this lecture. Apologies. Fraction of points from source j. Is the rough intuition of why this is k-means-like clear? In k-means you pick a set of means, which are like your sufficient statistics for describing the problem, and then you re-average over the assignments. What's going on here is that, instead of just picking centers, you're picking distributions. Then you reweight: if these really were the background probabilities of all these linkages, then these are what your most likely cluster sizes would be — how many points you would have in each. And you just cycle that again and again until the guesses stop changing, the same way you do in k-means. Awesome. All right, so-- What other EM models are we going to study? A bunch. [LAUGHS] Yeah, a bunch. Basically, an EM model can be applied whenever you have a latent z like this, and you want to do some kind of decomposition with it. [INAUDIBLE] Yeah, it's going to have this two-step pattern where you have a latent variable. And there's no requirement that any of this be supervised — in some sense, the latent variable is your guess at supervision. We'll see ways to inform it.
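Before the examples — here is a matching sketch of the M-step just written down. The phi update is exactly the fraction above; the mu and sigma updates are the standard weighted-MLE formulas that the lecture says will be derived next time, so treat those two lines as a preview rather than something we've established (again, all names are mine):

```python
import numpy as np

def m_step(x, w):
    """Update (phi, mu, sigma) from the data x and the E-step weights w."""
    n_j = w.sum(axis=0)                        # effective number of points per source
    phi = n_j / len(x)                         # fraction of points from source j
    mu = (w * x[:, None]).sum(axis=0) / n_j    # weighted mean per source (preview)
    var = (w * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_j
    return phi, mu, np.sqrt(var)               # weighted std per source (preview)

# The full algorithm is just the k-means-style back and forth, reusing e_step from
# the earlier sketch, cycled until the parameters stop moving:
#     for _ in range(100):
#         w = e_step(x, phi, mu, sigma)
#         phi, mu, sigma = m_step(x, w)
```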
So for example, we'll see an algorithm where the first way it was derived actually did use EM, but there's a more clever way to solve it provably. EM is the general form of: there's a latent variable that you don't see, and you have a model for it. You estimate that piece, and then you solve a traditional supervised machine learning or estimation problem under the covers. And that's very, very general. Your latent variable here is cluster membership, but we'll see it could be distributions and all kinds of fancy stuff later. Please. So in the M-step, we see that we estimate [INAUDIBLE] how do we use the MLE? So I think [INAUDIBLE] already assume that we have an initial grasp of [INAUDIBLE] Yep — oh, great question. I see where the confusion comes in. No — here we're almost memoryless. We had those phi and sigma and all those other values, and now, just like in k-means, we throw away some information and try to reconstruct it. Here, given our linkage probabilities — the w_ij's — we try to compute all of the observed parameters: the mus and the sigmas and the phis and so on, the rest of the observed parameters. So it's like: given the linkage function, we do that again. And then what we would expect is that those parameters stop moving around so much over time, and that's when it has converged, similar to the way k-means worked. Great, great question. Awesome. Please. [INAUDIBLE] have better guesses for [INAUDIBLE] Awesome. Yeah. There is not something that has such a crisp solution, to my knowledge — I don't actually know one. But that's a really good question: how do you initialize this, and in what situations can you initialize it better? It seems natural enough to think about one, but I don't know a proof for one. It's a great question. You can post it on Ed, and I can dig around and see if anyone did that. My suspicion is yes, because k-means++ was such an important thing. It's a wonderful idea. Other questions? All right, OK, so looking at this, trying to think what I should tell you. We have about nine minutes left. Let me see. What I want to do is tell you the steps that we're going to go through next time, because I think over the weekend this would be too much. Traditionally I teach this Monday-Wednesday, but I'm going to want to redo this part, I think. So right now, what I want to do is take a detour into one thing that we need, which is convexity and Jensen's inequality. I'm just going to draw the basics here, and then I'll give you a sense of what the EM algorithm looks like. The reason we need this is that it's a key result, and it confuses people every year. So I spend more time on it, and hopefully it confuses them less. Sometimes people say it's trivial — and then I'm very, very happy. If you think it's trivial, then I did my job. OK? All right, so here's why we need it: we're going to have to have a mathematical abstraction of this going back and forth, this guessing back and forth. And that's going to be basically two different functions. One function is going to be the actual likelihood function — the loss function for the totally crazy probability distribution that has all the infinite models and everything jointly together.
And what we're going to do is have an approximation of it that says: given a particular guess of the zis, we get a lower bound. And we're going to have to move between the actual function and the lower-bounding function, and that movement is facilitated by something called Jensen's inequality. It's worth understanding, because it's something you can use. So I'll just get started a little bit on it, draw some pictures, and then we'll pick it up next time. OK: take a set omega, and two points a, b in omega. And I would certainly rather answer questions than rush — we can cut into later lectures. So let me draw this. A set is convex if the line joining any two points in it stays inside it. If I take any two points a and b in here and draw the straight line between them — we're in Euclidean space, so it's a straight line — then if the set is convex, no matter which points I pick, the line between them remains in the set. This is a convex object, like an ellipse or a circle or something like that. In contrast, here, if I pick points a and b like this, the set is not convex. And we're going to care about these quite a bit. So what does this mean in symbols? In symbols, we have to check: for all a, b in omega, our set, and for all lambda in [0, 1], lambda a + (1 - lambda) b — the line between them — is an element of omega. That's all this picture is saying. This picture and this math are the same. And you need to check it for all pairs: right here, clearly, if I put a here and b here, there's a line between them; the reason this set is not convex is that there exists one pair a, b for which the line goes out of the set. Now, we're going to use this when the object underneath is a function. We think about the region above the graph of the function, going up to infinity — the epigraph — being convex. And what this tells us is that if we look at chords — lines between points on the graph like this — the function always sits below its chords. That might be a little bit opaque, but if you think about a function that looks like x squared, the region above it is convex. That's the canonical convex function. All right, so what we're going to see is we're going to go from these definitions of convexity on sets to convexity on functions. And that's going to allow us to prove the following statement, which we will prove next time, and which looks mysterious but is not: E[f(x)] >= f(E[x]). I just want to give a little bit of a roadmap: we're going to use this definition of convexity to prove a theorem. This theorem is called Jensen's inequality, and it's effectively immediate from the definition once you understand the definition for functions. If f is convex, this is true. And a function is convex if its epigraph — the region above its graph — is a convex set. We'll draw that out. Once we know that — and we're going to need it for both convex and concave functions — it's going to be a key building block for what we do for the next couple of lectures. We've talked about convexity once or twice in the supervised setting, but now we're going to need it in a little bit more detail.
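As a quick numeric sanity check of that statement — purely illustrative, using the canonical convex function f(x) = x squared:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)  # any distribution works here

f = lambda t: t ** 2  # the canonical convex function

# Jensen: for convex f, E[f(x)] >= f(E[x]).
print(np.mean(f(x)))   # ~5.0, since E[x^2] = mu^2 + sigma^2 = 1 + 4
print(f(np.mean(x)))   # ~1.0, since f(E[x]) = mu^2
```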
All right, so let me wrap up and tell you what we did today, so that you remember what I want you to take away. We started with the difference between supervised and unsupervised learning. We then went through k-means, a really simple, heuristic algorithm for a hard problem, that would find these clusters. We talked about the idea that you need to be able to model the problem to solve it — you had to input k, or something like it — and you had to be able to check, or at least visually inspect, the answer. We then talked about a generalization of k-means called mixture of Gaussians. And the only generalization was that, rather than belonging to a single cluster deterministically, you try to find a probability distribution over everything. That led to a bunch of notation, but the notation still basically had this two-step procedure under the covers: guess the centers, and then check how likely they are. The "how likely they are" in k-means was the distance; in GMM, it's this more complicated probabilistic model. We like that more complicated probabilistic model because it's going to allow us to model even more sophisticated notions of structure. And then I think I'm just going to go through the rest of it on Wednesday, so that you have all the mathematical details there together, and we'll do a review of that. Thanks so much for your time and attention. Have a great day. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_I_Basic_concepts_in_RL_Value_iteration_Policy_iteration_I_2022_I_Lecture_17.txt | So I guess let's get started. Starting today we're going to talk, in just two lectures, about the topic of reinforcement learning. Reinforcement learning is a pretty important subarea of machine learning, but it does have a slightly different flavor. We're not going to spend a lot of time on it — just because this course has covered a lot of topics, we are only going to touch on some very basic concepts of reinforcement learning, in this lecture and in the lectures after. The next lecture, we're going to have a guest lecture on the broader impact of machine learning — robustness, societal impact, and so forth. And then we're going to have the last lecture, which is also going to be on reinforcement learning. So this lecture is here mostly because I think you need it for solving a homework question in your Homework 4; it will give you the basic concepts so that you can solve that question. OK. So reinforcement learning — at least on the surface, it sounds very different from some of the other machine learning problems, because there are a bunch of different things going on. Maybe, to give you a rough sense of what kind of questions we are trying to solve today, a running example you can keep in mind is that you are trying to let a robot learn how to navigate a certain part of some space. You want to control the robot to do something: maybe you want the robot to pick up some object, or maybe you want the robot to go to some place, and so forth. So we are trying to solve these control tasks. There are a bunch of differences between RL and supervised or unsupervised learning. Here are some of them — I'm not trying to be very comprehensive, and there are intersections: there are certain subareas of RL which are more similar to supervised learning. Here, I'm only going to talk about some high-level differences. So first of all, RL is about sequential decision-making. There are two things you need to pay attention to here. The first is that this is about decision-making. Before, when we talked about supervised learning, we were talking about prediction, right? You are predicting the house price; you are predicting whether somebody has cancer or not; you are predicting some y's, some target. But here, you are not only predicting what will happen in the world — you also have to decide what decisions to make. Sometimes these decisions are made after you make the predictions: after you know that this person might have cancer, maybe you need to give this person some treatment. So that's the different part, where you have to make decisions based on a prediction. Sometimes you don't predict at all — you just make decisions directly. That's also possible. But in the end, you are making decisions. And second, this is sequential decision-making. Your decisions have a long-term impact. Think about controlling a robot: if you make the robot go forward at this step, then at the next step the configuration of the robot will have moved.
So the decision at this step does affect your decision at the next step. Or think about treating a patient: if you give the patient some pill today, then maybe the patient becomes better, and the next day you have to change your strategy — at the least, the decision you made yesterday affects the decision you make today, which in turn affects the decision you make tomorrow. So those are the two important things about RL, and you will see that this is why it's challenging: you have to consider the long-term consequences of a decision. It's not like you can just greedily choose a decision based on the current situation, choosing whatever makes the next day the best. Maybe you can make a decision that makes the next day very good, but then the day after tomorrow you get into some weird situation. Think about your life decisions: sometimes you shoot for a short-term reward, you do something, but then you miss out on, for example, long-term investments in some other opportunities. So a greedy approach sometimes doesn't work very well, and in many cases a purely greedy approach doesn't work at all. And there's another thing about decision-making: decisions can give you multiple benefits. One is that you are required to make a decision, because that's the problem formulation — when you control the robot, you have to decide how to control it, whether it turns left or right. But another thing about a decision is that, when you decide what to do today, you also collect information from the environment about what the environment looks like. Decisions also affect you in terms of information. In some sense, part of a decision's job is to query the environment, so you collect more information. Take the treating-a-patient example again: your decision could be to take a certain measurement of the patient. That decision doesn't really treat the patient, but it does collect information for you, so that you can treat the patient better later. So decisions also give you additional information, and sometimes you have to trade off the different effects of decisions: sometimes a decision directly gives you some reward, and sometimes it doesn't give you a reward but gives you information that you can use to get reward in the future. I used the word reward, which I haven't defined yet. So that's another actual difference from supervised learning: here, there's no supervision — or no or little supervision. What does that mean? For example, if you think about controlling the robot, one kind of supervision would be a human telling you how to control the robot — some expert who knows how to work with this robot. But the questions we are solving here are those where you sometimes don't even know the right way to control the robot. For example, suppose you want to fly a helicopter. If you fly a real helicopter, there are training lessons: you have to be trained to be somebody who can fly the helicopter.
But suppose you are developing a new helicopter that can fly automatically — then you are trying to figure out what decisions it has to make. Or think about treating a patient: you have to find the sequence of treatments you should give. It's not like somebody already knows it — if somebody already knew it, then the problem would probably already be easy; you would just use that expert's policy. But sometimes, you are trying to figure out the best way to make decisions. So that's why there's no, or very little, supervision. In some sense, you are not trying to imitate some known best prediction; you are trying to figure out the best decisions by interacting with the environment, by trial and error, just because humans sometimes don't know the optimal decisions either. But if you don't have any supervision, how do you figure out what the best decision is? The way to deal with this is that you learn not by imitating human supervision, but from some reward function. What does that mean? Humans specify the reward function — humans specify what you want the robot to do. If the robot picks up this object, that means success, so the reward is high; and if the robot fails to do that, the reward is low. We specify the reward function, and then we let the machine learning algorithm figure out how to maximize it. And another aspect — these are not completely mutually exclusive — is that you collect more data interactively. This is pretty much what I mentioned before: decisions also give you more data. You can make a decision deliberately to query some information, and even if you don't make decisions deliberately to query information, a decision still gives you additional information once you make it. Because the problem is sequential, you make some decisions and then you get some data, and this data can help you make better decisions in the next round. Of course, there are many different variations of the reinforcement learning setting, where sometimes you can assume you have more supervision — some expert can give you a demonstration; sometimes you don't have a reward function; sometimes you cannot have interactive collection of the data. There are a lot of different variants: basically, you can add an adjective before "reinforcement learning" to get a variant. You can say offline reinforcement learning — that means you have more supervision, in the form of logged data, but you don't have the interaction. And you can have other combinations. But this is the main set of features of reinforcement learning. Any question so far? I know this is very abstract. Yeah. Could you go over the little-or-no-supervision part again? Yeah. So basically — what could supervision be here? Because you are trying to make decisions, one possible supervision would be that an expert who knows how to solve the task demonstrates how to solve it. For example, suppose you want to control a robot: the expert just demonstrates to me how to make the robot solve the task, and that would be your supervision. But in many cases, you don't have that supervision. Sometimes you don't have it because humans don't know the best way to control the robot.
And sometimes you have the supervision, but you don't need it. Or sometimes you have the supervision, but it's hard to collect, because you have to find the expert. So that's the basic idea. But just to be fair, recently — I think in the last two or three years — people have been moving towards more and more human supervision, because it turns out there's a trade-off. If you don't use human supervision, that sounds great, because you don't need humans to demonstrate for you. But then you need to collect more data and do more trial and error: you have to try out different ways to control your robot, see which ones succeed and which ones fail, and learn from that. Whereas if a human tells you something, you don't have to try that many times — you can let the robot fall down less often. Maybe without even trying on the robot, you already know how to control it, just because you learned from the humans. So in the last few years, people have been moving towards using more and more human supervision — sometimes imperfect human supervision, and sometimes not expert supervision but just data from the past: you have seen the robot going around, doing some tasks imperfectly in the past, and you use that data as supervision to learn something better. That's even possible. But we are not going to go into all of these details. For this lecture, there is no supervision: you know nothing about what the right decision is, but you're going to figure it out by trying different strategies. So basically, all the algorithms will look somewhat like trial and error. You start with a robot; you don't know how to control it, you don't know how to use it to solve any task. Then you try different decisions, actions. Some actions just happen to pick up the object, for example, and some other actions happen to fail. You know which succeeded and which failed because that's your reward function: if you succeed, you get higher reward, and if you fail, you get lower reward. And then your algorithm tries to amplify, or boost, the chance of taking those actions that succeeded in the past. You keep doing this bootstrapping, and eventually you find one set of actions, or one policy, that succeeds with very high probability. And that's, roughly speaking, how the algorithms work. So basically, everything goes through this reward function: the reward function is the signal you rely on to learn what is good and what is bad. Any other questions? [INAUDIBLE] what type of [INAUDIBLE] performance regarding the reward functions? So the reward is something you collect, but you can also observe more. For example, when you manipulate a robotic arm, you can see where the arm moves. So you can also observe other things — I'll formalize that as well. Roughly speaking, if you treat a patient, you can observe something about how the patient behaves or responds; and if you train a robot, you can see how the robot moves. Those are additional pieces of information you can collect. And sometimes this information comes as pixels — you can collect the information in different ways.
Sometimes it's from a camera, sometimes it's from the internal recording of the system, sometimes it's from something else. Can you have multiple reward functions? Yeah, you can have multiple reward functions in many cases, but then you have to decide which one is more important — how do you balance them? In our lecture, we are going to have just one reward function. Sometimes you can also have constraints: for example, one reward and a bunch of constraints you have to satisfy. That's also a valid setup. I think I've probably said a lot about the high-level ideas, which may be a little bit hard to map to the real thing. So let me try to define mathematically how this works, using a very simple running example: you are trying to control a robot navigating a 1-D tape. Suppose you have a tape like this, and you have a robot which sits somewhere on it, and it can move to the left or to the right — you can take actions to move it left or right. And there's a goal somewhere else; maybe this is the goal. Basically, you want to move the robot to the goal — which is trivial: a human who knows this task would just keep moving toward the goal. But we are going to let the algorithm figure out the right strategy. So this is my running example, and I'm going to formulate this set of problems. The formulation is often called a Markov decision process. Let me define a bunch of terminology. There is something called a state, s, and you have a set of states, denoted by capital S. For this example, a state is basically the situation the robot is in. A state is — suppose the tape has, say, 10 cells. Actually, maybe let me just change the goal to be here, to be consistent with my notes; say it's at 10. Then the state is just the robot's position, a number from 1 to 10. You only need 10 numbers; there are only 10 states. That's the only thing you care about. But if you have a real robot, you may have to describe it by many other parameters, many other numbers — for example, the robot's velocity, height, center of mass, and so forth. Then you have a high-dimensional state: a lot of numbers to describe the state of the robot. In that case, a state will be a high-dimensional vector, and the family of states will be all the high-dimensional vectors that can describe the robot. Another example would be playing Go. There, the state would be the current board — what the whole board looks like. And then you have a lot of states, because there are 361 entries on the board, and every entry can be white, black, or empty — three choices — so you have 3 to the power of 361 possible states. So that's the concept of a state. And there is a concept called actions; typically we use a for an action, and there's a set of actions you can take, called A. These are all the possible decisions you can make — technically, in this language, they're called actions.
So for example, here, maybe I'll just allow two actions, L and R: you can move left or move right. When you control a real robot, you can probably take other actions — you can accelerate or decelerate, or change the force at different joints, and many other things. And if you play Go, then the action is just to put a stone on the board. OK, so that's the set of actions. And then I need something called dynamics, or transitions, to describe how the actions influence the states. This is called the dynamics; sometimes it's called the transitions, or transition probabilities, or state transition probabilities. So let's define a notation: Psa. With Psa, you're asking the question: when applying action a at state s, where should I arrive next time? It's the probability distribution of the next state — and the next state, notation-wise, is often called s'. In other words, Psa(z) is the probability that s' equals z, given s and a. So Psa is a probability distribution — a conditional probability distribution: conditioned on currently being at state s and playing action a, you ask what the distribution of the possible next state is. And there's some randomness here. For robots, you can sometimes think of it as deterministic: you do something, and the robot deterministically moves to some other place. But for many other environments, you take some action, and how the state changes is probabilistic; there is some randomness involved, and this distribution is trying to capture that. So for every s and a, this Psa is a distribution. You can also view it as a vector. Say m is the number of states, so the state space is just 1, 2, 3, up to m. Then you can list the probability of arriving at each of the states and write Psa as the vector (Psa(1), Psa(2), ..., Psa(m)): Psa(1) is the chance of arriving at state 1, and Psa(m) is the chance of arriving at the last state. This is a vector in R^m, and it's a probability vector: all the entries sum to 1. Of course, you can also have deterministic transition dynamics: only one of these numbers is 1, and all the others are 0. That just means that, given the same state and action, you always transition to the same next state. OK, so I'm going to give an example based on this robot thing, just to give you a concrete idea. Suppose you have this action set {L, R}, meaning you push the robot to the left or to the right. But let's say there's some randomness in this environment: even when you try to push it, it doesn't necessarily always move. With some good chance it moves; with some chance, you just fail to make it move.
So let's say the L action succeeds with probability 0.9. In this notation, that means: suppose you are at state 7 and you apply the action L — so we're looking at P7,L. You ask what the chance is of arriving at each other state. You're at 7 and you try to move left, but you know it only succeeds with probability 0.9. That means with probability 0.9 you arrive at 6, so P7,L(6) = 0.9; and with probability 0.1 you stay at 7, so P7,L(7) = 0.1. And with probability 0 you arrive anywhere else — for example, P7,L(5) = 0, and likewise for all the other states. So that's one example of the transition dynamics. And then you can write down the transition dynamics for the other states and actions. For example, consider P7,R: now you're asking, at state 7, if I try to move right, where should I arrive? Say the R action succeeds with probability 0.8 — I'm making this up, just making the example a little more complicated to keep it interesting. Then with probability 0.8 you arrive at 8, so P7,R(8) = 0.8; with probability 0.2 you stay at 7, so P7,R(7) = 0.2; and with probability 0 you arrive at any other state. That means P7,R(5) = 0, and P7,R(z) = 0 for any z not equal to 7 or 8. And you can define whatever transitions you want. In the same way, you write out all the transition probabilities for every state-action pair. OK, so that's my description of the environment. By the way, by "environment" people generally refer to this entire system — how things change based on your actions. The environment basically just means this entire system.
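To make this concrete, here is a minimal sketch that writes out exactly these transition probabilities for the 10-state tape, with the 0.9 and 0.8 success probabilities from the example. One caveat: the lecture doesn't say what happens at the ends of the tape, so clamping the robot at the boundary is my own assumption:

```python
import numpy as np

M = 10                            # number of states; index 0..9 stands for states 1..10
SUCCESS = {"L": 0.9, "R": 0.8}    # success probability of each action, as above

# P[a][s] is the probability vector P_{s,a} over next states -- an element of R^M.
P = {a: np.zeros((M, M)) for a in SUCCESS}
for a, p in SUCCESS.items():
    for s in range(M):
        # Clamp at the ends of the tape (my assumption, not from the lecture).
        target = max(s - 1, 0) if a == "L" else min(s + 1, M - 1)
        P[a][s, target] += p      # the push succeeds
        P[a][s, s] += 1 - p       # the push fails; the robot stays put

# Every row is a probability vector: the entries sum to 1.
assert np.allclose(P["L"].sum(axis=1), 1.0)
print(P["L"][6])  # state 7 in 1-based terms: 0.9 mass on state 6, 0.1 on state 7
```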
OK, so now I have defined the transition probabilities, and now I have to talk about sequential decisions. So far, I've only talked about how one action affects the system. With sequential decisions, you interact with the environment, or the system, like the following. First, you have an initial state s0. Let's say the initial state is given, but sometimes you can also say it's randomly drawn from some distribution. Then the algorithm chooses some action a0 from the action set. And after the algorithm chooses the action, the environment's step is to sample s1 from the corresponding distribution: s1 is sampled from P_{s0,a0}. P_{s0,a0} is basically: given you are at s0 and play a0, what's the probability of arriving at each new state? You sample one concrete state from this distribution. Then the algorithm chooses a1, another action in A. But a1 can depend on what? a1 can depend on s1 and s0. So s1 is considered as something given to the algorithm: after you play the action, you observe more information, and that information is s1. Then you can pick your new action a1 based on s1 and s0. And you just keep doing this: in the next round, s2 is generated from P_{s1,a1}, and then the algorithm picks a2, which can depend on all the historical observations you have seen. So that's the idea. For example, with this robot: maybe you start from state 3, and you apply the R action; then with some chance you arrive at 4. Suppose you do arrive at 4; then you decide again what your action should be. You say, OK, my action should still be R, and then I observe whether it moves — with some chance it will. And you just keep doing this. OK, so that describes the decision process. And there's one thing we haven't described, which is the reward: how do we decide whether you eventually succeed or not? There's something called a reward function. The reward function is a function that maps the set of states to a real number. You write it as R(s): s is a state, you apply the reward function, you get R(s). In other cases, you can also have a reward function that depends on the action, or on the next state. For the purpose of this course, let's say the reward function only depends on the state. So for example, for this robot case, suppose you want a reward function that characterizes whether you achieve the goal, state 6. You can just define R(6) to be 1.0 — achieving 6 gives you a high reward. And then you say R(s) is, say — I'm making this up; it doesn't really matter exactly — very small, or even negative, if the state is not 6. If you define a reward like this, you are in some sense encouraging the algorithm to reach 6 and not any other state, because at other states you get negative reward, and at 6 you get positive reward. But this is only the reward for one step. Can you have a reward that depends on states and actions? Say you want a certain action to be more expensive, and another action to be less expensive? Yeah, you can do that. That's what I said: your reward function can be a function of s and a as well, in many other cases. But for simplicity, I'll say the reward is a function of the state. The methods don't change much — it's almost the same. And sometimes, actually, it can depend on the state, the action, and the next state s'. [INTERPOSING VOICES] OK, cool. But this is only about one step. Eventually, you have to care about the sequential decision-making — it's not just about one step. So the total payoff — sometimes it's called the total return — is defined to be R(s0) + R(s1) + ... + R(st). This is the total reward: basically, just the sum of the rewards at all the steps.
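Here is a hedged sketch of this interaction loop, reusing the transition table P from the previous sketch: sample the next state from P_{s,a}, collect R(s) along the way. The reward values follow the example above, with the "very small or even negative" off-goal reward filled in as -0.1, which is my own made-up number:

```python
import numpy as np

rng = np.random.default_rng(0)

def R(s):
    # Reward example from above: high at the goal, slightly negative elsewhere.
    return 1.0 if s == 5 else -0.1   # index 5 is state 6; -0.1 is my filler value

def rollout(P, policy, s0, T):
    """Play the MDP for T steps; return the visited states and the total payoff."""
    s, states, total = s0, [s0], R(s0)
    for _ in range(T):
        a = policy(s)                            # the algorithm chooses an action
        s = rng.choice(len(P[a][s]), p=P[a][s])  # the environment samples s' ~ P_{s,a}
        states.append(s)
        total += R(s)                            # undiscounted total payoff so far
    return states, total

# For example: always push right, starting from state 3 (index 2).
states, total = rollout(P, lambda s: "R", s0=2, T=20)
```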
Now, here I didn't specify how many steps we can have. If you have an infinite number of steps, this doesn't seem to make a lot of sense, because your reward can go up to infinity: if at every step you get a reward of, say, 1, then the total will eventually be infinity. And that's not very informative, because you cannot compare one infinity with another infinity. There are two ways to deal with this. One way is to say you have a discounted reward, but with an infinite horizon. By the way, horizon means how many steps you're going to play in this sequential game. The discounted reward is the following: R(s0) + gamma R(s1) + gamma^2 R(s2) + ..., where gamma, less than 1 and larger than 0, is the so-called discount factor, and here the sum goes to infinity — you have an infinite horizon. What happened to the horizon variable — is it still T? Here I'm being vague: I'm taking the sum to infinity. [INAUDIBLE] In this case, [INAUDIBLE]? Right. The undiscounted total payoff doesn't really make a lot of sense if you have an infinite horizon, because it's going to be infinity. So this is the real definition: infinite horizon with discounted reward. And the reason you want this discounted reward — there are several ways to think about it. One is that you can think of it as interest, like in finance, where a reward you get in the future isn't worth as much as a reward you get right now, because there's an interest rate or an inflation rate, something like that. Basically, you discount what you get in the future, exponentially: if you get some reward after t steps, you discount it by a factor of gamma to the power of t. Just like in finance or economics, your return in the future isn't worth as much. That's one way to think about it. And another, technical, way to think about it is that if you do this, then even with an infinite horizon your total reward is always bounded. Suppose — wait. Did I? Oh, this was from last week. So suppose your gamma is less than 1 and bigger than 0, and suppose each step's reward is bounded between minus M and M. Then your total discounted payoff is at most M + gamma M + gamma^2 M + ...: the first step you can get at most M, the second step at most M times gamma, the third at most M times gamma squared, and so forth. This series converges, so the total sum is at most M over (1 minus gamma). So you're guaranteed to have a bounded return, and that's the maximum return you can get in the best case. Technically, this makes it possible to reason about infinite horizons, because your return is always bounded — there's always a meaningful number for the return. You have some questions? I thought so. Oh, no questions, OK. Now, if you look at RL papers in the literature, people also sometimes talk about a finite horizon, which means you just have a hard cutoff for how long you play. So there is another way to formalize the problem, which we are not going to talk about in this lecture, but I'm just going to tell you that such a definition exists.
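A quick numeric check of that boundedness argument — a sketch, with made-up values of gamma and M:

```python
import numpy as np

gamma, M, T = 0.9, 1.0, 10_000
rewards = np.full(T, M)   # worst case: the maximum reward M at every step

discounted = np.sum(rewards * gamma ** np.arange(T))
print(discounted)         # ~10.0: the series converges even over 10,000 steps
print(M / (1 - gamma))    # 10.0 exactly: the geometric-series bound M / (1 - gamma)
```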
And then your reward will just be-- you don't have to have discount factors. You just add up the first T steps. So this finite horizon thing and the infinite horizon don't have fundamental differences from a technical point of view. If you know how to solve one, you basically know how to solve the other. Of course, there are some differences in the dependencies if you really care about the theory: here, your dependencies depend on gamma, and there, your dependence will be on T. But fundamentally, they don't really matter that much. The infinite horizon case is a little bit easier to understand-- at least for the beginning, if you just derive the math, the math is cleaner. So that's why we do the infinite horizon case. And by the way, people typically do use this discount factor, and gamma is something like 0.99. And sometimes, in extreme cases, people even do 0.999. So if you are in this regime, the way to think about it is the following. Let's say gamma is 0.99. What does this really mean? It means that you have to take a power T of order 100 to make gamma to the power T noticeably different from 1. So 0.99 to the power 10 is still pretty close to 1. How to do the math here, if you really care: (1 minus epsilon) to the power t is close to 1 minus epsilon t, at least when t is small. So if you do the calculation, 0.99 to the power 10 is still close to 1-- it's about 0.9. But if you raise the power to 100, then you're going to have-- I think this is about 1/e, something like 0.37-- [INTERPOSING VOICES] Yeah, exactly. So basically, this is saying that the power has to be large enough for the discount factor to start to matter. When the power is only 10, it doesn't really matter. And what exactly the power has to be: if you do the math, gamma to the power of 1 over (1 minus gamma) is something like 1/e. So only when your power becomes about 1 over (1 minus gamma) do you start to see the effect of the discount factor, and after that it decays pretty fast. So in some sense, if you really want a way to translate between finite horizons and infinite horizons, this quantity, 1 over (1 minus gamma), is your effective horizon, because when t is much, much bigger than 1 over (1 minus gamma), the discount factor just starts to be super small, so you don't even have to care about those steps. OK, so I guess, finally, after using six boards, I've defined this MDP. So this whole formulation is called the MDP, Markov decision process. And you can see that this Markov decision process is defined by a bunch of concepts. One thing is the set of states. Another thing is the set of actions, and the set of transition probabilities P_sa, for s in S, a in A. And you have a discount factor. And you have a reward function. So basically, after you specify these five things, you specify an MDP. And that's a well-formulated problem.
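To make this concrete, here is a minimal sketch in Python of the five pieces just listed, loosely following the robot example; all the specific numbers (the 0.9/0.1 transition chances, the reward values) are made up for illustration.

```python
states = [1, 2, 3, 4, 5, 6]
actions = ["L", "R"]
gamma = 0.99  # discount factor, 0 < gamma < 1

def step_dist(s, a):
    """Transition probabilities P_sa: with chance 0.9 the robot moves one
    cell in the chosen direction, with chance 0.1 it stays put (made up)."""
    target = min(s + 1, 6) if a == "R" else max(s - 1, 1)
    return {s: 1.0} if target == s else {target: 0.9, s: 0.1}

P = {(s, a): step_dist(s, a) for s in states for a in actions}

# Reward function R(s): high at the goal state 6, slightly negative elsewhere.
R = {s: (1.0 if s == 6 else -0.1) for s in states}

def discounted_return(rewards, gamma):
    """Total discounted payoff R(s0) + gamma*R(s1) + gamma^2*R(s2) + ..."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# The effective-horizon arithmetic from above:
print(0.99**10, 0.99**100)   # ~0.904 (still close to 1), ~0.366 (about 1/e)
print(1 / (1 - gamma))       # effective horizon ~100 for gamma = 0.99
```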
And the goal of the MDP is that-- maybe I'll just write the goal here even though it's pretty clear-- the goal is to maximize the discounted payoff. So basically, given an MDP, you want to figure out how to maximize this discounted payoff. Just to clarify [INAUDIBLE] range, basically, [INAUDIBLE]? The payoff means the discounted sum of the rewards, and reward basically means one step. People sometimes also call the summation version the return, and occasionally people just call it reward. But I like to differentiate the terms just to make it not too confusing. OK, any questions about the formulation? So far, we didn't really do any derivations-- all of these are definitions. It's like a world view: how do you view this world, in some sense? What are the most important concepts, and what are the goals? I'm running a little bit slow, but it's fine. So now the next question is, how do we solve this problem? How do we find the best actions? And for starters, we're going to assume that these P_sa are given. So you know the transition probabilities; you are just trying to find the best actions to take to maximize the reward. But in reality, you don't necessarily know the P_sa. You don't know the transition probabilities; you have to somehow learn them from observations. For this course, we don't really talk too much about how to learn the P_sa's, so you can mostly assume that the P_sa are given, and the only question is to figure out the best action to take. So the first thing to realize is that there is the so-called Markov property. I didn't emphasize that, but let me do it right now. The Markov property means that when the environment advances the state, it only looks at the previous state and the previous action. How does the environment change the state? It only depends on the previous state and the previous action. That's described in this P_sa framework, where the next state only depends on the previous state and the previous action. So in some sense, in your states there is a Markov property: you only have to look at the previous state to decide the next state. You don't have to look at the entire history. So because of this Markov property, when you make decisions, you only have to look at the immediate state, the current state. The optimal decision at time t-- maybe let's call it action, just to be consistent with the terminology-- the optimal action at time t only depends on the state s_t. Because, anyway, after you see s_t, you can forget about the history; the history doesn't really matter. Conditioned on s_t, the history is independent of the future. After you see s_t, you know everything about the current configuration. So you only have to use s_t to predict, to make decisions, and to maximize your reward. So because of this, there's this concept called a policy. A policy is a function that takes in a state and outputs an action. So technically, it's a map from state to action: the action is equal to the policy applied to a state.
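As a tiny illustrative sketch (this particular policy is hypothetical, picked by hand for the robot example rather than computed by any algorithm):

```python
# A policy is just a map from states to actions. For the robot example,
# a natural hand-picked policy: keep moving right until you reach the goal.
def pi(s):
    return "R" if s < 6 else "L"  # the action at the goal never matters here

# By the Markov property, the chosen action depends only on the current
# state, not on how we got there:
a0 = pi(3)  # "R", whatever the history that led to state 3
```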
So basically, you only have to look for policies instead of looking for the entire trajectory of actions, because, anyway, the way you make decisions is that you look at the immediate current state and then you apply some function to the state to get your action. So basically, the unknown thing becomes a policy instead of a sequence of actions. Does that make some sense? Isn't this-- at the beginning, you said you were trying to avoid a greedy algorithm. And isn't this really greedy, that you're just looking at the current state and the set of actions available right now? That's a great question. So the question was whether, if you do this, it sounds like it's greedy. It's not, in the following sense. When you decide what policy to use, you do have to think about the long-term ramifications of your action. It's only that your action doesn't depend on the history-- this is more about not having to care about the history. You just condition on today: what happens today is all that matters, and I can forget about how I arrived at this situation. For example, if you have a robot, and the robot has already dropped the bottle, you don't care about how the robot dropped the bottle. You just care that the bottle is now on the ground, and I have to go pick it up. So this is more about forgetting the history. But when you decide the policy, you do have to think about the future. You'll see that when we optimize the policy, when we find the best policy, we do think about the future a lot. Yeah, I'll go back to that as well. Cool. So this is the first thing: you only have to find a policy. But this policy is not something that's trivial to find, because this policy is a function. You have to figure out the best action for every state, and that will give you the so-called policy. So maybe one way to think about it is that, suppose you want to move your robot to the goal: the policy would be that if the state is to the left of the goal, your policy should take the right action, and if the state is to the right, you should take the left action. And, by the way, the policy can also be randomized. Here, I'm looking at deterministic actions: you have a state, you output a single action. For every state, you have a single action-- that's a deterministic policy. But in many cases, the policy could be a randomized function: conditioned on a state s, you can output a distribution over actions and choose randomly from it. For the purpose of this lecture, I'm not going to have randomized policies; for the next lecture, I think I'm going to talk about randomized policies. So now, how do I find the policy? This sounds a little easier than finding a sequence of actions, but how do I do that? Let me introduce another notion, which is called the value function. So this is a value function V^pi-- the value function of a policy. This is a function that maps a state to a real number. In some sense, this is trying to capture the value of the state under this policy pi. So basically, V^pi(s) is defined to be, in words, the total payoff obtained by executing policy pi starting from state s.
So basically, you start from state s, and you keep executing your policy pi iteratively: every time you see a new state, you apply your policy pi. And then you collect some total discounted payoff-- I'll always have discounts for the rest of the lecture. So I compute the total payoff, and I call that the value of the state s. This is demonstrating how good this state s is under this policy pi; it's a property of both pi and s. If the policy is good, you get a better payoff. And if the s is good-- if s is, say, right at the goal-- then you probably get a better payoff because you don't have to move anything. So this is just-- go ahead. A question about this [INAUDIBLE] policy that you're talking about here in the value function. When you say that you start the policy from s, is the policy being restricted to a certain state, or are you saying that the policy takes an action and then it goes to [INAUDIBLE] three more candidates, and then you compute a new value for each of them [INAUDIBLE]? Do you expand? Do you just [INAUDIBLE] also this calculation? Yeah, I think that's a good question. Maybe let me just define it-- this was just an intuition so far. So what does this really mean? You start with s0 equal to s. And then you say a0 is equal to pi(s0). And then you say s1 is sampled from P_{s0,a0}. So you start with s, and then you take the action a0 according to the policy pi, and then the environment takes a step and draws the next state given s0 and a0. And then you play a1 according to the observation s1-- a1 is pi(s1)-- so on and so forth. You play this game. And then you look at the reward that you've accumulated throughout this process, out to infinity, and you take the expectation, conditioned on s0 equal to s. And this is the definition of the value function. The value function is the expected total discounted payoff of this game. The process is that you start with s, you always play this policy, and the environment does what it should do. So that's the value of this state. I have a question. How do you select the s1 and s2-- basically, all the future states? [INAUDIBLE] the current states? All the future states are selected by the environment: s1 is selected based on s0 and a0. OK. And s2 is selected by-- maybe let's just continue here. I was having an issue with whether you include all the future states that are possible after a0, or you just go through them all? I'm sampling-- Oh, you're sampling. I'm sampling from the environment. If the environment is deterministic, it's just fixed: the environment decides it. If the environment is random, then you sample one from it. But eventually, you take the average over all the possibilities. So basically, you're simulating a world with all the possibilities. But, of course, each possible future has a different probability: some are more likely to show up, some less likely. And then you look at the reward for every possible future, and you take the average over all the possible rewards. So do you take the average of all possible rewards for s1?
No, for all of them: R(s0), R(s1), R(s2), and so on. So basically, you apply this policy, you try this policy in the real world-- and the real world is stochastic, something random may happen-- but you just try this out, you collect all the rewards, and you take the expectation of the total reward. [INAUDIBLE] This is a definition I'm trying to give you. How you really compute it, that's a different question. OK. This is the definition of the concept. So why am I defining this? There are two reasons. One reason is that this captures the value of the state s: if the state s is good, that means this state is a good initial state-- if you start from it, you can accumulate more reward. And it also describes the goodness, the quality, of the policy: if the policy is good-- if it just keeps doing the right thing-- then this value will be higher, because you get more and more reward. So in some sense, the problem is really just trying to figure out the right pi that optimizes your value function. So you can reformulate the problem. Before, we were trying to find the sequence of actions to maximize the reward. Now, we can change our perspective and say that we are trying to find the policy that optimizes the value function. So basically, your new goal is that you maximize, over all policies, this V^pi(s0). So here I'm assuming s0 is deterministic, is given. Suppose you are given some s0; then the question is to find the best policy such that the value of s0 is maximized under that policy. So in some sense, if you can figure out what this number V^pi(s0) is for every policy pi, then you can just enumerate over all policies and see which one has the best total return. Of course, that might not be an efficient algorithm, but conceptually that's what we're trying to do: figure out the return for every policy, and then pick the best policy. So computing this V^pi is sometimes called policy evaluation-- you are evaluating how good this policy is. And once we know how to do the policy evaluation, then we can try to maximize over the policy. So the first thing I'm going to answer is, how do you do the policy evaluation? It turns out that, basically, you just have to do a recursion. So what does that mean? Suppose we think about the value at state s-- let's just use the definition. The definition is that you start from state s, and the value is the expectation of the total discounted payoff, assuming that you start with s0 equal to s. And because s0 is equal to s, you can say this is equal to R(s)-- this is the reward you get in the first step-- plus the discount factor times the rest: you pull one gamma out, because all the other terms have at least one gamma in them. And then what remains is R(s1) plus gamma R(s2) plus gamma squared R(s3), so on and so forth. Note that I pulled one gamma out: before, it was gamma squared R(s2), but now it has only one gamma.
And if you look at the rest of this term, it is actually something meaningful: this term is really just the total payoff obtained starting from s1. Basically, it's what you get if you start with s1 and apply this policy iteratively-- the payoff you should get, without the extra gamma factor. So that means this quantity is really, literally, just V^pi(s1). So you've got a recursion, in some sense, between V^pi(s) and V^pi(s1). So maybe more formally, I can write it like this. Technically, I should be careful about where the expectations go-- let me not be too technical here, but you can probably see what I mean. So here, V^pi(s1) is basically the reward you get from executing the policy from s1. But why do we still have an expectation here? This is because s1 is also random: it's not deterministically determined. s1 is drawn from the environment. So s1 has this distribution: s1 is drawn from P_{s0,a0}, where a0 is pi(s0)-- so this is P_{s0,pi(s0)}. Any questions so far? So maybe, just to be more explicit: this is equal to R(s) plus-- if you write out this expectation, you can write it as a sum. You draw s1 from this distribution, which means that for every possible s1 in the set S, you look at the probability of s1-- the chance that you see s1 in the next step-- and you multiply it by V^pi(s1). That's just how I expand the definition of expectation. And it doesn't really matter what variable I use for s1; I can use any variable, so maybe I'll just use s' to be consistent with the standard notation. So basically, you loop over all possible s', all possible next states. You first look at the chance of arriving at that state s', and then you multiply that by the value function at s'. So this is just equivalent to the expectation above. OK, so why is this useful? First of all, let me say this is called the Bellman equation. It's an equation about the value function V^pi, and it's often called the Bellman equation. And technically, sometimes this is called the Bellman equation for V^pi, because there is going to be another Bellman equation with exactly the same name-- people also call it the Bellman equation for some other quantities, and we will define it in a moment. So this Bellman equation-- why is it useful? It's useful because it is linear in the V^pi(s)'s. You can think of V^pi(1) up to V^pi(M)-- recall that M is the number of states I defined-- as M variables. And the Bellman equation gives M equations in these variables. Why are there M equations? Because this is true for every s: for every s, there is an equation that involves these variables, V^pi(s) and the V^pi(s')'s, in a linear way. So the Bellman equation is a system of linear equations in the variables V^pi(1) up to V^pi(M). So to figure out V^pi(i), you just have to solve the system of equations. I'm not sure whether this is too abstract, so maybe I'll just give you a concrete example.
So for this concrete example here, if you write out this Bellman equation, what happens is the following. Say you're trying to figure out the equation for V^pi(6). If you think about the equation, your first term is R(6)-- this is the reward you get in the first step. And then you add gamma times the reward you're getting in the future: you have the sum over s' in S of P_{6,pi(6)}(s') times V^pi(s'). You plug in all of these numbers, and you get an equation that depends on V^pi(6) and all the other V^pi(s')'s. But this is a linear equation: think of all of these as variables-- it's an equation in a bunch of variables, but it's linear in those variables. And you can write this for every state: you can say V^pi(5) is R(5) plus dot, dot, dot. And you have a system of equations, where each equation is linear in the variables. So basically, this means you can compute V^pi(s) for every s by solving the linear equations. And because they are linear, you can use an efficient solver: one way is to invert the matrix, or you can just invoke some off-the-shelf algorithm to solve the linear system as a sub-module. Any questions? Is it possible there are infinite solutions? Can you say it again? Is it possible there are infinitely many solutions [INAUDIBLE]? So I think, in this case, it's just not possible. Why it's not possible is probably not super obvious to see, but at least it passes a sanity check: if you count the equations and the variables, there are exactly the same number, so typically you should have a unique solution. And I think, in this case, you can prove that there is a unique solution, just because this set of equations has some special properties-- but maybe let's not get into that too much. OK, great. So basically, we know how to evaluate V^pi(s). In this equation, would P_sa be different depending on the initial state? Can you say it again? The P_sa-- this one? The probability? This one? No, this P_sa(s')-- this P is the dynamics. It's the transition probability, which is global; it's a given property of the MDP. For example, if you subtract this first state's equation-- so the second one would be just R(6) minus [INAUDIBLE]? If you subtract these two? Yeah. Oh, right. That's a good point. Sorry. This s and a-- I should partially replace them: the s here will be 6, so this should be P_{6,pi(6)}(s'). And then, if you write the next equation, you will have gamma times the sum of P_{5,pi(5)}(s') times V^pi(s'). So all the coefficients are different for different equations-- different lines. Yeah, thanks. OK, cool. So we have computed V^pi. But the next question is-- so basically, we have solved the policy evaluation. Now we need to figure out the maximization part: how do we find the best policy? So let me find some space.
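Before moving on, here is a minimal sketch of the policy evaluation just described, reusing the hypothetical states, P, R, gamma, and pi from the earlier sketches: first the definition of V^pi estimated directly by simulation, then the exact solve of the Bellman linear system (I - gamma * P_pi) V = R.

```python
import random
import numpy as np

def mc_value(s, pi, P, R, gamma, n_rollouts=2000, horizon=500):
    """Monte Carlo estimate of V^pi(s): the average discounted payoff of
    rollouts that start at s and follow pi (truncated at a long horizon)."""
    total = 0.0
    for _ in range(n_rollouts):
        state = s
        for t in range(horizon):
            total += gamma**t * R[state]
            dist = P[(state, pi(state))]
            state = random.choices(list(dist), weights=list(dist.values()))[0]
    return total / n_rollouts

def evaluate_policy(states, P, R, pi, gamma):
    """Exact policy evaluation: solve V = R + gamma * P_pi V, one linear
    equation per state, with an off-the-shelf linear solver."""
    idx = {s: i for i, s in enumerate(states)}
    P_pi = np.zeros((len(states), len(states)))
    for s in states:
        for s_next, prob in P[(s, pi(s))].items():
            P_pi[idx[s], idx[s_next]] = prob
    r = np.array([R[s] for s in states])
    return np.linalg.solve(np.eye(len(states)) - gamma * P_pi, r)

# The two should roughly agree, e.g. mc_value(3, pi, P, R, gamma) versus
# evaluate_policy(states, P, R, pi, gamma)[2]   (index 2 is state 3).
```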
It turns out that you can use a similar technique to find the best policy. So here are some definitions. First of all, let's define V*(s) to be the max over pi of V^pi(s). What does this mean? It means you look at all the possible policies you could use starting from s-- you try all the different policies starting from s-- and you ask which policy gives the best total payoff. The value of that total payoff is V*(s). So V*(s) is the intrinsic value of state s. We're asking how valuable this state s is, and "how valuable" is measured using the best possible policy. V^pi(s) depends on both pi and s: if you use a bad policy, V^pi(s) may be low. But V*(s) is saying, what's the value of this state if you use the best possible policy in the future steps? And then you can also define the so-called pi*. This is the so-called optimal policy. It is equal to the arg max over pi of V^pi(s)-- you are just taking the arg max of the previous statement. The policy that achieves the maximum is defined to be pi*. This is the optimal policy we are trying to find. So with these two notations, I'm going to first find out what V*(s) is, and then finding pi*(s) will be relatively easy, as you'll see. So the first question I want to answer is: how do we find V*(s)? What's the intrinsic value of each of the states? It turns out you can write a similar type of Bellman equation as for V^pi, except the whole thing involves max operators. So here is what I mean. If you think about V*(s), again, you're trying to get a recursion for V*(s), because V*(s) is complicated-- it depends on all the future states-- so you want a recursion. I'm going to be a little bit loose with the exact math here; if you want to make it rigorous, you have to justify it more formally, but I'm not going to deal with that. I'm going to be slightly sloppy just for simplicity. So you take a max over pi of R(s) plus gamma times this sum-- here I'm using the Bellman equation I just derived, because V^pi(s) is equal to this, right? So let's think about this. First of all, you are maximizing over pi, and this first term doesn't even depend on pi, so we can just take it out: this is R(s) plus the max of the rest. Now look at the remaining part. You want to maximize it, and pi shows up in several different places: pi shows up here and pi shows up here. The first occurrence matters in the sense that if you use a different pi, you're going to have a different transition probability, so you can arrive at a different set of states s' with different probabilities. And this second pi is trying to capture what happens after this step: after we already see s', what's the future reward? This V^pi(s') is basically telling you, if you have s', what's the future possible payoff after s'? So I'm going to be a little sloppy here, but let's say we optimize this first occurrence of pi(s) first. If I choose the pi such that this is the best, what you should do is try to make pi(s) give you the best action.
So basically, what I'm saying is this-- maybe let me write it down, and then it's easier to explain what I want to show. I'm going to choose pi(s) to be the action a that maximizes this. pi(s) is some action, so I'll just write a here and say I try to choose the best a such that pi(s) is equal to a, and I still want to maximize this. However, pi affects two things: pi also affects what happens in the next steps, so I also need to make sure pi makes the future payoff the biggest. And the nice thing is that this second term is something already defined-- it's a recursion. This is just V*(s'). So if you look at the final equation, you are trying all the possible actions a you could take at this step-- that's why you take a max. So you try all possible a's, and use-- oh, I guess-- sorry, my bad: the subscript is s, a. For any possible a, you have a probability of arriving at s'. And then you say that after you arrive at s', you use the optimal policy, so you get V*(s') starting from there. And this part-- the sum-- is the best reward you can get if you apply action a at this step: you're going to have some transition probabilities for arriving at s', and then after you arrive at s', you have some maximum possible reward V*(s'). So the sum is the best possible reward you can get if you apply action a at this step. And then you max over a, and that's the best thing you can do. So I guess, similar to earlier, when finding the optimal policy, again, we solve these equations for all of those variables to get the optimal policy? Yeah. OK. So that's the next step-- that's a good question. But maybe, any other questions before we move on to the next step? OK, so great. So you get the equation. And let's see what these equations are. For every s, you have an equation. So you have the same number of variables and the same number of equations: M variables and M equations. But the problem is that now the equations are not linear anymore. [INAUDIBLE] So they are not linear equations. You have nonlinear equations in M variables, and you have M of these equations. So that's the challenge: there is no off-the-shelf solver you can use to solve a set of equations that are nonlinear. And that's why we need to introduce the so-called value iteration. So how do we solve these equations? We solve them by the so-called value iteration. First of all, let's define our notation: let V* be the vector (V*(1), ..., V*(M)). This is a vector in R^M. I view this function as a vector: because it only has M possible inputs, I can view the function as a vector of dimension M. And then my equations can be written like this: V*(1) is equal to something like R(1) plus some max term, and V*(2) is equal to R(2) plus some max term, something like this. That's my system of equations-- I have M equations, and each equation looks like this. And I can abstractly write this as follows: take this whole vector, and call it V*.
And the right-hand side is something that involves V* again. So you call this whole thing B(V*). Basically, this is just a function of V*: I'm abstractly writing the right-hand side as a function of V*, which gives you a vector-- this is also a vector. So then my equation can be written as V* is equal to B(V*). I'm not doing anything deep; it's just rewriting the thing with a very abstract notation. So this is the form of my equation: V* is equal to B(V*), and B(V*) is the right-hand side of the Bellman equation. So what I'm going to do-- the algorithm for solving the equations is very simple. This is taking inspiration from the so-called fixed point problem in math. If you haven't heard of it, it doesn't matter-- don't worry. But roughly speaking, you think of this B as some operation, and you say that V* is a fixed point of this operation: you apply this operation to V*, and you arrive at the same thing. That's the connection to the so-called fixed point problem. But even if you don't know the connection: there is some theorem in math which says that if you want to solve this fixed point problem, you just have to iterate until it converges. So what does that mean? That really just means you have this so-called value iteration. What you do is you initialize some V in R^M-- maybe you can just set V to 0; I think that's fine. You initialize somewhere randomly, or maybe just initialize to zero. And then you have a loop, such that every time, V is updated to B(V). And you just keep iterating. And there's a guarantee that you will converge to the fixed point; the fixed point will satisfy V equals B(V), and that's V*. And what does this update really mean? It means that you set V(s) equal to R(s) plus, basically, the max on the right-hand side of the Bellman equation. When you really implement the algorithm, you compute the right-hand side of the equation with the current, hypothetical V, and then you assign this value as the new value of V: you update V by the right-hand side of the Bellman equation. Here, the assignment notation means that I compute this value and then give this value to V(s). Is it guaranteed that-- Yes, so I think you can guarantee that there's a unique one, and you converge to it in a certain amount of time. I think that's a homework question, and the homework question has some hints on it. You basically compare the distance between this working V and the true V*, and you can see that the distance between the working V and the true V* keeps shrinking iteratively. I think I'm running out of time. Yeah, so there's one other algorithm, which is called policy iteration, which is very similar. But I'll just leave that to you all to read in the lecture notes. It's also not required for homework, so just optionally, you can read it if you're interested. Thanks. |
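A minimal sketch of the value iteration loop described above, again reusing the hypothetical states, actions, P, R, and gamma from the earlier sketches:

```python
import numpy as np

def value_iteration(states, actions, P, R, gamma, tol=1e-8):
    """Iterate V := B(V), where B(V) is the right-hand side of the Bellman
    optimality equation, until V stops changing (the fixed point V*)."""
    idx = {s: i for i, s in enumerate(states)}
    V = np.zeros(len(states))  # initialize V to zero
    while True:
        V_new = np.array([
            R[s] + gamma * max(
                sum(prob * V[idx[s2]] for s2, prob in P[(s, a)].items())
                for a in actions)
            for s in states])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# A greedy policy can then be read off V*: at each s, pick the action a
# maximizing sum over s' of P_sa(s') * V*(s').
```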
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Naive_Bayes_Laplace_Smoothing_I_2022_I_Lecture_6.txt | So I guess last time we talked about Gaussian discriminant analysis. I'll have a very brief review and talk about some of the remaining points that I didn't get time to mention last time. And then I'm going to move on to another case, about spam filtering, where we're going to have discrete x instead of continuous x. So last time, we talked about Gaussian discriminant analysis. And the general idea is that you first model p of x given y, and p of y. And what we do is that we say p of x given y is a Gaussian, for y is 1 or y is 0. I guess I need to remember to write bigger. So x given y equals 0 is from some Gaussian distribution with mean mu 0 and covariance sigma, and x given y equals 1 is from some Gaussian distribution with mean mu 1 and the same covariance sigma. And recall that we have this illustrative situation where you have some examples like this: these are positive examples, and there are some negative examples. And you believe that each of these subpopulations comes from a Gaussian distribution, with different means but with the same covariance. So that's the starting point we had last time. And then, after you have this probabilistic model, we also have this p of y is 1 is equal to phi. After you have the probabilistic model, then you can learn it from data by MLE. So we learn by maximizing: we take the arg max over our parameters, phi, mu 0, mu 1, and sigma, of the log likelihood-- it's the same as the likelihood; the arg max of the log likelihood is the same as the arg max of the likelihood. So I'm just writing the log likelihood, which is the sum of the log of the probability of xi given yi, plus the sum of the log of the probability of yi-- under, I guess technically under these parameters, phi, mu 0, mu 1, sigma. I don't have to write phi in the first term, just because it doesn't depend on phi. OK. So that's the methodology. And then we skipped a lot of steps, because these are homework questions. And I told you that you can solve this MLE problem analytically in this case, because this objective function is nice enough-- it's kind of like a quadratic function. So you can solve the maximization problem analytically, and you get some formula for these quantities. So the formula for phi was something of this form, with this indicator function: basically, the numerator is the number of positive examples, divided by the total number of examples. And we also have formulas for mu 0 and mu 1. So mu 1 was something like the average of the xi's in the positive group, the positive examples: you take the sum and divide by the total number of positive examples. And you also have mu 0, and also sigma. So each of these has a formula that depends on the data. So this is how you learn the parameters from the data. And then we talked about, with these parameters, how do you do the test? How do you do the inference, the prediction? We've already learned these parameters; now you are given an example x. How do you predict y using the parameters and given the x? And we said that you try to compute the arg max over the two choices of y: you want to look at which choice of y gives you the largest probability, given x and given the parameters.
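A minimal numpy sketch of these MLE formulas and of the arg-max prediction rule, assuming X is an (n, d) data matrix and y a 0/1 label vector (the variable names are mine, not from the lecture):

```python
import numpy as np

def gda_fit(X, y):
    """MLE for GDA with a shared covariance, following the formulas above."""
    phi = np.mean(y == 1)              # fraction of positive examples
    mu0 = X[y == 0].mean(axis=0)       # average of x_i over the negative group
    mu1 = X[y == 1].mean(axis=0)       # average of x_i over the positive group
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    sigma = centered.T @ centered / len(y)   # shared covariance estimate
    return phi, mu0, mu1, sigma

def gda_predict(x, phi, mu0, mu1, sigma):
    """Pick the y maximizing p(y | x), i.e. log p(x | y) + log p(y).
    The Gaussian normalizers cancel because sigma is shared."""
    inv = np.linalg.inv(sigma)
    def score(mu, prior):
        d = x - mu
        return -0.5 * d @ inv @ d + np.log(prior)
    return int(score(mu1, phi) > score(mu0, 1 - phi))
```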
And these parameters are the solutions you have computed from these formulas, right? So you compute which one-- oh, sorry, my bad, this is y-- you compute which y has the largest probability. And also, we discussed that to decide this, in some sense, you have a decision boundary. The decision boundary between the two choices is the set of x's where these two probabilities are exactly the same: where p of y is 1 given x is equal to p of y is 0 given x, both equal to 1/2. This is the decision boundary. And we computed the decision boundary, which turns out to be something like a linear function. So-- I guess maybe I'll just directly write out the decision boundary. We found that the decision boundary is the set of x such that theta transpose x plus theta 0 is equal to 0, where theta and theta 0 are functions of the parameters you have learned. There are specific formulas for theta and theta 0, which are the homework questions; but theta and theta 0 are functions of phi, mu 0, mu 1, and sigma. So when you really make the prediction, what you do is you find this decision boundary-- the family of x such that theta transpose x plus theta 0 is equal to 0. And then if this quantity is bigger than 0, you say it's positive, and if it's less than 0, you say it's negative. This is a very quick five-minute review of the last lecture. Very, very quick. Any questions? What would be the decision boundary-- or, actually, would there be a decision boundary-- if we have multiple labels? Yeah, there will be a decision boundary for multiple labels. And you basically just compute the decision boundary using this methodology. You're going to get-- So how about [INAUDIBLE]? Can you say it again? Like [INAUDIBLE]. What would be the decision boundary [INAUDIBLE]? For multiple labels? Yeah. I think if you have the same covariance, the decision boundary is still linear. But if you have different models, you may have different types of decision boundaries. So when it's going to be equal to a half, do we use the same [INAUDIBLE]? It wouldn't be a half, because you're going to have multiple choices of y here, right? So your decision boundary will be-- I think it's going to be more complicated. Suppose this is, say, label 0. Then you really have to literally solve this, right? You have to know: what is the region such that the maximizer of this is equal to, say, label 2? That's going to be described by some kind of linear boundaries, but it's going to be more complicated. Yeah. That's a great question. Any questions? Can the boundary be quadratic? It could be, depending on what probabilistic model you define. I think if the capital sigmas are all the same, it's going to be linear. And if you have different sigmas-- you have two classes, but with sigma 1 and sigma 2-- then I know as a fact it's quadratic. I saw some other questions as well. Any questions are welcome. OK. So this is just to give you a quick review of what we did last time. And you will see that when we discuss the new problem, we are going to have a similar methodology: you define a probabilistic model, you solve the MLE, you get some formula.
And then you get some parameters, and then use those parameters to predict the most likely y's. OK. But before we do that, I'm going to discuss one important thing, which many people actually asked about at the end of the last lecture-- they are great questions. So the question I'm going to discuss next is why this is different from what we have seen in the first two weeks. At the end of this, you have a decision boundary which is linear. So, at least superficially, it sounds like you are using a linear model to decide: you use this linear function to decide whether it's positive or negative-- you compare it with 0, and if it's larger than 0 it's positive, otherwise it's negative. And this is the same as the logistic regression that Chris talked about in the first two weeks. So why is this different from the so-called discriminative methods that Chris talked about? Here is the way I think about this. Suppose you have GDA on the left and logistic regression on the right. First of all, their assumptions are different. For GDA-- I guess I wrote the assumption there, so maybe I'll just write it here again-- x given y equals 0 is Gaussian, and x given y equals 1 is also Gaussian, something like this. And you also have y, which is from a Bernoulli. And when you do logistic regression, you just literally say that p of y is 1 given x-- your probabilistic model assumes that this has the form 1 over 1 plus e to the minus theta transpose x. And recall that when you write this, in places we are saying that x0 is 1-- that's Chris's convention from the first two weeks. But suppose you don't have that convention-- suppose x doesn't contain x0. Then you need another theta 0, something like this. If you include the x0, then you have a clean form. But they are the same: if, for today, we drop that convention for x, then you write it like this, and this exactly matches the form here. So basically, you can see that for GDA, you assume this bunch of things, and you find that it implies that y given x has this form. Recall-- maybe I didn't write this-- but this is equal to 1 over 1 plus e to the minus theta transpose x plus theta 0, right? So p of y equals 1 given x having this form is a conclusion that we got from the probabilistic modeling, from our mathematical derivation: we conclude that y given x should have this form. And in logistic regression, you just directly assume it. So in GDA, it's a conclusion, in some sense. And I guess we have to stress this many times: here, you model the joint probability distribution-- because if you know x given y, and you know y, then you also have the density of x comma y. But there, you only have y given x; you never have anything about x. You only have the conditional probability. So in some sense, the main difference between these two is that on the left-hand side, you have more assumptions. I think this is the key: you have stronger assumptions on the left-hand side than on the right-hand side. And in some sense, there's a general principle here about how you model the world-- what part of the world you want to model in your machine learning algorithm. So do you want to model both?
Or do you want to model only y given x? So how do you decide how much of the world you want to model? The trade-off is that if you have more assumptions, and the assumptions are correct, then this pretty much implies you have better performance. Of course, this is not a mathematical statement, but I think it's reasonably intuitive, because if you have more assumptions, it means you are using your prior knowledge about the world. Here, you are using the prior knowledge that x given y is Gaussian. And if you use that prior knowledge correctly, then you should have better performance. If I tell you everything, then of course you have the best performance-- suppose I tell you everything about mu and sigma, then of course your performance is the best. But even if I tell you a little more about x, then you should always have a little better performance. So that's the good thing about more assumptions. But the risk is that you may have wrong assumptions: you might make a wrong, or approximate, or not-exactly-correct assumption. And if you make wrong assumptions, then in most cases GDA will probably be worse off. For example, what if the data are not really Gaussian? What if this doesn't look Gaussian at all, and you still assume that they are Gaussian? Then you are going to have worse performance. And another thing I want to stress is that even though, superficially, the form here is the same-- when you make the GDA assumptions, you get this y given x of this form-- suppose you go through the left-hand side and numerically compute theta and theta 0. It wouldn't be the same theta and theta 0 as from logistic regression. You're going to get a different theta and theta 0, and that's why it makes a difference. The theta and theta 0 from logistic regression-- how do you get them? You just directly fit your linear model using logistic regression. But if you go from here to here, then you first learn mu 0, mu 1, sigma, and then you compute theta using mu 0, mu 1, sigma. So that results in a different set of theta and theta 0. And whether it's better or not, as I said, basically depends on whether your assumptions are correct, or how correct your assumptions are. You probably can never have exactly correct assumptions. But when your assumption is approximately correct, then you should do GDA. If your assumption is just completely wrong, then you probably should do logistic regression. Any questions so far? Well, [INAUDIBLE] because [INAUDIBLE]. You said that we have a different design in multiple cases. So how would this [INAUDIBLE] different? Is it because we are computing in that equation [INAUDIBLE]. Are they like-- I guess that's the formatting [INAUDIBLE] we end up not retrieving all the parameters, right, but we're not gaining all those [INAUDIBLE]. Right. When you do GDA, you are not training on theta directly. You are training on the mus and sigma directly, using those formulas. And then you compute theta as a function of mu and sigma. So it's definitely a different process, right?
You are not getting the theta and theta 0 in these two cases using the same process. In one process, it's kind of circuitous: you first have to compute mu and then theta. In the other case, you just directly fit this using a numerical algorithm, like gradient descent. [INAUDIBLE] Is it possible that we can get a different equation for p of y given x if we have, let's say, different assumptions? Definitely. If you change your assumption, you may have a different equation for p of y given x. And there's actually an interesting point I'm going to mention, maybe after I answer-- are there any other questions? So the interesting point I'm going to mention next is that it's also possible you change your assumption and you still have the same formula on the right-hand side. So you can have both: if you change the assumption, maybe you still have the same formula, maybe you have a different formula. So here is a concrete case of what I think is actually even more surprising: you change the assumption on the left-hand side, and you still have the same form of y given x on the right-hand side. So here is an example. I'm looking at, I think, one-dimensional x-- suppose x is one-dimensional; it's not very important exactly. So suppose you say x given y is a-- how do I-- Poisson distribution. Sorry, and I misspelled this word. So it's no longer a Gaussian; it's some other distribution. And I guess, because this is a Poisson distribution, x has to be an integer. And then you still have p of y is 1 is equal to phi. So we changed our assumption. And then, this actually still implies that p of y given x has this form, 1 over 1 plus e to the minus theta transpose x plus theta 0. So the form still looks similar. Of course, if you really numerically compute this, it would be a different theta and theta 0. Because theta and theta 0-- here I'm just writing them as generic variables, but actually they have meanings: theta is a function of lambda 1 and lambda 0, and theta 0 is also a function of lambda 1 and lambda 0. So the form is still the same, but numerically, if you use this model instead of GDA, you are going to get a different theta and theta 0. So a linear form doesn't necessarily mean everything; it also depends on how you learn this linear function. So actually, here we have three ways to learn it: we can use this model, we can use GDA, we can use logistic regression. They will all give you different theta and theta 0. And then which one will be better: if you compare GDA with this Poisson version of the generative learning algorithm, I guess the answer is that it mostly depends on which assumption is more likely to be correct. Of course, there is also some typing issue here, because here your x is an integer, while another model might deal with real-valued x. But basically, I'm saying that when you compare them, forget about those differences: which model works better probably depends on which assumption is more likely to be correct. And when you compare the generative algorithm with the discriminative one, I think it's the same thing, right?
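As a quick numerical sanity check of the Poisson claim above, here is a sketch comparing the posterior computed directly by Bayes' rule against the sigmoid form. The closed-form theta and theta 0 below are my own working-out of the log-odds, not something stated in the lecture, and the parameter values are made up.

```python
import math

# Hypothetical parameters for the Poisson generative model above.
lam0, lam1, phi = 2.0, 5.0, 0.3

def posterior_direct(x):
    """p(y=1 | x) by Bayes' rule with Poisson class-conditionals."""
    p1 = phi * math.exp(-lam1) * lam1**x / math.factorial(x)
    p0 = (1 - phi) * math.exp(-lam0) * lam0**x / math.factorial(x)
    return p1 / (p1 + p0)

# Sigmoid form, with theta and theta_0 derived from the log-odds
# (my derivation): theta = log(lam1/lam0),
# theta_0 = log(phi/(1-phi)) + lam0 - lam1.
theta = math.log(lam1 / lam0)
theta0 = math.log(phi / (1 - phi)) + lam0 - lam1

def posterior_sigmoid(x):
    return 1 / (1 + math.exp(-(theta * x + theta0)))

for x in range(8):
    assert abs(posterior_direct(x) - posterior_sigmoid(x)) < 1e-12
```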
So if your generative assumption is likely to be correct, then you should gain something from it. Otherwise, maybe you should just use logistic regression. And also, a little more general discussion: in some sense, you have two sources of knowledge-- the model learns from two things. One thing is your assumptions, and the other thing is your data. The assumptions mean: how do I probabilistically model all of these quantities? And the data is really just what you see. If you have more data, of course that's good; and if you have more assumptions and the assumptions are correct, that's good too. But, on the other hand, suppose you already have a lot of data. Then you have less need to use prior knowledge, because your data are already very telling-- the data are sufficient for you to extract whatever information you want. Then you don't really need to use prior knowledge, because if you use prior knowledge, you always have a risk of using it wrongly. So basically, the modern trend-- and we're going to talk about neural networks and deep learning in two lectures-- is that now we're in a setting where we have more and more data for many applications. And that's why the modern techniques, like deep neural networks, use fewer and fewer assumptions about the data. Just because it's not really worth it: suppose you apply this to images-- you'd have to have a model for x, for the image. Is that really worth the effort? Probably not. Of course, it depends on the case. But if you just want a first set of results, you don't have to model your x, because modeling x is very difficult, very challenging, and you may make mistakes. So you probably should just directly go for the more discriminative type of approach: you just directly model y given x if x is an image. But in some other applications-- for example, medical applications, or other applications of machine learning where you don't have enough data-- in those cases, I think you still have to use as much prior knowledge as possible. And actually, many times, people do even much more complicated modeling of x. Maybe you can use some more advanced ways to model your x, because you know what each coordinate of x means and what their relationships are, and you put all of this prior knowledge into the model for x. And then you finally get some y given x using this machinery, and then you predict. And that's more likely to work. So yeah. And another thing is that if you use more assumptions, then you are specializing to your application. That's another reason why, in modern times, people somehow don't use these prior assumptions as often: you've probably heard of neural networks, and one of the kind of magic things about them is that they work for many, many cases without much customization. If you use more assumptions, you have to use different assumptions for different applications.
Because for different applications, the data will probably have different structures. So you have to do it one by one. And that's actually what people did a lot of the time: they look at their domain, their questions, and study the structure. But these days, when people use neural networks, because you have enough data, you just drop assumptions and make the model very general, not specialized at all, and then you apply it to everything without thinking much about what the data look like. So we will talk more about neural networks; this is a preview, or kind of a connection, to what we're going to talk about next. And another big-picture point: one of the reasons, as I said, that this kind of GDA analysis is not used that often-- at least not as often as before-- is that before, you had to use these kinds of things, and now, at least, you have the choice to try something like neural networks, because you have more data. But still, I think if you are able to model the x, in many cases you can do better. And also, in some cases you just don't have y. For example, even when you have images, sometimes you don't have y at all; you only have x. And in those cases, you really have to model x, because that's the only place you can get information from. And another case-- we're going to talk about languages; in this lecture I'm going to talk about language as well, and later we're also going to talk about solving language problems with neural networks. There, you are just getting a lot of text from the internet, and there are no labels. Nobody's telling you which web page is about what. You just have raw text; we only have x. And in those cases, you really have to model x-- there is no way you can get around this. So modeling x is still very important; it's just less important for certain kinds of applications because we have more data. Any questions? OK, cool. So I guess, now, I'm going to move on to the next problem: the so-called spam classification. You are trying to decide whether a piece of text is a spam email or not. You have an email spam filter, and you want to know whether your email is spam or not. And we are still going to do a generative learning algorithm, and we are going to have discrete x. So this is another example of how you do the generative learning algorithm: you model x given y, you execute this pipeline, in some sense, and learn something out of it. So the first thing-- I'm going to get into more details of how we really approach this question. I have one quick question. With GDA and the spam filter, generative [INAUDIBLE] so are there more? There are many, many more. These are just examples. Right. So I guess the first question we have to answer to approach this is: how do you represent text? Text is symbols, like A, B, C, D. So you need to make it numerical, at least, to make the computer recognize it-- in some naive way, at least. So the first question is, how do you change the text to some x? You can call this a feature vector, or a representation, or something like that.
So you want to change this to some x in dimension d, and then you model x. So the first question is, how do you represent text? There are many ways to represent text. The way I'm going to show you is a very simple one-- I wouldn't quite say naive, but very simple. If you really deal with text in practice these days, you would use more advanced, deep-learning-based approaches, which we are going to cover a little bit in probably three to five weeks. So here, the way we do it is very simple. First, you look at the vocabulary. Suppose you have a vocabulary of maybe 10K words, and you list all of these words in a sequence in alphabetical order. If you open up a dictionary, the first word is probably always "a." According to some dictionary, the second word is "aardvark"-- I think it's a kind of animal-- and the third one is "aardwolf"-- I think it's another kind of animal. And then you list all the words. Maybe at some point you hit the word "book." And eventually, the last word at the end of the dictionary is a word I don't even remember how to pronounce-- I think I knew it the first time I taught this lecture, and then I forgot after a few years. Anyway, you list all of these. And then suppose you have a piece of text-- say an email that has just one sentence, something like "I buy a book." You want to turn this into a vector. The way we do it here is that the vector x has dimension d, where, remember, d is the size of the vocabulary-- d is equal to the number of words. And the vector that represents this sentence is a zero-one vector: x is really in {0, 1} to the power d, so every entry has only two choices. What you do is: if a word shows up in the sentence, you put a 1 in the corresponding entry. The word "a" shows up in the sentence, so I have a 1 there. "Aardvark" doesn't show up in the sentence, so I have a 0. Then at some point you have the corresponding entry for "book"-- "book" shows up in the sentence, so I have a 1 there. And somewhere in the list there is probably the word "I," and that entry would also be 1. And for all the other words that don't show up in the sentence, you just fill in 0 for the corresponding entries. You call this x your representation, or your feature vector, for this email, for this sentence. So basically, technically, x_j is equal to 1 if and only if the j-th word occurs in the email. There are actually many other ways to encode text, even before using deep learning techniques, but this is probably one of the simplest ones. And you can see that this representation of the sentence is, in some sense, super simplistic. For example, you don't care about the order of the words: suppose you have another sentence, "I a book buy"-- that sentence still has the same representation. Exactly the same.
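To make this concrete, here is a minimal sketch of this encoding in Python/NumPy-- the toy vocabulary and the whitespace tokenization are made up purely for illustration:

```python
import numpy as np

# A toy vocabulary, sorted alphabetically (a real one might have ~10,000 words).
vocab = ["a", "aardvark", "aardwolf", "book", "buy", "i"]
word_to_index = {w: j for j, w in enumerate(vocab)}

def featurize(email_text):
    """Map a piece of text to a binary vector x in {0,1}^d.

    x[j] = 1 if and only if the j-th vocabulary word occurs in the text;
    word order and word frequency are both discarded.
    """
    x = np.zeros(len(vocab))
    for word in email_text.lower().split():
        if word in word_to_index:
            x[word_to_index[word]] = 1.0
    return x

print(featurize("I buy a book"))               # [1. 0. 0. 1. 1. 1.]
print(featurize("I a book buy"))               # same vector: order is ignored
print(featurize("I buy a book and a pencil"))  # "a" twice still maps to a single 1
```

The last two calls illustrate exactly the two simplifications discussed here: reordering the words, or repeating a word, does not change the vector.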
And it doesn't care about the frequency of the words either. For example, suppose I have the sentence "I buy a book and a pencil." Then of course the representation will change, but the two occurrences of "a" still just produce a single 1 in the coordinate for "a," because the word "a" shows up in the sentence. You don't care how many times the word "a" shows up in the sentence; you just care whether each word shows up at all. And there are possibly other issues with this representation, but I guess these two are the most important: you don't have the order, and you don't care about the frequency. But that's what we work with, because it's easy, and you can do all of the math with this kind of model. Now, what's the next question? The next question is that we need to build a generative model for x and y: we model x given y, we model y, and then we do MLE and solve for the parameters, and so on and so forth. So basically, I can just erase this, and I'm going to redefine these three things. So how do you proceed? Now, because x is a binary vector-- it only takes values 0 and 1-- you need a distribution that can generate binary vectors. You cannot use a Gaussian here. And the technique we use here is the so-called Naive Bayes assumption. What this means is that you just assume x1 up to xd are independent conditioned on y: given y, you just independently draw x1 up to xd. Of course, this is not realistic-- this is definitely not exact. How realistic it is, I think, is subjective. But at least this is not exactly how people generate emails, right? A spammer is not saying: first I decide y, that this will be a spam email, and then, having decided y, I just start stringing words together independently at random. That's probably not how people write spam emails, and it's also not how people write ordinary, legitimate emails. But it turns out that in many, many cases, this kind of assumption is already pretty good. In the homework-- actually, we haven't decided whether we are going to include that homework question-- but at least there are cases where you can see this kind of thing can be very effective. Actually, if you really use this model-- even with this kind of crazy assumption-- and learn the parameters and use the model to classify spam, I think you're going to get more than 90% accuracy. Maybe these days it's not that effective, because the spammers are adversarial: they know what your prediction algorithm is, and they can change their strategy to fool you. But it was at least a reasonable approach if you go back 10 years. For sure, right? So it's kind of interesting: even though you make an assumption that is obviously not exactly right, because the assumption is somewhat correct to some extent, you can still get a useful outcome from it. And for our purposes here, to some extent I don't care that much about the assumption itself; it's more that I'm trying to demonstrate the methodology.
Like, I just want to give another example where you execute this probabilistic model, this flow, this pipeline, and show how to solve it. [A student asks, partly inaudibly, about the length of the text.] Right, so the question is, how does the length of the text matter? One interesting thing about this representation is that the length of the text doesn't really matter: you can encode any length-- any sentence or any document-- into a single vector of dimension d. You can say that's a good thing or a bad thing. And how do you decide the window size? Here, I don't think it matters that much-- I think you should just take the entire email, because that's the unit you are working with: for every email, you are classifying whether it is spam or not. You are not classifying whether a sentence is spam or not. That's why you treat the email as a single example. [STUDENT: x1 through xd are independent conditioned on y-- does that mean that one x appearing tells us nothing about whether another x is likely to appear? Or does it mean that all x's are equally likely to appear when y is 0?] It's more the first case. Certainly not all the words are equally likely to appear. Basically, I'm assuming that given y-- given you have decided whether this is a spam email or not-- the words are independent of each other. But they may have different probabilities. So let me proceed. What does this really mean? OK, now I'm going to do some math to expand this and parameterize it. The assumption really means that p of x1, ..., xd given y can be written as p of x1 given y, times p of x2 given y, and so on, up to p of xd given y. And now I just need to parameterize each of these probability distributions by some parameter. And if you think about it, what is each factor? It's really just a Bernoulli distribution, because xj can only take two values, 0 and 1. So basically, you just have to describe each of these probabilities by two numbers-- actually, by one number. So what you do is you parameterize the model with parameters phi sub j given y equals 1-- there is an index here, which I'm going to explain in a moment-- defined as the probability that xj is 1 given y is 1. And then, once you have this, you know the probability that xj is 0 given y is 1: it's going to be 1 minus phi sub j given y equals 1. This subscript is just notation-- it would mean the same thing if I wrote it as j comma 1. I write this subscript because it's a little bit more intuitive, but it's just an index, in some sense. Does that make sense? So basically, for every j, I'm going to have a parameter phi sub j given y equals 1, which is in [0, 1], and this parameter describes the distribution of xj given y equals 1. And for y equals 0, I'm also going to have a parameter, which I'll write as phi sub j given y equals 0; this is the parameter for the distribution of xj given y equals 0. [A student asks whether these are between 0 and 1.] Yes, they're between 0 and 1-- the closed interval, with the hard brackets. So basically, with these parameters, I can describe all of this; I can write out all of these numbers.
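In symbols, what we have so far-- just transcribing the board notation into formulas-- is:

$$
p(x_1,\dots,x_d \mid y) \;=\; \prod_{j=1}^{d} p(x_j \mid y) \qquad \text{(Naive Bayes assumption)}
$$

$$
\phi_{j|y=1} = p(x_j = 1 \mid y = 1), \qquad
\phi_{j|y=0} = p(x_j = 1 \mid y = 0), \qquad
\phi_y = p(y = 1),
$$

so that $p(x_j = 0 \mid y = 1) = 1 - \phi_{j|y=1}$, and similarly $p(x_j = 0 \mid y = 0) = 1 - \phi_{j|y=0}$.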
Because I have all the quantities, right? For example, what is p of xj equals 0 given y equals 0? It's going to be equal to 1 minus phi sub j given y equals 0. And then I also need something to parameterize the distribution of y. We called this phi before, in the GDA case. Now, just for the sake of distinguishing it from the other phi's, I'm going to call it phi_y; it serves the same role as phi did in the GDA case. Any questions? And then how do I proceed? I'm going to write down the likelihood and maximize it. All right. The likelihood is the probability of seeing the data given your parameters-- the likelihood is a function of the parameters. So what are the parameters? phi_y is one of the parameters, and also all of these phi's: phi_{j|y=0} and phi_{j|y=1}, for j from 1 to d. These are all your parameters. And there are two steps in expanding the likelihood. The first is that, by definition, the likelihood is the product over examples of the likelihood of each example, because all your examples are independent-- this is not the Naive Bayes assumption yet; this is just the examples being independent. So you can write it as the product over i of the probability of (x^(i), y^(i)) given all the parameters. In my notes there is a symbol that denotes the whole collection of parameters; it's just a convenience notation-- and when I use dot, dot, dot, I also just mean all the parameters. Of course, sometimes you can drop some of the parameters, because they don't matter for a particular factor. So far, this is the same as GDA. Then you can also use the chain rule to write each term as p of x^(i) given y^(i) and the parameters, times p of y^(i) given the parameters. And then I'm going to factorize again, this time across the dimensions of x. Now I'm going to use the Naive Bayes assumption, which is the factorization across the coordinates of x-- recall that the subscript j denotes the coordinate of x, and the superscript i denotes the i-th example. So what I'm going to do is use that assumption for each of the examples. What I get is: the product over i from 1 to n of p of y^(i)-- and if I'm careful, I only have to write phi_y there, because the distribution of y only depends on phi_y-- times the product across the d coordinates, j from 1 to d, of p of x_j^(i)-- the j-th coordinate of the i-th example-- given the label y^(i) and the parameters phi_{j|y}. Here phi_{j|y} is just shorthand for a family of parameters-- sorry, this is slightly abusive notation-- it means the collection of phi_{j|y=0} and phi_{j|y=1}. Does that make sense? [STUDENT: So for a fixed j, there are only two parameters?] Yes, that's right. If you look at a fixed j, you just care about phi for that j with y equal to 0 and y equal to 1.
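Putting the two factorization steps together, the likelihood and log-likelihood just described are:

$$
L \;=\; \prod_{i=1}^{n} p\big(x^{(i)}, y^{(i)}\big)
  \;=\; \prod_{i=1}^{n} p\big(y^{(i)}; \phi_y\big) \prod_{j=1}^{d} p\big(x_j^{(i)} \mid y^{(i)}; \phi_{j|y}\big)
$$

$$
\log L \;=\; \sum_{i=1}^{n} \log p\big(y^{(i)}; \phi_y\big)
       \;+\; \sum_{i=1}^{n} \sum_{j=1}^{d} \log p\big(x_j^{(i)} \mid y^{(i)}; \phi_{j|y}\big).
$$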
So we factorize these probabilities twice. The first time, I use the fact that all the examples are independent: that's why the joint probability over all examples is the product of the probabilities of each example. Then, for every example, I first apply the chain rule to get x given y times the probability of y, and then I factorize the x-given-y term into the product over coordinates. That second factorization is the Naive Bayes step-- this is using Naive Bayes. Cool. And then you can guess what we're going to do: we are going to maximize this. Let's call this L. Maximizing L is the same as taking the arg max of log L. The log turns all the products into sums, with a log in front of each term: you get the sum over i of log p(y^(i); phi_y), plus a double sum-- over i from 1 to n and over j from 1 to d-- of the log of p(x_j^(i) | y^(i); phi_{j|y}). And then you analytically plug everything in. For example, for the first term, you know what p(y^(i); phi_y) is: it's equal to phi_y if y^(i) is 1, and 1 minus phi_y if y^(i) is 0. Each term can be written as a formula in the data and the parameters. And then you do the maximum likelihood. I'll just tell you the solution. You set the gradient of log L to 0-- technically I should write the gradient with respect to each parameter, for example nabla with respect to phi_y-- which is a necessary condition for a maximizer. You compute the gradients, solve that family of equations, and solving it gives formulas for the parameters. The final solution looks like this. phi_y is equal to the number of examples with y^(i) equal to 1, divided by n. This is pretty intuitive-- it's the fraction of positive examples-- and it's actually the same formula we saw for GDA. And then phi_{j|y=1}-- the probability that xj is 1 given y is 1-- also turns out to be something simple. Let me write down the formula and then interpret it: the numerator is the sum over i of the indicator that x_j^(i) is 1 and y^(i) is 1, and the denominator is the number of positive examples. So what is the numerator? The indicator is 1 exactly when the i-th example contains the j-th word and is a positive example. So basically, the numerator is the total number of positive examples in which the j-th word occurs, and the denominator is the number of positive examples. So you can see that even though we have done a lot of calculation and modeling, at the end of the day the formula is pretty intuitive: you are, in some sense, just counting. You're doing some statistics-- counting how many positive examples contain the j-th word, and dividing by how many positive examples there are in total. For example, suppose the word "book" shows up in 10 positive examples, and there are a million positive examples. Then this is 10 over a million-- 10 to the minus 5-- which kind of means that "book" doesn't have much correlation with positive examples. If this number is small, it means it's unlikely to see the j-th word in a positive example.
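Here is a minimal sketch of these closed-form maximum-likelihood estimates in NumPy-- assuming (hypothetically) that X is an n-by-d binary matrix of featurized emails and y is a length-n 0/1 label vector:

```python
import numpy as np

def naive_bayes_mle(X, y):
    """Closed-form MLE for Bernoulli Naive Bayes (no smoothing yet).

    X: (n, d) array with entries in {0, 1};  y: (n,) array with entries in {0, 1}.
    Returns (phi_y, phi_j_y1, phi_j_y0).
    """
    phi_y = np.mean(y == 1)  # fraction of positive examples
    # phi_{j|y=1}: fraction of positive examples containing word j
    phi_j_y1 = X[y == 1].sum(axis=0) / np.sum(y == 1)
    # phi_{j|y=0}: fraction of negative examples containing word j
    phi_j_y0 = X[y == 0].sum(axis=0) / np.sum(y == 0)
    return phi_y, phi_j_y1, phi_j_y0
```

Note that these unsmoothed estimates can be exactly 0, which is precisely the problem discussed next.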
So when the parameter is small, it means the word is unlikely in positive examples-- partly because, in the data, it just doesn't show up. And for the negative examples, it's the same-- it's symmetric: phi of j given y is 0 is equal to, well, you can guess what it is-- basically the same formula with the value of y changed to 0. Any questions? So basically, we are done-- at least almost done. There's one small thing we still have to address, but in terms of estimating the parameters, we are done; we've got the parameters. And at prediction time, as before, we do the prediction. As usual, you want to compute p of y equals 1 given x. If this number is larger than 0.5, you say it's a positive example; if it's less than 0.5, you say it's a negative example. So we have to compute this, and how do we compute it? Again, our general methodology: use the Bayes rule-- p(y = 1 | x) is p(x | y = 1) times p(y = 1), divided by p(x). This is still the same as before. But there is one small caveat here: what if you run into a 0-divided-by-0 situation? Before, we had Gaussians, and no matter what you do, a Gaussian density is nonzero everywhere-- sometimes it can be very small, but all of these quantities are strictly positive, so at least you get a well-defined ratio. But here, there might be cases where your p of x is just literally zero. Why can that happen? Let me give you an example. Suppose the word "aardvark" never appears in the training set, but your test example-- call it x-- does contain it. I'm going to claim that p of x will come out as 0. Why? Let's simulate the algorithm and see what phi we would compute. "Aardvark" is the second word in the vocabulary, so j is 2. You know x_2^(i) is 0 for every i-- that's the mathematical translation of "aardvark never shows up in the training set": for every example, the second word never shows up. Now plug that into the formula that estimates the parameter-- let's try to estimate the parameter phi_{2|y=1}. This parameter intuitively means: how likely is the second word to show up in a positive example? And recall that this formula is really the total number of positive examples in which the j-th word occurs, divided by the total number of positive examples. The numerator is 0, because there are no occurrences of the second word. So this will be 0. So far, that's still fine-- 0 divided by a positive number is fine. And phi_{2|y=0} is the same: it's 0 over the total number of negative examples, which is also 0. So basically, according to your MLE estimate, the word "aardvark" just cannot show up at all. Which makes sense, because it didn't show up in the training set: under this set of parameters, your estimate also says it shouldn't show up at all. But that's a problem, because now, if you compute p of x for an x that does contain the second word-- how do you compute this?
You use the law of total probability. You say p of x is equal to the y-equals-1 case plus the y-equals-0 case-- that is, p(x | y = 1) p(y = 1) plus p(x | y = 0) p(y = 0). The p(y = 1) and p(y = 0) factors are positive numbers; that's fine. But p(x | y = 1), using Naive Bayes, is equal to the product over j from 1 to d of p(xj | y = 1). And now, I know that x2 is 1, because this word does show up in this email. That means you're going to have the term p(x2 = 1 | y = 1) in this product-- and this is equal to phi_{2|y=1}, which is 0. So just because of the second word-- which, according to your model, is not supposed to show up, but does show up-- this example just has zero probability in your probabilistic model. That's why this term is zero. And for the same reason, p(x | y = 0) is also zero: you have the term p(x2 = 1 | y = 0) in that product, and recall that under the probabilistic model you learned, this word cannot show up at all-- the chance is zero. So because there's a zero in each term, you get zero eventually: you conclude that this example has zero probability under the probabilistic model you learned. And that's a problem, because now p of x-- the denominator-- is zero. Actually, the numerator is also zero, if you think about it, because the numerator is just one of the two terms in the decomposition of p of x. So both the numerator and the denominator are zero, and you have a 0-divided-by-0 situation. And what do you do? This is an issue, and it's a reasonably realistic issue, because sometimes you just see a new word in your test example that you haven't seen at all in the training set. The way we deal with this is so-called Laplacian smoothing. In some sense, this is a way to introduce a little bit of prior, so that you don't trust your training set 100%. Basically, what we were doing before is trusting the training set 100%-- religiously, in some sense: you haven't seen the word "aardvark" in the training set, so, because of that, you say this word shouldn't show up at all. That's why, if this word is in x, the probability of x is just zero. So what we're going to do is say: maybe we shouldn't trust the data exactly; we are going to allow any new word to show up a little bit, with some small chance. In some sense, this is a local adjustment using some prior knowledge, which is called Laplace smoothing. I guess the best way to describe this method is to start with something abstract. Let's forget about spam for the moment and just think about a simple example-- an abstraction, in some sense. Suppose you want to estimate the bias of a coin. You have a coin, and this coin is biased-- it's not 50/50. So how do you find the bias of the coin?
In mathematical language, it really means that you have a random variable z drawn from a Bernoulli distribution with parameter phi, and this phi is something unknown-- it's not a half. You want to know what phi is, and you want to know it by looking at some data: we draw a few copies from this distribution, and we want to estimate phi using the data. This is still a probabilistic model, so we can do the same thing: write out the likelihood and maximize it. So maybe let's try to do this. Suppose you have n trials-- call them z1, z2, up to zn-- and each of these is either 0 or 1. In my notes they're called tails and heads, so I'll use that: let's define tails to mean 0 and heads to mean 1. And you want to estimate what phi is. If you follow our general principle, you write out the likelihood. What is the likelihood of phi? It's the chance of seeing this data set given the parameter phi. The probability of seeing a 1 is phi, and the probability of seeing a 0 is 1 minus phi. So say the first draw is a tail: the probability of that is 1 minus phi. The second is also a tail: another factor of 1 minus phi. You might have a bunch of these, and you multiply in a phi whenever you see a head. That's the likelihood. If you organize this-- count how many (1 minus phi) factors there are and how many phi factors there are-- this will be (1 minus phi) to the power of the number of tails, times phi to the power of the number of heads. Actually, if you compare this with the spam model, you'll see the formulas are very, very similar; this example is a toy case of that, in some sense. So you get this, and then you take the arg max of the log likelihood-- let's call the likelihood l of phi, so you take the arg max of log l of phi. If you solve it-- setting the derivative of (#heads) log phi + (#tails) log(1 - phi) to zero-- you get the following: phi is the number of heads over the number of heads plus the number of tails. Which also makes sense, because this is basically the empirical frequency of seeing heads-- by empirical, I just mean the frequency with which you see heads in the training set. That's your maximum likelihood estimate for phi. Right. So far, everything seems to make sense. But now, let's consider a somewhat extreme case. What if all the examples you see are tails, and you don't have a lot of data-- suppose you just have three draws, z1, z2, z3, and they are all tails? According to this formula, the best estimate of phi is 0 over 0 plus 3, which is equal to 0. But do you really trust this? Do you really trust that this coin will just never give you a head-- at all, forever? That's what this estimate means. Should you really trust that? This is a little bit subjective. If you really, really trust the data, you can probably say yes. But maybe you have some prior knowledge that most coins are not that crazy-- there should be at least some chance of seeing a head.
So, on the flip side, you can say: OK, maybe this is just some coincidence-- it just happens that you saw three tails. Even if phi is a half, the probability of seeing three tails is 1 over 8. So maybe it's just a coincidence of the data set, and maybe you shouldn't trust the data set that much. The so-called Laplace smoothing is a way to incorporate, in some sense, a prior that says: look, my coin shouldn't be too extreme. So basically, Laplace smoothing refers to the following estimator: your phi will be equal to the number of heads plus 1, over the number of heads plus the number of tails plus 2. I haven't given you a mathematical justification for this formula-- there actually are justifications if you really look into it; for instance, it can be derived as the Bayesian posterior mean of phi under a uniform prior-- but so far, I'm just telling you the formula. And it would solve our problem, at least to some extent. For this particular case, you get 1 in the numerator, and 0 plus 3 plus 2, which is 5, in the denominator. So instead of getting 0, you get 1 over 5. So you still say, OK, the chance of seeing a head is pretty small-- smaller than the chance of seeing a tail-- but you don't have a very extreme estimate. And you can see that Laplace smoothing is most useful when you don't have enough data. If you have a lot of data, Laplace smoothing doesn't do much. Maybe let me give you an example. Suppose you have a big data set where the number of heads is, let's say, 60, and the number of tails is 100-- I'm making these numbers up. If you use the standard approach-- maybe I should call it the vanilla approach-- then phi would be 60 over 60 plus 100, which is 60 over 160, or 0.375. And if you use Laplace smoothing, then phi will be 60 plus 1 over 160 plus 2, which is 61 over 162-- about 0.377. These two are very similar, just because the plus 1 and plus 2 don't really matter that much when the dominating terms are the 60 and the 100. So you achieve a kind of balance: if you have enough data, then your prior-- the Laplace smoothing-- is not doing much; it doesn't really change anything, and you trust your data. And if you don't have enough data, then Laplace smoothing will try to keep your estimate from being too extreme. So now, let's go back to our problem. Let me see, where is the best place to write this-- oh, I erased the formula, but I will write it again. Going back to our spam filtering thing-- actually, there's one thing left here. Our other estimate was phi_{j|y=1} equals-- am I writing the superscript correctly? I think this should be i, my bad-- right. If you apply this to our case, recall that the numerator is the number of times the word shows up in positive examples, and the denominator is the total number of positive examples. So in some sense, the analogy is that this denominator is very similar to the number of heads plus the number of tails.
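A quick sketch comparing the vanilla MLE with the Laplace-smoothed estimate on the two scenarios just discussed (the numbers match the examples above):

```python
def mle(heads, tails):
    """Vanilla maximum-likelihood estimate of the coin bias."""
    return heads / (heads + tails)

def laplace(heads, tails):
    """Laplace-smoothed estimate: add 1 to the numerator, 2 to the denominator."""
    return (heads + 1) / (heads + tails + 2)

print(mle(0, 3), laplace(0, 3))      # 0.0 vs 0.2   -- smoothing avoids the extreme estimate
print(mle(60, 100), laplace(60, 100))  # 0.375 vs ~0.377 -- with lots of data it barely matters
```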
Because heads means the word shows up, and tails means the word doesn't show up. Here, the denominator is the total number of positive examples, and the numerator is like the number of heads: basically, heads means the word shows up in a positive example, and tails means the word doesn't show up in a positive example. That's why this is like the number of heads, and this is the number of heads plus the number of tails. And if you use Laplace smoothing, you add 1 to the numerator and 2 to the denominator. And the same thing for the other formula, for y equals 0: you add 1 and 2 there as well. So that's Laplace smoothing. And this solves our problem, because now we just never estimate any parameter phi to be exactly 0. Recall that when we had the aardvark issue, the problem was that those two parameters for aardvark were exactly zero. [STUDENT: Do we do the same smoothing for phi_y?] That's a fantastic question. The question was whether you want to do the same Laplace smoothing for phi_y-- the single scalar that describes the probability of each class. And you are right: suppose one class just never shows up, and you only have the negative class or the positive class; then you probably should use Laplace smoothing for that. But I think this is a little bit less important, because in most data sets you will see a reasonable number of positive and negative examples-- maybe 50 of each-- and then Laplace smoothing won't really matter that much. You can still use it, but it wouldn't matter much. So going back to this: recall that our problem was that, with the unsmoothed parameters, you get this zero-- you thought aardvark shouldn't show up at all; you get this very extreme estimate. And now, when you add the 1 and the 2-- sorry, I only have black pens, so I'll just modify the formula-- instead of having 0 over the number of positive examples, you get 0 plus 1 over the number of positive examples plus 2. This will still be a pretty small number-- 1 over (the number of positive examples plus 2)-- but at least it's bigger than 0. And the same thing for the y-equals-0 parameter: plus 1 on top and plus 2 on the bottom, so it's bigger than 0 too. And if both of these are bigger than 0, then when you evaluate the prediction formula, it's not going to evaluate to 0: our p of x will be some positive number, and you can get a well-defined number for p of y given x. And maybe just a very small extension-- we have two or three minutes, and this is just for interest. Suppose you have more than two classes: say you have a die instead of a coin, so z takes one of k values, and you have k choices. Then, if you don't do Laplace smoothing, what you have is that you estimate the probability of value j as the number of times z^(i) equals j, over the total number of examples: you count how many examples come up with the choice j, and you divide by the total number of examples.
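Putting the pieces together, here is a hedged sketch of the smoothed estimates plus prediction; the log-probability trick for numerical stability and the variable names are my additions, not something from the lecture:

```python
import numpy as np

def naive_bayes_fit(X, y):
    """Bernoulli Naive Bayes with Laplace (add-one) smoothing.

    X: (n, d) binary matrix;  y: (n,) 0/1 labels.
    """
    n_pos, n_neg = np.sum(y == 1), np.sum(y == 0)
    phi_y = n_pos / len(y)
    phi_j_y1 = (X[y == 1].sum(axis=0) + 1) / (n_pos + 2)  # never exactly 0 or 1
    phi_j_y0 = (X[y == 0].sum(axis=0) + 1) / (n_neg + 2)
    return phi_y, phi_j_y1, phi_j_y0

def predict_proba(x, phi_y, phi_j_y1, phi_j_y0):
    """Return p(y = 1 | x) via the Bayes rule, computed in log space."""
    log_p1 = np.log(phi_y) + np.sum(
        x * np.log(phi_j_y1) + (1 - x) * np.log(1 - phi_j_y1))
    log_p0 = np.log(1 - phi_y) + np.sum(
        x * np.log(phi_j_y0) + (1 - x) * np.log(1 - phi_j_y0))
    # p(y=1|x) = p(x|y=1)p(y=1) / (p(x|y=1)p(y=1) + p(x|y=0)p(y=0))
    return 1.0 / (1.0 + np.exp(log_p0 - log_p1))
```

Because the smoothed parameters are strictly between 0 and 1, every log is finite and the 0-divided-by-0 situation can no longer occur, even for words unseen in training.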
And if you use Laplace smoothing, you add 1 to the top and k to the bottom, where k is the number of choices. So this is just a small extension of Laplace smoothing. [In response to a question:] No, I don't think this course will use it again-- it's just for interest. OK, cool. I guess that's pretty much all for today. Any questions? OK, great. In the next lecture, we're going to talk about kernel methods, and then we're going to talk about deep learning. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Introduction_I_2022_I_Lecture_1.txt | So I am Tengyu Ma. This quarter, we are going to have two instructors-- me and Chris. I work on machine learning and machine learning theory, including the theory for different topics in machine learning: reinforcement learning, representation learning, supervised learning, and so on and so forth. I would like Chris to say something about himself-- whatever he wants to say. Yeah. I'm Chris. I'm also in the machine learning group. I'm really interested in how the systems we build are changing with machine learning. It's been a really interesting time for the last 10 years. I started a lot on optimization-- how we scale up these big models. That was when machine learning had very few applications in the lives around you. Over the last couple of years, we've built things that, hopefully, some of you in this room have used. My students contributed to things like Search and Gmail and Assistants and other places. And more recently, I'm really interested in how to make these models robust. And we'll have a great new lecture that Tengyu is going to give about what are called foundation models, or these large self-supervised models that are kind of all the rage. Percy and Tatsu and I cotaught a course about them last term. And this course is really exciting because it's giving you that absolutely foundational layer of machine learning that all that stuff is built on. So this is a great time to study it, because it's no longer abstract. You get to use machine learning products every day, and hopefully you'll get some insight into how they actually work and why there's still so much research to do. So I'm really excited and looking forward to lecturing you folks. Great. You'll see me and Chris alternate every few weeks. Next lecture you'll see Chris, and then after two or three weeks you're going to see me. So, for this lecture-- OK, the second thing is, let me introduce the teaching team. We're going to have 12 fantastic TAs, one head TA, and a course coordinator. [The head TA and course coordinator are named on the slide.] They will probably be doing most of the work behind the scenes; you don't necessarily have to interact with them very often. They are organizing the whole TA team. And we currently have 12 TAs-- probably we're going to have more if we have more enrollments. I didn't ask the TAs to show up in the first lecture, just because they would also have to wear masks, and maybe the pictures serve the same need. But you'll see them pretty often in office hours and different scenarios. Cool. So in this lecture, I'm going to spend the first part on logistics-- some of the basic structure of the course, and so on and so forth-- and then I'm going to introduce, at a high level, the topics covered by this course. We tried very hard to make everything available online, in a single doc, on a single website. So we have this course website, which has links to a few different Google Docs. One of them is about all the logistical stuff, another is about the syllabus, and there are also links to the lecture notes and to some guidelines on the final project. So, in theory, all the information I present today will be a subset of what you can find on the website.
And it's actually a very small subset. So I do encourage you to read through the documents to some extent, especially when you have questions-- first go see whether the documents answer them, and then feel free to ask us. So the first thing I'm going to talk about is the prerequisites. This course, as you will see-- at least some students in the past have said it is challenging; of course, some students say it's on the easier side; they have different backgrounds. I think that's why this is my first slide: it's important for you to have some background to be able to achieve your goals in this course. I think the most important prerequisite is probably some knowledge of probability, on the level of CS109 or Stats. For example, you should have at least heard of terms like distributions, random variables, expectation, conditional probability, variance, density, and so on and so forth. You don't necessarily have to know all of them off the top of your head, but they should be something you have seen in a previous course. Another thing is linear algebra: matrix multiplication, eigenvectors. Linear algebra was offered in Math 104, 113, 205-- actually, there's a longer list of relevant courses that taught linear algebra in the logistics doc. The most important things we need are matrix multiplication and eigenvectors. And we also require some basic knowledge of programming, especially in Python and NumPy. If you only know Python but not NumPy, I think that's pretty much fine, because NumPy is really just some basic numerical operations. And if you don't know Python or NumPy but you know, for example, C++, I think that's probably still fine, because migrating from C to Python is relatively easy, in my opinion-- mostly you just have to change the syntax. But if you know nothing about programming, I think that's probably going to be difficult, because a lot of the homeworks have a math part and a programming part. The most challenging thing I've seen in the past with homeworks is that, when you write a piece of code and something goes wrong-- which happens all the time; even when I write code, something always seems to be wrong-- you don't know whether it's about the syntax or about the math. These two things sometimes get entangled: you may think you derived the wrong equations, when actually you just didn't use NumPy in the right way. So we're going to cover Python and NumPy in some of the TA lectures, just to give you a refresher-- or, if you didn't know them, you can learn something from the TA lectures. But I think you need to have some basic programming knowledge. Yeah. We also have materials for the TA lectures; we're going to have three lectures on these topics-- programming, linear algebra, and probability-- to review some of the background for you. This is a mathematically intense course-- at least, a good portion of students found it mathematically intense, depending, of course, on your background. So that's just a heads-up.
So it's probably good for you to know at least two of these three things relatively well, so that you don't get entangled issues when you do the homeworks. But that's also why this is exciting and rewarding. With that said, the goal of this course is to give you the foundations of machine learning-- the foundational layer. So this is simultaneously an introductory course to machine learning: we don't require you to have taken a machine learning course before this. But on the other hand, we hope that after you take this course, you feel comfortable that you know enough of the basics of machine learning to apply it to some applications. Of course, if you really want to be an expert in particular applications like NLP and vision, you probably have to take those courses. But this course will set up the foundations for the machine learning component of AI in general, or of other applications of AI. That's why this course actually covers a diverse set of topics and does involve some mathematics. We don't have many mathematical proofs-- maybe a little bit, but very few. But we do have a lot of mathematical derivations: you'll probably have to do some derivations in the homeworks, and we're going to do derivations in the lectures as well. By the way, if you have any questions, just feel free to stop me-- I'm happy to answer. [In response to a question:] Yes, the lectures are recorded, and you can find the recordings on Canvas, I believe. So the second important thing I want to mention is the honor code. It's probably a little bit awkward to bring this up so early, and I think the reason is that, in the past, unfortunately, there were some issues with honor code violations-- let's be frank. I don't want to see them; it's very sad for me to have to report students for honor code violations, but that happened in the past. That's why I want to put this up front. If you don't intentionally violate the honor code, I don't think there's anything you should worry about. But anyway, let me briefly go over this-- it's actually a subset of what's on the course website, but I think these are the important points. On one hand, we do encourage you to have study groups, so you can collaborate with other people on homework questions. You can discuss homework problems in groups, but you have to write down your solutions independently, and you also have to write down the names of the people with whom you discussed the homework. I'm copying this from the logistics doc, which is a little bit longer-- you should probably read that piece of text in the doc as well. It's an honor code violation to copy, refer to, or look at written or code solutions from a previous year, including but not limited to official solutions from a previous year, solutions posted online, solutions you or someone else may have written up in previous years, or solutions for related problems. If you apply common sense, you should be fine. As long as you don't intentionally do anything bad, don't be stressed out about it. But on the other hand, there were reported honor code violations in the past.
So we do check the code using some software, and we also have TAs who handle honor code violations. Anyway, I don't want to give you too much stress about this, but I do want to put it up front here. OK, another component of the course, besides homework-- homework is obvious; everyone knows why we have homeworks-- is the course project. We encourage you to form groups of one to three people, and the criteria are the same whether your group has one, two, or three people. There is more information on the course website. Typically, you'll apply machine learning to some application or some topic you are interested in. This is actually one thing that I really like about this course: at the end of every quarter, we get probably 100 project submissions, and we see all kinds of topics-- all kinds of applications of machine learning. These are just topics we have seen in the past, and you are welcome to work on other topics as well. Of course, you can also work on pure machine learning algorithms-- that's also fun-- but many people work on applications of machine learning to other areas, like music and finance, which are interesting. OK, great. So we have homeworks-- four homeworks, you'll see-- and we are also going to have a midterm. There is no final exam. So the midterm, the course project, and the homeworks: those are the main components of the course. Another component is the TA lectures. These are optional; you don't have to attend them if you don't find them useful. There are actually two sets of TA lectures. One type is the so-called Friday TA lecture, or Friday section. We're going to have probably six to seven weeks of these lectures. The first three weeks will review some of the basics, especially the basic concepts related to machine learning, and the other weeks are about more advanced topics, which are not required for the course but may be interesting to some subset of you. And we also have the discussion sections. The goal of these is to have some interactive sessions. Our course is pretty big-- you can feel free to ask questions, but it's a little less interactive per person compared to other courses. So we're going to have these small sessions led by TAs, whose goal is to imitate a more traditional classroom setting and to work on bridging the gap between the lectures and the homeworks. Basically, the TAs will largely work through problems that are very similar to the homeworks, or sometimes simpler, so that, if you need it, they help make it easier for you to solve the homework questions and the midterm. These sessions will be more interactive: the TAs will probably have you do some questions live, and maybe present your solutions and discuss with other students, and so on and so forth. The exact times and format can be found in the Google Doc about the logistics. OK. So there is a lot of other information on the course website and in the Google Doc-- the doc is actually pretty comprehensive. For example, the recordings can be found on Canvas.
There's a course calendar on Canvas. There's a syllabus page that links to the lecture notes. And we're going to have Ed, the platform for question answering. We do encourage you to use that to communicate with us-- in almost all situations, you should use Ed. You can make private posts or anonymous posts-- different types of posts-- depending on what you need. And if you don't have access to that, you can email the head TA to give you access. And there's Gradescope, which is used to submit homeworks. There are also some late day policies, which you can find in the doc as well. One thing I need to mention here as a heads-up is that we don't allow late days for the final project. The reason is that, especially for spring quarter, the grading deadline is very tight-- it's pretty much just a few days after the final exam week-- and especially because some students have to graduate, the timeline is very, very strict. And we don't want to make the final project deadline very early, because then it would conflict with the homework deadlines, and so on and so forth. So the final project deadline-- I think it's on the Monday of finals week; double-check that-- we tried to put as late as possible. But because of the final grading deadline, we don't allow late days for the final project. There are some other FAQs in the Google Doc as well. Any other questions before I move on to the more scientific topics? [STUDENT: For the discussion sections, will we be assigned to a specific session, or do we get to choose which one to go to?] Right. So currently, we have two TAs offering two discussion sessions. We will try to make sure the materials in the two sessions are pretty much the same. I think we haven't set the times yet. You can feel free to choose whichever session you want to go to. It's probably best for you to consistently go to one session-- maybe the TA will get to know you better-- but you don't necessarily have to. And this is also optional; you don't have to go to all of them, depending on your needs. Other questions? OK. Sounds great. So then, I will move on to the more scientific part of the course. As I said, the main goal of this course is to set you up with the foundations of machine learning, and we're going to cover a pretty diverse set of topics in machine learning in a somewhat mathematical way. So let me start with some definitions of machine learning. What is machine learning? As you can imagine, when you are speaking about such a hot topic that people are constantly researching, there is probably no unique definition that can fit everything. But I tried to find some historical definitions of machine learning, which I think describe the field pretty well. In 1959-- I think this is probably the first time the phrase "machine learning" was introduced, by Arthur Samuel-- he says that machine learning is the field of study that gives the computer the ability to learn without being explicitly programmed. So "without being explicitly programmed" is probably the important part. For example-- I guess this is in the paper titled "Some Studies in Machine Learning Using the Game of Checkers-- Recent Progress."
I don't exactly know what the game of checkers is, so don't ask me about the rules of the game. But the point is that if you explicitly write a piece of code that plays checkers, that doesn't really mean you are using machine learning, right? If you just say, "I have this fixed strategy that I know is actually very good for checkers-- the first step would be this move, and the second step would be that move"-- and you explicitly code that into your computer with some branching logic, that probably doesn't count as machine learning. If you use machine learning, you have to rely on the computer to learn without being explicitly programmed. So you shouldn't have explicit programming-- but then how do you learn? How do you give the computer the ability to learn? I think the second definition of machine learning, by Tom Mitchell, gives more context about how you really let the program learn without being explicitly programmed. It says that a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. The rhythm is kind of nice. So I guess there are several important concepts in this passage. One is the experience E. Let's keep using this example of the game of checkers. The experience, in this case, could mean, for example, the data: this could possibly be games played by the program against itself, it could be games played by humans in the past, or it could be other data you collected from other sources of information. Here, it's mostly data collected by the program playing itself, or games played by humans. So the experience mostly means data. And there's a concept called the performance measure, which is important in machine learning. Of course, there's no unique performance measure-- for different tasks, you have different measures of performance. But this metric, the performance measure, is pretty important. Here, the performance measure could be the winning rate. It could be the winning rate plus, for example, the number of moves you play-- you probably want to win as fast as possible. In some other cases, the performance measure could be how accurately you can predict something. So you can define many performance measures. And actually, in machine learning, if you look at the research, some papers are about understanding what the right performance measure is-- what's the right way to formulate the problem-- and some papers are about, given the performance measure, how you make the performance as good as possible. And then there is this last sentence, where it says: "if its performance at the tasks in T, as measured by P, improves with experience E." What does "improves with experience E" mean? I think it means that if you have more experience-- if you have more data-- then your algorithm should have better performance. That, in some sense, is evidence that you learned something from the experience. If you have more and more experience and your performance has not improved, maybe that doesn't really mean you learned something, right?
That could probably just mean that you explicitly programmed some strategy, and that strategy wouldn't improve as you get more experience. So in some sense, these last few words indicate that you are learning from the experience. All right. So I guess the final thing is the tasks, T. Here, the task is really winning-- it's this context of playing the game, right? And we are going to see many different types of tasks, actually, just in this lecture: tasks about predicting a label given an input, tasks about finding certain structures in the data, or tasks like this one, where you want to make decisions about how to play the game. In any case, feel free to stop me-- just raise your hand, and I'm happy to answer any questions. This lecture is supposed to be very high-level, so feel free to ask any questions. So, speaking of tasks-- this is a pretty simplistic view based on tasks: how do you build a taxonomy of machine learning? I don't think everyone agrees with this 100%, but it's a reasonable baseline as a high-level taxonomy: supervised learning, unsupervised learning, and reinforcement learning. I'm going to introduce these separately. But it's not as if these tasks are completely separate-- probably the real figure would have some overlaps. In reinforcement learning, you probably have to use supervised learning as a component. And as I said, for many machine learning problems, people are trying to figure out the right way to formulate the question to solve the application; maybe for some applications, you have to use two of these together. So they are not only tasks-- sometimes they can also be viewed as tools or methods to solve your question. Maybe some questions require a formulation that involves all three of these ingredients in some way. But as a first-order approximation, you can think of them as three roughly separate types of tasks. So I'm going to introduce supervised learning first. Actually, in the lecture, we are going to use this house price prediction as a running example. The idea is that I'm going to introduce this in a relatively abstract way, but you can think of house price prediction as the application. So what you are given is a data set that contains n examples. And what are these n examples? They are n pairs of numbers-- n pairs, where x could be a vector or a number. Let's say x is a number and y is also a number. So you have n pairs of (x, y) numbers, and you can actually draw these numbers here. You have a scatterplot, where every cross is really just one (x, y) pair. As shown here in the caption, the square feet is x, and y is the price. So basically, for every example, it's a pair of square feet and price. You are trying to use the square feet x to predict the price y. That's the task. And using Tom Mitchell's language, this data set is the experience. In more everyday language, we call this data set our data. So basically, our goal is to learn, from the data set, how to predict the price given the square feet of the house. So, basically, if x is 800, then what is y? And this x might not be seen in the data set. If your x already shows up in the data set, then it's easy-- you can just read it off.
But x could be something that you haven't seen in the data set. And one way to do it-- you have probably seen this in some other lectures or other courses-- is that you can do a linear regression. You fit a line. And then, when you predict, you just read off the corresponding number on this line: what is the corresponding y when x is 800? And of course, you can do other things. For example, you can try to fit a quadratic curve, which, actually, in this case-- in this artificial example I created-- will fit the data better. And in lectures two and three, I think our goal will be to discuss how you fit a linear model and how you fit a quadratic model to the data to predict the house price. Of course, house price prediction is only one application. You can imagine many other applications where you are given a data set of x, y pairs, and your goal is to predict y given x. For example, we can simply make the house price prediction problem a little more complicated, right? So we said that, in the previous slide, we use the size to predict the price. But actually, you probably know more about the house, right? You know, for example, the lot size. And maybe you know other things, right? So, for example, suppose you also know the lot size. Then your goal could be to predict the price using size and lot size. And we call these different dimensions of the input x features. So size is one feature, and lot size is another feature. So now, you have two features of the particular house, and you want to predict the price based on the two features. And now, your data, if you draw them-- there will be three dimensions, x1, x2, and y. And then, you can plot them in this three-dimensional graph. And as I said, the things you are given in the task-- the size and lot size-- are called features or inputs. And in this case, these features are two-dimensional. And typically, people call the price the label or output. And you are trying to find a function which maps the input to the output. So actually, another heads-up is that, in machine learning, almost every concept has more than one name. So you're going to see that some people call these features, some people call these inputs. And in some other cases, probably, you have other names for additional things. We'll try to be comprehensive. I'm going to tell you what the different names are, but we're going to use one of them. I think in the lectures, mostly, we are probably going to use input, because input and output are a little bit less ambiguous. Features, actually, sometimes could mean other things as well. And again, now everything is the same in terms of the mathematical notation. The only difference is that, now, your x is a two-dimensional vector. Let me explain the notation here a little bit, which will be used consistently in the lectures. So the superscript here denotes which example you are talking about. It's the index for the example. And the subscript here denotes the coordinates of the data. So x superscript i is a two-dimensional vector, and x superscript i subscript j is the j-th coordinate of that two-dimensional vector. And also, sometimes, like I have said, the price-- the y-- is called the label or output. And sometimes, they are also called supervisions. Generally, if you say supervision, that means the set of labels, right? That's why this is called supervised learning, because you do observe some labels in the data set.
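To make the regression example concrete, here is a minimal sketch of the linear and quadratic fits just described, using NumPy least squares. The house numbers are hypothetical, made up purely for illustration.

```python
import numpy as np

# Hypothetical training set: x = square feet, y = price (in $1000s).
x = np.array([500, 1000, 1500, 2000, 2500], dtype=float)
y = np.array([100, 160, 230, 280, 310], dtype=float)

# Fit a line y ~ a*x + b by least squares.
a, b = np.polyfit(x, y, deg=1)

# Fit a quadratic y ~ c2*x^2 + c1*x + c0.
c2, c1, c0 = np.polyfit(x, y, deg=2)

# Predict the price of an unseen 800-square-foot house with both models.
x_new = 800.0
print("linear prediction:   ", a * x_new + b)
print("quadratic prediction:", c2 * x_new**2 + c1 * x_new + c0)
```

Either model answers the question posed above-- what is y when x is 800-- even though 800 never appears in the data set.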
And also, the data set-- sometimes, people call it the training data set or training examples. There are multiple names for it. Any questions? And you can also have high-dimensional features. Before, we only had two dimensions. But actually, in many cases, if you have a house listed online for sale, then you probably know a lot more about the house. And then, you can have a high-dimensional vector-- say, a d-dimensional vector-- and each dimension means something, right? Maybe the number of floors, the condition, the zip code, so on and so forth. And you use this high-dimensional vector to predict the y, the label that you are trying to predict. And in lectures 6 and 7, we are going to talk about infinite dimensional features, actually. So in some cases, you can combine these features into a lot of other features, where you can say, I don't use x1 as my feature, but I actually use x1 times x2 as my feature-- living size times the lot size. I don't think that makes a lot of sense for this application. But in some other cases, maybe you can take the product of two raw features-- two dimensions of the input you have-- and use that as a new feature, right? And we're going to talk about how you deal with infinite dimensional features, as well. And in some of the other lectures, we're going to talk about how you select features based on data. So maybe not all of these features are useful. If you use all of these features, then maybe you can overfit, which is a concept we're going to talk about. You may kind of be confused if there is too much information available. So you may want to select what is most important. So maybe, I don't know, all of these seem to be important, but maybe there are some other features that are not important for price prediction. And there's another concept that I'm going to introduce in the first lecture. We'll talk about this later, as well. So typically, there are two types of supervised problems. This distinction is based on what kind of labels you have, right? So one type is called a regression problem. These are problems where your label y is a real number. So you are predicting, for example, something like a price, right? So this is a continuous variable. And there's another type of question, which is called classification. These are cases where the labels are a discrete variable. What does that mean? That means that your labels are, say, two labels, yes and no, right? So your label set is just a discrete set with two choices, yes and no. For example, in this case, you can change the question. If you are given the size and lot size, you can ask, what's the type of this residence, right? Is it a house or a townhouse? So it's not a continuous prediction problem. It's really just predicting one of the two choices. And you can make this problem more complicated. For example, you can have multiple choices here, not only just two choices. And then, in this case, one way to plot the data set is the following. So now, you have a two-dimensional graph where the x-axis is the size and the y-axis is the lot size. And then, for every dot, if it's a triangle, it means it's a house. And if it's a circle, it means it's a townhouse. And that's at least one way to visualize a classification data set, where the labels are discrete. You just use the triangle and circle to indicate the label of each example. And then, the kind of question you want to solve-- sorry, my animation.
The question you want to solve is that, now, if you give me a house, which is a two-dimensional vector with the size and lot size given as the input, what's the type of this residence-- whether it's a house or a townhouse. And one way to do it is-- OK. Oh, I see. So you can fit a linear classifier that distinguishes these two types of dots. And then, your answer here would be, naturally, house, because it's on this side of the line. So it probably should be consistent with all the other examples on the same side of the line. So I guess some lectures will be about classification problems. And in the next few slides, I'm going to talk about some broader applications of machine learning, which we won't necessarily cover. One is image classification. I think, probably, we're going to have one homework question on image classification. So the type of question is that you are given all of these images, and every image has a label which describes the content of the main object in this image. Of course, in other cases, you may have multiple objects in the same image. But here, let's focus on a simple setting where every image has a single, important object. And then, your label is basically describing what this object is. And you are given this data set. This is actually a real data set created at Stanford by a team led by Professor Fei-Fei Li, which is called ImageNet. This is a very important data set-- you probably should remember its name, because this is pretty much the data set that, in some sense, made deep learning take off in the last 5 to 10 years. After the creation of this data set and some of the new deep learning algorithms with neural networks, we saw machine learning take off, and we were able to make a lot of progress because of the data set. And speaking of the data set, here, I'm only trying to say what the format is, or what the task is, right? So basically, your x is the raw pixels of the image, where you represent the image as a sequence of numbers-- actually, here, a matrix of numbers. And then, your y is the main object of the image. And you can have other kinds of tasks in vision. For example, object localization or detection, right? So given an image, you can ask, how do I localize-- find-- each of the important objects with a bounding box? We are not going to cover anything like this because these are more specific to the vision applications. So here, the thing is that your y becomes a bounding box. So how do you represent this box? You don't have to know this. But if you are interested, the way to do it is to represent the box by the coordinate here and the coordinate here. And these two coordinates-- two points, four numbers-- will describe the box. So basically, the y will become four numbers instead of just one number. And you can have even more complex labels or y's in other applications. For example, in natural language processing, which is the area that deals with language problems-- so, for example, machine translation-- you can have this problem where you want to translate, for example, English to Chinese. I don't know what happened with my pointer. So your x is the English sentence, and your y is the Chinese sentence, right? Or sentences in any other language. And now, you can see that the y-- even though y is discrete, the family of y is the family of all possible sentences in Chinese, right?
So y looks discrete, but y is much more complicated than in the house versus townhouse application, right? So you have so many choices of y-- an almost exponential, practically infinite number of choices. So then, you have to deal with them in some different ways. I think we are going to cover a little bit about machine translation or this kind of question in one of the lectures that we added this year. I guess Chris mentioned that. We are going to talk a little bit about large language models for language applications. But on the other hand, this course only covers the basics-- the foundational techniques of supervised learning. So we're going to talk about language applications, but if you really care about a particular application and how to solve it the best way, then you probably would have to take some other, more specific courses for that particular application. OK. So before I move on to unsupervised learning, any questions about supervised learning? In the translation case, would you say it is a regression or classification problem? So would you say it's a regression problem or a classification problem? I think I would say it's a classification problem, because the family of y is still, technically, discrete, right? You still have a finite number of possible y's, because I assume you can say, let's say, the number of Chinese sentences is finite, even though the number is very large. But this is a good question, because you cannot treat this simply-- you have to treat it in a slightly different way from the most vanilla classification problems. Because if you view this as a vanilla classification problem, then you're going to get into other issues, right, just because the set of y's is too big. When will you use infinite dimensional features? When do you use infinite dimensional features? So I think I might not have a very clear answer right now, because this does depend, a little bit, on some of the other things we're going to teach. But generally-- OK, first of all, how do you create an infinite number of features, right? So you have to create them from x, right? I think I alluded to this a little bit at some point. So, for example, suppose you have this number of features, right? Maybe there are hundreds. So now you have a hundred features. How do you create more features? You're going to use combinations of the existing features, right? And you can come up with a lot of different combinations. So you can have, for example, xd to the power of k. And k could be any integer, right? So that's how you create an infinite number of features. And why do you want to use them? Sometimes, it's because you don't know which one is the best. So you just say, I'm going to create all the possible features I can think of, and I'm going to let the machine learning model decide which feature is the most useful, or how to combine these features. So that's why we use infinite dimensional features. In most cases in reality, you don't have to literally use infinite dimensional features. After you run the algorithm, you find out that some features are more important than others. But before we run the algorithm, we don't know which one is useful. So you let the machine learning algorithm figure out which one is the most useful. And actually, one of the interesting things is that even if the dimension of the features is infinite, it doesn't really mean that your runtime has to be infinite.
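As a concrete, finite illustration of the feature-construction idea in that answer-- building powers and products of the raw inputs and letting the learning algorithm sort out which ones matter-- here is a small sketch. The inputs and the particular combinations are hypothetical choices, not a recipe.

```python
import numpy as np

def expand_features(x, max_power=3):
    # Powers of each raw coordinate: x_i, x_i^2, ..., x_i^max_power.
    feats = [x[i] ** k for i in range(len(x)) for k in range(1, max_power + 1)]
    # Pairwise products, e.g., living size times lot size.
    feats += [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.array(feats)

x = np.array([1500.0, 4000.0])  # e.g., living size and lot size
print(expand_features(x))       # 7 derived features from 2 raw inputs
```

Letting max_power grow without bound is one way to see how the feature count can become effectively infinite.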
So there are some tricks to reduce the actual runtime. So even though you are implicitly learning with infinite dimensional features, your algorithm's runtime and memory-- all of these are actually finite. And sometimes, they could be pretty fast, in some cases. These are great questions, yeah. Thanks for all the questions. Any other questions? OK. So the second part of the course will be about unsupervised learning. I think Chris will probably give about five lectures on unsupervised learning. So in unsupervised learning, if you still use the house prediction data set as an example, the basic idea is that you are only given a data set without labels. You only see the x's, but not the y's. So you don't know at what prices these houses in the data set were sold in the past. So suppose we're still using this townhouse versus house example, right? If it's supervised, you have these triangles and circles here to indicate what the labels are. But if it's unsupervised, you just don't have that part of the information. You just see this bunch of dots here in the scatterplot. But even if you just see this, as a human, once you see this, you can somehow tell that, OK, this bunch of points here is very different from this bunch of points here. So maybe there are two types of residences going on here. Even as a human, right, even though you don't see the triangles and circles, you are still kind of able to tell there is something going on there, right? So that's the nature of unsupervised learning. We want to be able to discover interesting structure in the data without knowledge of the labels. So you want to figure out the structure hidden in the data. So, for example, in this case, what you can do is try to cluster these points into groups. You want to divide these points into groups, and you want to say that each group probably has some kind of similar structure. So this probably looks like a very good clustering because, at least as a human being, you probably wouldn't cluster it like this. But a good algorithm probably would produce this. And if you produce this, then essentially you have figured out that there are two types of residences in this data set, even though you don't know the names of these two types, because the algorithm wouldn't know the words townhouse or house. But the algorithm knows there are two types of things going on here in the data set. And in lectures 12 and 13, we are going to talk about a few different algorithms for discovering these structures-- k-means clustering and mixtures of Gaussians. And there are other kinds of applications. For example, I think this is a paper by Daphne Koller's group-- she is an adjunct professor here at Stanford. So here, the application is about gene clustering. So I think the idea is that you have a lot of individuals, and for this particular part of the genes, you can group the genes of individuals into different groups. And you can see that-- I guess even visually, you can kind of see-- not sure what's going on with my-- OK. Cool. So you can see that there are some kind of clusters here. And it turns out that each of these clusters corresponds to how the individuals would react to a certain kind of medicine. And once you can group people into groups, then you can probably apply the right kind of treatment to each group of people.
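Here is a minimal sketch of the k-means algorithm mentioned above, which the later lectures cover properly. The two clumps of points are hypothetical, standing in for the two unlabeled residence types.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, num_iters=50):
    # Initialize centroids at k randomly chosen data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(num_iters):
        # Assignment step: each point joins its nearest centroid's group.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its group.
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two hypothetical clumps in (size, lot size) space, with no labels given.
X = np.vstack([rng.normal(loc=[1.0, 1.0], scale=0.3, size=(20, 2)),
               rng.normal(loc=[5.0, 5.0], scale=0.3, size=(20, 2))])
labels, centroids = kmeans(X, k=2)
```

The algorithm recovers the two groups without ever being told the words house or townhouse-- exactly the point made above.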
Now, here is another example, which is probably a little easier to understand. This type of question is called latent semantic analysis-- I don't expect you to understand what each of those words means. It's just a name, LSA. So the idea is that you look at a bunch of documents. And every document has a lot of words, right? And you look at which words show up in which documents, and how many times. So each entry here-- suppose you pick one entry-- means how often the word power shows up in the corresponding column, the document-- Document 6, there. So every entry is how often the word shows up in the document. And if you look at this, it doesn't look like there's any pattern here, right? So what's the structure? It's unclear. But if you use the right machine learning algorithm, what happens is that you can reorder or regroup these words and documents in the following way. Let me see that the video is working. Right. So basically, you permute the documents and words. And then, you see this interesting and somewhat block diagonal structure. Not very prominent, but still interesting enough. And now, you can see each of these blocks has some particular, interesting meaning. For example, here, this group of documents and words is clearly about something like space shuttles, right? So shuttle, space, launch, booster. These are all about space-travel kinds of things. So basically, you know that these four words have similar meanings, and these four documents-- or three documents-- are about this topic. So by doing this, in some sense-- at least, in this application-- you can figure out the topics in your data set, right? So you can figure out, probably, here, there's one topic, two-- one, two, three, four, five topics. And each topic is more likely to be associated with certain types of words. And every document is, most likely, about one topic. And sometimes, it's about two topics. And then, what happens is that, once you figure out these topics, you can use some humans to interpret what each of these topics is. And then, given a new document, you can figure out what topic this new document is about. And this, actually, is a very popular tool in many of the social sciences-- actually, I was involved in some such projects myself during my PhD. So the social scientists have some text, right? Maybe they have a lot of blog posts about politics, right? And they want to understand, for example, what trends happen in the blog posts. So suppose you want to understand that. Then, you have to know what the topics of each of the blog posts are, right? You don't want to label them each, one by one, because maybe they have a million blog posts, right? So they use this to group the blog posts in certain ways. And then, they can do statistics to understand what happens with all of these blog posts. And this applies to other kinds of things beyond politics. Actually, you can apply this to, I think, many fields, like history-- what else? Like psychology, so on and so forth. And this was actually an algorithm discovered, probably, 20 years ago, or maybe even earlier than that. Maybe 30 years ago. And it is still pretty popular in social science.
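A rough sketch of the LSA idea just described: factor the word-document count matrix with an SVD and keep only the top few directions, which play the role of topics. The tiny count matrix here is made up.

```python
import numpy as np

# Rows = words, columns = documents; each entry counts how often the
# word appears in the document (hypothetical numbers).
counts = np.array([[3, 2, 0, 0],   # "shuttle"
                   [2, 4, 0, 1],   # "launch"
                   [0, 0, 5, 3],   # "election"
                   [0, 1, 4, 2]])  # "vote"

U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                             # keep the top-2 "topics"
word_topics = U[:, :k] * s[:k]    # each word as a k-dimensional topic vector
doc_topics = Vt[:k, :].T          # each document as a k-dimensional topic vector
```

Words with similar rows (shuttle and launch, election and vote) end up with similar topic vectors, which is the block structure the permuted matrix makes visible.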
Of course, there are even more advanced algorithms these days, beyond this, which we are also going to discuss. And this is actually one of the more recent advancements-- I think this was around 2013, 2014, about seven, eight years ago. So what happens here is that you have a very, very large unlabeled data set, which is Wikipedia. So you just download all the documents from Wikipedia. There is no human labeling, right? They are just raw documents. And what you do is that you learn from these documents using some algorithms. And what you can eventually produce is the so-called word embeddings. So you can represent every word by a corresponding vector. And why do you want to do that? The reason is that these vectors are, basically, numerical representations of the discrete words. And there are some nice properties about these vectors that capture the semantic meanings of the words. So what happens is that similar words will have similar vectors. That's what I mean by the word being encoded in a vector. So similar words will have similar vectors, and also, the relationships between words will be encoded in the directions of the vectors. This sounds a little bit abstract. Maybe it's easier with this figure. So this actually happens in reality, right? So if you look at the vectors, each point is the vector for that word, right? So Italy has a vector, and the vector, let's say, is this point. And France has a vector, and Germany has a vector. So you'll find that the vectors for all the countries are in somewhat similar directions. So, for example, suppose you have another country, USA. Then you probably would find the point is somewhere here, nearby. So all the countries have vectors that are in similar directions. And all the capitals are also in similar directions. So this is what I mean by the vectors encoding some kind of semantic similarity between the words. And also, interestingly, the directions encode some kind of relationship. So here, what happens is that, if you look at the difference between Italy and Rome-- right, this direction, this is the difference between Italy and Rome-- and you also do the same thing with Paris and France and Berlin and Germany, you will see that these three directions are very similar to each other. They are roughly parallel. So at least one application of this is that, suppose you are given, let's say, US, which is a vector here, and you want to know, what is the capital of the US? You probably should go along this direction to search for a point. And that's likely to be the capital. Maybe you'll find DC or Washington. I guess you'll find DC there, because I think Washington is ambiguous, which is a little trickier. Actually, this is an interesting thing, right? So the Washington vector would be tricky. It's not clear where the vector for Washington will be, because Washington has multiple meanings, right? It's a state. It's a person. So sometimes, you have this ambiguity. And then, you can have these kinds of more complex clusterings of the words. So here, I guess what happens is that you can also use these vectors to cluster the words into groups. So, for example, you have these scientific words, some of which I don't even know. And then, you can use clustering algorithms on the vectors to figure out what kinds of topics or what kinds of scientific areas they belong to.
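Here is a small sketch of the direction-arithmetic point, with made-up two-dimensional word vectors; real embeddings have hundreds of dimensions, and the specific numbers below are purely illustrative.

```python
import numpy as np

vec = {"italy": np.array([4.0, 1.0]), "rome":  np.array([4.5, 3.0]),
       "france": np.array([3.0, 1.2]), "paris": np.array([3.5, 3.2]),
       "usa":   np.array([5.0, 0.8]), "dc":    np.array([5.5, 2.9])}

# Guess the capital of the USA by moving along the country-to-capital direction.
query = vec["usa"] + (vec["rome"] - vec["italy"])

def most_similar(q, candidates):
    # Pick the candidate whose vector has the highest cosine similarity to q.
    cos = {w: q @ vec[w] / (np.linalg.norm(q) * np.linalg.norm(vec[w]))
           for w in candidates}
    return max(cos, key=cos.get)

print(most_similar(query, ["paris", "dc", "france"]))  # "dc" with these vectors
```

With real embeddings, this is exactly the country-capital analogy experiment described above.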
And you can also have hierarchical clusterings to deal with certain kinds of overlaps, because, for example, mathematical physics probably would be close to both the physics vectors and the math vectors-- somewhere in the middle of the math and physics vectors. So there are many different kinds of interesting structures in all of these word vectors, which you can leverage to solve your tasks. I'm a little conflicted here, just because doing all of this exactly requires a bit more than we have discussed. But we will discuss some of this in later lectures. And most recently, in the last two or three years, there is a new, say, trend-- there's a new breakthrough in machine learning, which is these large language models. I think many of us are very excited about it. Chris has mentioned that. And at Stanford, you actually have a lot of people working on these large language models. Roughly speaking, these are machine learning models for language, and they are learned on very large-scale data sets-- for example, Wikipedia, as I discussed before, or sometimes even bigger than Wikipedia. So you can download a trillion words, or maybe 10 trillion words, online, because there are so many online documents. And you collect all of these documents, and you learn a gigantic model on top of them. These are going to be very, very costly. Even training a single model once would probably cost you $10 million. So they are very costly, but they are very powerful, because they can be used for many different purposes. And particularly here, I'm talking about this breakthrough called GPT-3. You're going to hear this name, probably, pretty often. Not very often in the lectures-- pretty often in general, in the next few years, I think. Or you probably already heard of it. In this course, we're going to talk about this in one lecture. So GPT-3 is this gigantic model, and it can do a lot of things. I'm taking this example from their own blog post. So you can use GPT-3 to generate stories. I think here, what happens is that a human-- some person-- writes this opening paragraph about something, some, I guess, mountains or valleys. And then, the machine learning model can just generate a story-- some very coherent and meaningful text afterwards. If I didn't tell you these were generated by a machine, you probably wouldn't know. You probably would guess this was written by some author. So that's one application-- one way to use this model to generate stories. And you can also use this model to answer questions, right? So here, you give the model this long paragraph, and then you can ask-- it's kind of like an SAT-- I'm not sure. This is like a GRE question. I'm not sure whether all of you know the GRE. So these are just basic question-answering questions about the passage, right? You can ask, what is the most populous municipality in Finland-- and this is information you can find in the document-- and it would answer the right thing. So that's another application. And you can also use this to do other things. For example, you can just write in the text, saying, please unscramble the letters into a word and write that word. And then, you give this to the model, and the model will just change the order of the letters to make it a meaningful word. And you can ask, for example, simple numerical questions-- what is 95 times 45-- and it gives you the right answer.
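Schematically, all of these uses go through the same interface: you write the task in plain text, and the model completes it. In the sketch below, generate is a hypothetical stand-in for a large language model's completion function-- it is not a real API-- and the prompts paraphrase the examples above.

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in: a real language model would return a
    # text completion of the prompt here.
    return "<model completion>"

# One model, three tasks, each specified only in plain text.
story = generate("Once upon a time, in a quiet mountain valley, ")
answer = generate("Passage: ...\n"
                  "Q: What is the most populous municipality in Finland?\nA:")
word = generate("Please unscramble the letters into a word: lpape\nAnswer:")
```

The point is that nothing in the model is hard-wired to any one of these tasks; the prompt alone selects the task.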
So what's amazing about this is not just that it can solve each of these tasks. The amazing thing is that you learn on this gigantic, unlabeled data set-- you didn't specify what tasks you want to solve, right? The only thing you see is this gigantic data. And then, the single model can be used to solve multiple tasks, just by interacting with it in different ways. If you want to solve this task, you just write it in human-interpretable language. And if you want to solve another task, you just do something else-- a slightly different phrasing. Then the model can be used to solve multiple tasks. And that's why we call them foundation models-- at least, in a white paper written by Stanford people. So in some sense, they are foundations-- they can be used for a wide range of applications, sometimes without a lot of further changes, right? So the model itself can do a lot of the work for many tasks. So I guess I'm supposed to stop at 4:30. Sorry. OK. So I still have some time. Any questions? Going back to the multiplication problem, I was just curious, if we happen to incorporate the entire internet, doing the math problem [INAUDIBLE] Sorry, can you-- Yeah. Do you know what corpus was used for this? But if it's the internet as a whole, is it the case that, for these simple math problems, they were able to get them right just from that? So if I understand the question correctly, one concern is whether this 95 times 45 already shows up in your corpus, and maybe the model just memorized it. That's one possibility. But I think that's not the case. So, of course, some of the numerical problems, like some of these multiplication problems, show up in the corpus. So you will find one document online about what is 12 times 35. But I don't think you would find documents about all pairs of mathematical equations. So multiplications of pairs of two-digit numbers-- I don't think you will find all of them. So there is some kind of extrapolation. Of course, you have to see something, right, so that you can learn from it. So you probably have seen a lot of numerical operations, all kinds of mathematical formulas, in the training corpus. And then, you extrapolate to other instances, right? So you learn from some basic stuff. Then, you can use the model to output multiplications of, for example, longer numbers. Does that make sense? Does that answer the question? My other question is, sort of, how did they ensure there is no pollution in the corpus? Or is it that, regardless of the pollution, you have enough documents where the math is correct? Right. So how do you make sure that-- by pollution, I guess you mean, how do you make sure that the training corpus doesn't contain all pairs of two-digit multiplications, right? So I think they do run some tests to check that. So of course, you cannot make sure-- is that what you mean by pollution? Or do you mean, by pollution, something wrong about the-- False information. False information. [INAUDIBLE] OK. So that's a great question. So I think, abstractly speaking, the question is about how you make sure your training corpus doesn't have wrong information in it. All right. So I think, definitely, there is wrong information in the corpus. But I think what happens is that there is probably more correct information than wrong information. And the model somehow reconciles them and picks the right thing. So that's, largely, what's going on.
But of course, if you are very specific about it-- so there's an area called data poisoning. So you can actually change your training data in some special way-- actually, you just change a small number of training examples-- so that your model learns something completely wrong. So that's actually possible. But that requires an adversarial change of the training corpus. So on one side, this is a very bad thing, because if someone puts adversarial documents online and you use those kinds of documents to train your model, that's a huge risk. On the other hand, because you have to be adversarial-- at least right now, this kind of adversarial poisoning is not happening very often, just because it's not very easy to achieve. OK. Any other questions? OK. Cool. Yeah, these are great questions. I like to have more questions. That's great. So the last part is about reinforcement learning. This will consist of, probably, two or three lectures at the end of the course. So the main idea of reinforcement learning is that, here, the tasks, roughly speaking, are about learning to make sequential decisions. So I think there are two things. One thing is that you are making decisions. Before, in both supervised learning and unsupervised learning, in some sense, you are making predictions-- at least in supervised learning, it's pretty clear: you are predicting y's. But here, you are talking about decisions. And what's the difference between decisions and predictions? Decisions have long-term consequences. For example, if you play chess, you make some move, and that move will affect the future, right? So you have to think about long-term ramifications. And also, these are sequential decisions. So you're going to take a sequence of steps. When you take the first step, you have to consider how this step will change the game and what happens in the future. So that's why you can see that these kinds of reinforcement learning algorithms are mostly trying to solve questions where you have to make a sequence of decisions. So, for example, when you play Go-- you've probably heard of AlphaGo. And another example is that you want to train a robot. If you want to control a robot, you have to take a sequence of decisions. How do you change the joints? How do you control them? Actually, there are always multiple things you can control on a robot. And how do you control all of them in a sequential way? So here, I'm showing this in a simulation environment. This is a so-called humanoid, which is a robot that imitates a human. You can control a lot of joints in this robot. And your goal is to make this robot walk to the right as fast as possible. And this is what happens with the reinforcement learning algorithm here. So it's trial and error, to some extent. So what you do is you first try some actions. You first try to do something like this. And then, you figure out that this didn't work well-- it falls. And then, you go back and say, I'm going to change my strategy in some way. So I know that some strategy is not going to work; I'm going to try some other strategies. And maybe I know some strategy is partially working, because at least the humanoid is doing something, right? It does walk to the right for one step. It just didn't keep its balance. You know some part is good and some part is bad. And then, you go back to change your strategy. And then, you probably can walk a little further. Something like this.
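Schematically, the trial-and-error loop just described looks like the sketch below. Environment and update_strategy are hypothetical placeholders, not a real library; the point is the interleaving of trying a strategy, collecting feedback, and improving.

```python
class Environment:
    # Hypothetical stand-in for a simulator (e.g., the humanoid).
    def run(self, strategy):
        # A real simulator would roll out the strategy and return the
        # states, actions, and rewards observed along the way.
        return {"reward": sum(strategy)}

def update_strategy(strategy, data):
    # A real algorithm would use the collected feedback; this stub just
    # nudges the parameters so the example stays self-contained.
    return [p + 0.1 for p in strategy]

def rl_loop(strategy, env, num_iterations):
    data = []
    for _ in range(num_iterations):
        data.append(env.run(strategy))              # try it, collect feedback
        strategy = update_strategy(strategy, data)  # improve from feedback
    return strategy

final = rl_loop(strategy=[0.0, 0.0], env=Environment(), num_iterations=10)
```

The data set grows with every pass through the loop, which is the interactive data collection discussed below.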
And then, I guess I'm going to fast forward to iteration 80. I think 80 works. I forgot whether I have-- oh. Actually, 80 still doesn't work perfectly. You can see, it's walking in a weird way. And I think at iteration 210, it can keep walking. But still, it doesn't look very natural. You shouldn't expect the humanoid to walk as naturally as humans, partly because many things are different. So maybe, for the robot, this is the optimal strategy. That's possible. But, of course, I don't think it's the optimal strategy. But it's possible that an optimal strategy for the robot is not the same as the optimal strategy for us. And generally, as I alluded to, the very high-level idea of a reinforcement learning algorithm is that you have this loop between training and data collection. Before, in supervised learning and unsupervised learning, we always have a data set, where someone gives you a data set and that's all you have. You cannot say, OK, give me more examples of house prices. So you have to work with what you are given. But here, in the reinforcement learning formulation, you often can collect data interactively. I see some question. Is that a question? Sorry. So here, you often can collect data interactively. Meaning that-- for example, in the humanoid example, you try some strategy and you see that the humanoid falls. Then that's the data you see additionally, right? So then, you can incorporate that new data back into your training algorithm and then change your strategy, right? So you have this loop where, on one side, you try the strategy and collect feedback. On the other side, you improve your strategy based on the new feedback. So in some sense, you have a data set that is growing over time. The longer you try, the more data points you're going to see. And that will help you to learn better and better. OK, so that's my last slide about reinforcement learning. Any questions? Does the feedback happen after each step of the decision, or at [INAUDIBLE]? Oh. Sorry? Oh. Or after the iteration is complete, then after-- Right. So is the feedback seen after each step of the decision, or is it after something else? So there are many different formulations. This is a great question. So the most typical formulation is that you see the feedback right after the decision you make. But sometimes, that's not realistic. For example-- let me see. What other examples? So I'm blanking on what the best examples to show are. But in some cases, you don't have the feedback right after. And sometimes, even if you have the feedback right after the decision, you cannot change your strategy right after the decision, just because, for example, there is a computational limit, or you have to really do something physical to change your strategy on a humanoid. Or maybe there are some communication constraints. So there are multiple different formulations in reinforcement learning. I think if you have a delayed reward, that's called a delayed reward problem. And sometimes, we also have this so-called deployment constraint, in the sense that-- so this notion of a number of deployments means that you can only update your strategy, for example, five times. So you cannot just constantly change your strategy. And then, you can ask this question-- what's the best way to do this? One other example is that-- suppose you are using reinforcement learning to control a nuclear plant, right?
So you probably don't want to run an algorithm that keeps telling the nuclear plant to change its control strategy every day. That sounds risky and also kind of inefficient. There are many problems with it. So probably, you are going to say, I have to do some experiments for a little bit-- for six months-- and then I figure out one strategy that I can almost guarantee-- I can guarantee that this new strategy is working better than the old one. And then, I deploy it and then collect some new feedback. This is a great question. And another thing I would like to mention is that, in many of these problems, there are multiple criteria. For example, with reinforcement learning, if you want to control the nuclear plant, there is a safety concern. So then, you have to care about whether your strategy is safe or not. But for the humanoid, probably, it's fine for it to fall down, to some extent. But still, you cannot really let it fall down so often, because it will hurt your hardware. And in unsupervised learning, there are other constraints. For example, there are constraints on how long the training time is. That's a typical metric. And there is also a question about how powerful or how multipurpose these models are, right? How well they can solve multiple tasks. So eventually-- especially if you look at the research community, different people care about different metrics, just because all of these metrics have their own applications. So the real scenario is much more complicated than this, in some sense. There are a few other lectures, about other topics in the course, which are actually in between some of these big topics. So one of the topics we are going to spend two lectures on is deep learning basics. Maybe some of you have heard of the word. So deep learning is the technique of using so-called neural networks as your model parameterization. This can be used together with all of these tasks, right? It's a technique that can be used in reinforcement learning, in supervised learning, in unsupervised learning, and in many other situations. And this is something that is very important, because, with deep learning taking off over the last seven years or so, we have seen this tremendous progress in machine learning-- a lot of things are enabled by these deep learning techniques. And we're going to also discuss a little bit about learning theory, just for one or two lectures. So in some sense, we don't really talk that much about the core theory. The goal here is to understand some of the trade-offs of some of the decisions that you make when you train the algorithms. What's the best way to select features? What's the best way to make your test error as small as possible? And also, we're going to have a lecture on how you really use some of these insights to tune an ML model in practice. As the algorithm implementer, what kinds of decisions do you have to pay attention to? So on and so forth. And I guess we're going to have a guest lecture on the broader aspects of machine learning, especially robustness and fairness. Machine learning has a lot of societal impact, especially because machine learning, now, is working. You can really use it in practice, and it will create some societal issues. Actually, a lot of societal issues.
And these are things that we should pay attention to. I'm not an expert in this area. We're going to have a guest lecture by James Zou, who works a lot on this, to talk about fairness and robustness of machine learning models. OK, I guess this is all I want to say for today. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Gaussian_discriminant_analysis_Naive_Bayes_I_2022_I_Lecture_5.txt | From today on, I guess, you're going to see me for at least a few weeks. We're going to cover some supervised learning algorithms, and we're going to talk about deep learning, and then I'll pass things on to Chris to talk about unsupervised learning. So I think I'm going to be in charge for the next three or four weeks. And I'm going to use the board, partly because I think there is a little bit more memory on the board, right? You can review things that I wrote even 10 minutes ago. I don't know whether that's the best for everyone. I think in the past I've surveyed students, and some prefer the board, some prefer the Zoom, the iPad, so I'll give it a try with this. But any comments or suggestions are welcome, and we are open to changing the format as well. But for today, at least, I'm going to use the board. And I think the video is able to capture the board almost the same as the iPad, I hope. OK. So I'm going to talk about the so-called generative learning algorithms. The next two lectures will be about this. I'm going to define what we mean by generative learning algorithms. And there are two types of generative learning algorithms that we are going to cover. One is called Gaussian discriminant analysis-- GDA. I guess these are all new words that I have to define as I'm introducing these things. And the other type of algorithm is called Naive Bayes. OK. So I guess let me get started. So I will start by defining what I mean by generative learning algorithms. To define these terms, I think it is useful to compare with what Chris has introduced in the last two weeks. So in the last two weeks, the type of algorithms Chris introduced-- we call them discriminative learning algorithms. So, discriminative learning algorithms. The reason we call them discriminative learning algorithms is the following. In some sense, the definition is that if you model, or you parameterize, the conditional relationship of y given x, then we call this a discriminative learning algorithm. And I think, if you recall, these are the types of models we considered in the last two weeks. So we model y as a linear function of x. Maybe y is a linear function of x plus Gaussian noise. Or maybe y given x follows an exponential family distribution, or something like that. So, for example, the most general format in the last two weeks can be summarized as: you take p of y given x, parameterized by theta-- this is the distribution of y given x, parameterized by theta-- and you write this as some distribution in the exponential family, with some parameter, some input eta. And this eta is a linear function of x. For example, you can say this is a Gaussian distribution with mean eta. And that's just the standard linear regression, right? So this is why we call them discriminative learning algorithms. And today, we're going to talk about a so-called generative version-- generative learning algorithms. The basic idea is that you are going to model or parameterize-- model here is a word that basically means you parameterize, or you have a mathematical model for-- the joint distribution, p of x comma y. The joint distribution of x and y, using the simple chain rule, you can write as p of x given y times p of y. So you model both of these two quantities. Is this the-- there's some light flashing.
Some-- should I-- are you bothered by it or not? I'm fine with it. Just-- OK. No worries. OK. I think it flashes every minute or something. Anyway. So you model the joint distribution by modeling each of these two parts. Particularly, you model the joint distribution by modeling the distribution of y and the distribution of x conditioned on y. Here, x and y are not symmetric. y is the label; x is the input. And typically, they have very different meanings. So y could be something like the price of the house, and x is the features-- what you know about the house, like the square feet, the lot size, so on and so forth. And recall that x is the input, and this is maybe something like a label or some kind of class. If you have classification, this is about the class-- like maybe positive sentiment, negative sentiment. So basically, this is the distribution of the input given the label, and this is the distribution of the label itself. And sometimes we also call this a prior for the label-- a prior for the class or the label-- because this is what you believe. For example, suppose y equals 1 means positive sentiment-- suppose you are classifying text. Then this is a distribution over two labels, positive and negative. And this is the prior that you have for how many positive or negative examples are in your data set. So after you model and parameterize this, you can learn these two distributions. You can learn p of x given y and p of y-- we're going to see how we learn them. And after you learn both of these, you are still going to solve the classification problem. Your goal is still the same. You are trying to classify. So at test time, you are still trying to compute, for example, p of y given x. You're still trying to compute the chance of each label, or you probably want to get the argmax over y of p of y given x. We are going to talk about exactly what you care about. But essentially, you still care about the relationship of y conditioned on x. And how do you get this? You get this by Bayes' rule. Meaning, recall that p of y given x is equal to p of x given y times p of y over p of x. So you know this quantity, and you know this quantity-- I'm assuming you already learned these two, right? And how do you know the denominator? You can just write the denominator as the sum over y prime of p of x given y prime times p of y prime-- maybe, just for the sake of notation, let's call the summation variable y prime, right? So this is the standard law of total probability, where you compute the marginal probability p of x. You use that as the denominator, and then you also use these two quantities on the top. Actually, in many cases, you don't necessarily have to compute p of x. We'll see exactly how it works. But roughly speaking, after you know these two things-- after you know x given y and y-- you can know y given x by doing some Bayes' rule. By the way, feel free to stop me at any point. Just raise a hand or just speak. Maybe a few first. [INAUDIBLE] So let me repeat the question. Is it true that discriminative learning algorithms cannot work with distributions that are not in the exponential family? No. I think discriminative learning algorithms can also work with other possible distributions here. So as long as you specify y given x and you parameterize that by some parameter theta, you can, in theory, still learn them using a similar type of methodology. I'm going to discuss the methodology as well.
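Here is a tiny numeric sketch of that Bayes-rule step, with made-up numbers: two labels, a prior p of y, and p of x given y evaluated at one observed x.

```python
# Hypothetical learned quantities, evaluated at one particular x.
p_y = {0: 0.7, 1: 0.3}            # the prior p(y)
p_x_given_y = {0: 0.05, 1: 0.20}  # p(x | y) at the observed x

# Denominator: p(x) = sum over y' of p(x | y') p(y').
p_x = sum(p_x_given_y[yp] * p_y[yp] for yp in p_y)

# Posterior p(y | x) for each label, by Bayes' rule.
posterior = {y: p_x_given_y[y] * p_y[y] / p_x for y in p_y}
print(posterior)  # {0: ~0.368, 1: ~0.632}
```

Note that if you only want the argmax over y, the shared denominator p of x never changes the answer, which is why, as said above, you often don't have to compute it.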
Coming back to the exponential-family point: if you have an exponential family, then there are several benefits. For example, we discussed many nice properties of the exponential family. If you don't have them, then you cannot use those properties. You have to use something else, or you have to rely on optimization. Sometimes it's challenging, depending on the case. But in principle, you can have other distributions here. Maybe let's take this other question. Yeah. [INAUDIBLE] Yeah. Yeah. I'm going to talk about that. This is just the general framework. So the difference between the two that I can guess is that in a discriminative learning algorithm, [INAUDIBLE] trying to [INAUDIBLE]. I'm sure you won't have any parameters [INAUDIBLE]. Actually, you will have parameters, because I'm going to parameterize these two distributions and learn them. So, yeah. I'm going to talk about that as well. [INAUDIBLE] the same kind of p of y given x. So this is a-- yes. So I'm also going to discuss the differences-- what the high-level differences are, and why we do this. But I think it's easier to discuss those once I tell you a little more concretely how this works. But so far, you are right. Basically, this is a somewhat seemingly circuitous way to get p of y given x. It's not direct, right? Yeah. I think I'm going to discuss the differences probably later in the lecture, just because it's easier when I have some examples. Thank you. OK. Any other questions? Would it be possible to write a little bigger, please? Sure. Yeah. That's a great suggestion. I think that's probably also useful for the recording as well. And also, feel free to remind me again, because this has happened in the past as well-- every time, after a few lectures, I stop writing big. Even after a few minutes, sometimes. OK. So this was just a very high-level introduction. We're going to talk about two instantiations of this general idea. The difference is really just that one case is a continuous x and the other case is a discrete x. The continuous one is called Gaussian discriminant analysis. And for the discrete x, we are going to focus on an application, which is spam filtering. And today, I think we're going to mostly talk about the continuous case. And next lecture, I'm going to talk about the spam filter. All right, so. So now, one example-- how do we instantiate this plan? So for GDA, what you do is you say, I'm going to suppose x is in R d. I'm going to change the convention slightly, just because here I'm not going to use the bias. At least in the modeling part, I'm not going to use the bias. Don't worry too much about it. It's just that we don't have the x 0 here as well. It doesn't really matter that much. So the main thing is, I'm going to assume-- you can say this is an assumption, a modeling assumption-- I'm going to assume p of x given y is a Gaussian distribution. So what does that really mean? That really means that you write this: you say x given y follows some Gaussian distribution with some mean and covariance. And here, note that x is a high-dimensional vector, so I'm going to have a high-dimensional multivariate Gaussian distribution, with some mean mu and some covariance sigma. So it's probably useful for me to digress a little bit to briefly talk about some basics of the multivariate Gaussian distribution. This is just a very quick review, in case you haven't seen it. I'm assuming that you know something about the one-dimensional Gaussian distribution. So, just a very quick digression.
So suppose you have a multi-dimensional Gaussian random variable-- suppose you have some random variable z sampled from a Gaussian distribution with mean mu and covariance sigma. So here, mu is a d-dimensional vector, and sigma is a matrix-- the so-called covariance matrix. So the properties you need to know are that, as the name suggests, the expectation of z is the mean parameter mu, and the covariance of the random variable z-- which is defined to be the expectation of z minus its expectation, times z minus its expectation, transpose-- is the covariance matrix sigma. So this is how you generate z from this Gaussian distribution parameterized by these two parameters, mu and sigma. And the resulting random variable z has these two properties: the mean is mu and the covariance is sigma. And you only need these two sets of parameters to uniquely describe a Gaussian distribution. You also know the density of this Gaussian distribution. I don't expect you to remember the formula-- I know I remember it after teaching it so many times, but before I taught it, I don't think I remembered it in graduate school. But the formula is-- not sure whether you can see this-- something like this: p of z equals 1 over (2 pi) to the power of d over 2, times the determinant of sigma to the power of 1/2, times the exponential of minus 1/2 times (z minus mu) transpose, sigma inverse, (z minus mu). And here, this is the determinant. OK. Great. I see some questions. Sure. Maybe I'll start with this one. What does that denominator say? What is the deno-- The denominator. Yeah. There. So this is 2 pi to the power of d over 2, and the determinant of sigma to the power of 1/2. I'll write even bigger. All right. Thank you. And then, what does the little triangle over the equal sign mean? Oh. This is just the-- oh, why am I doing this? Oh, this is just the definition of the covariance, in case you don't know the definition. So. OK. Thank you. Yeah. Yeah. I'm just using that to indicate a definition. And by the way, this formula-- I don't think we really have to remember it. The most important thing is you have a constant times the exponential of some quadratic form of z. So is z [INAUDIBLE]? z is a vector. Oh. Yeah. That's a good question. So this is why this is a little more complex than the one-dimensional case. If you are familiar with the one-dimensional case, then this will be [INAUDIBLE] of mu, and this will be a [INAUDIBLE]. Sometimes people write it sigma squared. And the sigma will be just the-- so for the one-dimensional case, this is just the variance. And now it's the so-called covariance. So in some sense, if you look at sigma i j, this is really just the correlation between two coordinates-- it will be the expectation of (z i minus mu i) times (z j minus mu j). In some sense, the entries of the covariance matrix are capturing the correlations between pairs of coordinates. Of course, you have to remove the mean to measure the correlation in the right way. But you measure the correlation of the two coordinates of this random variable. I saw some other questions. [INAUDIBLE] Yes. So x and z are the same thing here. I use z because I want to be abstract. So later, I'm going to have to change this a little bit too. But this is just for abstraction-- I use a different variable. [INAUDIBLE] The second term in sigma. On the right side. Here. So here, because they are scalars, that's why I didn't have to transpose. This is a scalar. This is a scalar. Right? And here, the reason why I have the transpose is because that makes it a matrix. So, I don't know.
I think some of you are probably familiar with this, so I don't want to spend a lot of time on it. But some of you are probably not very familiar. So let me show you some other pictures to get a little more sense of the covariance. Let me see. How do I connect to this? Do they know this? How do I signal anything to them? I don't see that they are capturing the video. The screen. Anyway. Anyway, these are the slides. So it should be-- oh. Yeah. OK. Great. So I used to make slides for this part of the talk-- I just made three slides. But then I realized that maybe it's just easier for me to show you the lecture notes, because then you know where to find them again. So I'm not being lazy here. It's just-- it's also easier for me, of course. OK. So these are some visualizations of the density function. Just look at the figures. This is a two-dimensional case, right, for these figures. And you visualize the density of the Gaussian distribution. So the density always looks something like this. And these are the cases where the covariance is the identity. When the covariance is the identity matrix, it means that there is no correlation between any pair of coordinates-- like i and j, when i and j are not the same, they have no correlation. And as a result, the shape of this Gaussian is always spherical. So basically, when it's the identity, you have the same strength in all the directions, because every dimension looks the same, in some sense. That's when you see this very spherical shape of the density function. One thing is that the size of this density function depends on the scalar in front of the identity, right? So if you have the identity, then I think that's the leftmost one. But if you make the covariance two times bigger, then your density function will be, in some sense, supported on a larger region. I think that's the last one. And then, in the middle one, you have a smaller covariance. So in some sense, the covariance is describing how large the shape of this density function is. And it's defining two things: one is the size, and the other is the correlation-- the orientation of the shape. So maybe one way to think about this is that if you look at the second row of the figures, these are cases where the covariance matrices are no longer the identity. For example, in the third figure here, you have some correlation between two directions, and then you see this ellipsoid kind of shape rotated in that direction, just because in that direction, the two coordinates are more correlated. So it's more likely that these two coordinates are simultaneously bigger or smaller. That's why in that direction you have more mass. And you can see that the difference between these three figures is that the correlation along that direction is bigger and bigger. In the first one, there's no correlation in that spatial direction-- the x1 equals x2 direction. In the second one, you have a little correlation, so that's why it leans towards that direction. And then, in the third one, you have more correlation, so it's even more skewed, in some sense. Any questions?
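As a quick numerical check of the facts above, here is a sketch that samples from a multivariate Gaussian and verifies the empirical mean and covariance. The particular mu and sigma are arbitrary choices; the positive off-diagonal entry gives exactly the tilted, correlated shape from the second row of figures.

```python
import numpy as np

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.8],    # positive off-diagonal entry: the two
                  [0.8, 2.0]])   # coordinates are positively correlated

rng = np.random.default_rng(0)
z = rng.multivariate_normal(mu, Sigma, size=100_000)

print(z.mean(axis=0))           # close to mu
print(np.cov(z, rowvar=False))  # close to Sigma
```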
So I guess maybe another way to look at this is to look at the contours of this thing. Basically, you look at the level sets of the density function; a level set means the set of points with equal density. And you can see that you get this kind of ellipsoid. And it's the same thing: if you have more correlation, then the ellipsoid becomes more tilted toward one spatial direction. So for example here, I guess you can see my pointer: if this is x1 and this is x2, then if you see this kind of contour, it means x1 and x2 are likely to be simultaneously bigger or smaller. So that's why they have correlation. And if you see this one, then you have the reverse type of thing: if x1 is bigger, then x2 is likely to be smaller, and that's when you see this kind of shape. There's no need to understand this exactly; it's just some rough intuition. In reality, you don't necessarily have to visualize all of this exactly. But these are-- any questions? I know sometimes this can be confusing, and sometimes it can be very enlightening, depending. Feel free to ask any questions. OK. So I guess we'll move back to the messier stuff. OK, cool. So I have introduced the multivariate Gaussian distribution, and now I'm going to go back to Gaussian discriminant analysis. So we are going to parameterize x given y as a Gaussian distribution. By the way, I think I forgot to mention it here: in both of these two cases, the y is always discrete. You can make y non-discrete as well, but here we're only looking at the case where y, the label, is always discrete. So now, let me continue with the GDA. As I said, y is discrete, and we are only going to assume that there are two labels, say 0 and 1. I think I missed one small thing here, but let me just-- one of the running examples we used to have for this GDA application is cancer classification, benign versus malignant classification. So you have some maybe x1, x2, a two-dimensional input, and you see a bunch of data like this. These are cases where you have benign cancer. Think of x1 and x2 as measurements of the patients, maybe blood pressure or the size of a certain kind of tumor. And for every patient, or every case, you know whether this is a benign cancer or not. These are the bad cases, the malignant cancers. And the question is that you want to classify these examples into two classes, so that in the future, if you see one more example, maybe one more example here, you want to know whether it is a benign one or not. And the label is basically here; let's call this the label y. So that's the kind of target application we are thinking about. So now I'm going to parameterize what x given y is. And I need to specify two cases: one case is the distribution of x given y equals 0, and the other case is the distribution of x given y equals 1. I'm going to make both of these Gaussian distributions. So I'm going to assume that x given y equals 0 is a Gaussian distribution with mean mu 0 and covariance sigma. So here, I'm in the high-dimensional case: mu 0 is in R^d, and in this case, d is 2, and sigma is in R^(d by d). And for the other one, my model assumption is the same: I'm going to assume it has mean mu 1 and the same sigma.
So the same covariance: that's just for convenience; you could use different covariance matrices. But here, I have to use a different mean, because clearly, if you fit a Gaussian distribution to this bunch of points and another Gaussian distribution to this kind of points, you're going to have different means. So that's why I'm going to have mu 1 here and mu 0 here. So mu 1, mu 0, sigma: these are the parameters. Oh, yeah. Go ahead. [INAUDIBLE] is that the same covariance matrix? Yeah. So this is mostly just for convenience. You can make them not the same covariance matrix. In terms of the optimization, in terms of learning these things, it's going to be more complicated. It's still learnable, at least with some more advanced techniques, but it's going to be more complicated. So here, it's really, in some sense, a simplification, a simplifying assumption. [INAUDIBLE] given [INAUDIBLE] x given y equal to-- Oh, yeah. So that's a good question. So this is like this: you give me this event, y equals 0; what's the distribution of x? I'll assume they're providing separate [INAUDIBLE] distribution. Is it because they are different in the [INAUDIBLE]? That's a good question: how do you model the x and y? In many cases, you have many, many different choices. You can have different covariances, you can have different means, or you can even model them in entirely different ways. But here, if you look at the data, you see that the distribution of x given y equals 1 and the distribution of x given y equals 0 look different, so it probably makes sense to model them separately, right? If you model the whole thing as one joint Gaussian, I don't think it looks like a Gaussian. That's pretty much the reason. OK. Cool. So are we done? We're not done with the model yet, because we've only modeled x given y, right? Remember that we also have to model p of y. You need p of y and p of x given y to know the joint probability distribution, and then you can use the Bayes rule. So how do we model p of y? This is relatively easier, because p of y is only a distribution over two possible choices; y only has two choices, 0 and 1. So basically, you just have to have one parameter. One parameter is supposed to model p of y equals 1; let's call this phi. And then you have p of y equals 0; let's call this 1 minus phi, because the sum of these two has to be 1. So in some sense, you say y is drawn from a Bernoulli distribution with parameter phi. This is just another way to say it. OK? So basically, to summarize, what are the parameters? I guess this goes back to the question someone asked: we do have to have parameters even for a generative learning algorithm. And the parameters are phi, mu 0, mu 1, and sigma. And our next step is that we want to learn these parameters, so that we know p of x given y and p of y, so that we can compute p of y given x next. So the next part is about fitting parameters. OK? So how do we learn the parameters from data? The general principle is maximum likelihood. I think Chris has probably talked about the phrase maximum likelihood once or twice in the previous lectures, but here the maximum likelihood is a little bit different. I'm going to compare the difference between this likelihood and the one we discussed before. So first, let me define what maximum likelihood means in this setting. By maximum likelihood: I first have to define the likelihood.
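One aside before the likelihood: a minimal sketch (my own, with made-up numbers, under the assumptions just stated) of the full generative story, y from Bernoulli(phi) and then x given y from N(mu_y, Sigma):

    import numpy as np

    rng = np.random.default_rng(0)
    phi = 0.4                                    # p(y = 1); made-up value
    mu0 = np.array([1.0, 1.0])                   # mean of x given y = 0
    mu1 = np.array([3.0, 3.0])                   # mean of x given y = 1
    Sigma = np.array([[1.0, 0.3],
                      [0.3, 1.0]])               # shared covariance

    n = 500
    y = (rng.random(n) < phi).astype(int)            # labels from Bernoulli(phi)
    means = np.where(y[:, None] == 1, mu1, mu0)      # pick mu_y for each example
    x = np.array([rng.multivariate_normal(m, Sigma) for m in means])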
So the likelihood is basically the chance of seeing the data given the parameters. It's a function of the parameters. So if you have these parameters, phi, mu 0, mu 1, and sigma, you can define the likelihood of these parameters: the chance of seeing all your data given the parameters. You hypothetically think that all the data were generated from this distribution, and you look at the density of your data under these parameters. Sorry. [INAUDIBLE] Oh, even bigger? OK. Cool. Yeah, sure. And let me also clarify the notation here. So x superscript i: I think Chris defined this, and we're going to use it throughout the lectures. So xi, yi, this is the i-th example. So we have this data set with n examples, and I'm looking at the likelihood of all of these examples under the parameters phi, mu 0, mu 1, and sigma. So for every setting of the parameters, you have a likelihood. This sounds kind of complicated, but actually you can simplify it a little bit, because these examples are independent. You assume that each example is drawn independently, so you can factorize this: the likelihood is the product over examples of p of xi, yi given the parameters, because you used the independence. And you can factorize it a little more: by the definition of conditional probability, p of xi, yi given the parameters is p of xi given yi and the parameters, times p of yi given the parameters. I don't think I have a different color. So here, you can have some simplification, because yi, the distribution of y, only depends on phi. The mu 0, mu 1, and sigma describe the conditional probability. So y only depends on phi, and you don't have to write the other parameters here, because there are no such dependencies. And the same thing: xi conditioned on yi only depends on mu 0, mu 1, sigma. It doesn't depend on phi, so you don't necessarily have to write phi here; even if you write it, it's the same. OK. So this is the so-called likelihood, and what you do is you maximize this L of phi, mu 0, mu 1, and sigma. So basically, you say: the learned parameters will be the maximizer of this likelihood function. You want to find the parameters such that the likelihood is the largest. So this is the so-called maximum likelihood in our context. Why are you making these independence assumptions? Is it possible that there is some kind of dependence between the data, and in that case what do we do? All right. So the question is, why are we making these independence assumptions? In short, if I have a very short answer: this is almost always assumed, even in the most advanced settings. And there are multiple reasons. One thing you can imagine is that you collect data somewhat independently from a very large pool; that's probably the simplest way to say it. Of course, there are cases where this independence is not true, for example if you have interactions. For example, suppose I first get some data from you, then I do something, and then I get some other data from you; maybe these data are no longer independent. Or maybe the second time you provide me data, you also look at the first time. Sometimes there are such dependencies.
Especially in reinforcement learning, where you have interactions. So in those cases, we will drop this kind of independence assumption. But in most cases, we do assume the data are sampled from a large pool independently. [INAUDIBLE] Phi is a scalar? Yes, you are right: phi is a scalar, and it's a scalar in [0, 1]. That's a good question. [INAUDIBLE] Do you have the freedom to change phi? Yeah. Phi is also a parameter, right? So you are going to learn phi; you're going to learn what the right phi is from data. So how do you learn phi? You are going to find the maximizer of this. And the maximizer will be like-- When we talk, for example, about some training data, could you just set phi equal to the fraction of cases where y is equal to 1? But both [INAUDIBLE]. You are exactly right. You are ahead of me. But we are going to prove that; we are going to show that that's actually the solution. So there's a reason for that. You have very good intuition: phi is pretty much the proportion of positive examples. But we are going to actually show that's the case if you use this methodology. Yeah. Can you just repeat: in maximum likelihood, are we maximizing the chance of the data being generated? So in maximum likelihood, you are maximizing the chance of the data given the parameters. You look at all possible parameters and see which parameter gives the highest likelihood. This is the maximum likelihood. This is the methodology we're going to use for generative learning algorithms, not only today's algorithm, but also next lecture, where we have other settings; we still maximize the likelihood. And just to compare this with discriminative learning algorithms: there, we also use maximum likelihood, but the meaning of the phrase maximum likelihood is a little bit different. Here, you are maximizing the joint density of both x and y; you are maximizing the probability of seeing the pairs of x and y. But for discriminative algorithms, what you do is maximize the so-called conditional likelihood. In many cases, people just drop the word conditional when it's clear from context. The conditional likelihood is the probability of seeing the labels conditioned on the inputs and the parameter theta. And you can also factorize this. Here, I'm using theta as a generic parameter, just to be abstract, because I'm talking about the abstract setting; you can think of this theta as, say, the parameters of the linear model family. And you can still factorize this using independence. But whatever you do, you always condition on x. So x is considered to be, in some sense, a deterministic quantity you observe. You don't know how x is generated; we don't care how x is generated. You just care about how y is generated conditioned on seeing x. And part of the reason you do this is that you only model y given x; you didn't model the distribution of x. So there's actually no way you can do the joint maximum likelihood above in the discriminative setting, because the only thing you model is y given x. That's the only quantity you have a parameterized form for, so you just go with whatever you have. And it turns out that these two are indeed different.
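A hedged sketch contrasting the two objectives on the same data (my illustration; the helper p_y1_given_x stands for whatever conditional model a discriminative method uses, such as logistic regression):

    import numpy as np
    from scipy.stats import multivariate_normal

    def joint_log_likelihood(x, y, phi, mu0, mu1, Sigma):
        # generative objective: sum_i [ log p(x_i | y_i) + log p(y_i) ]
        mus = np.where(y[:, None] == 1, mu1, mu0)
        log_px = np.array([multivariate_normal(m, Sigma).logpdf(xi)
                           for m, xi in zip(mus, x)])
        log_py = np.where(y == 1, np.log(phi), np.log(1 - phi))
        return np.sum(log_px + log_py)

    def conditional_log_likelihood(x, y, p_y1_given_x):
        # discriminative objective: sum_i log p(y_i | x_i); x is only conditioned on
        p = p_y1_given_x(x)
        return np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))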
There are some relationships and there are differences, which I'll discuss after I introduce some more concrete examples, I think. Any questions so far? So this is what we've been doing so far. This is what we-- yes, in the last weeks. Exactly. I think this is probably best discussed after I give some concrete examples, but in some sense, you can already see the difference between these two kinds of algorithms. And the difference between these two types of assumptions is that here, you have more assumptions on all the data: you are making assumptions on both x and y. And here, you're only making assumptions on y given x. So the difference is really about how many assumptions you impose on the structure of your data. In some sense, there is a whole universe; you cannot model everything. So you choose some part of the quantities to model. The decision here is you model both x and y, and the decision there is you only model part of it. And that will cause some differences in certain cases. So can we learn [INAUDIBLE] assumptions about distributions of our features? So with that [INAUDIBLE]? That's a great question. The question was: here, for the generative learning algorithms, we have assumptions on the features, right? Would that be a problem when you generalize to other examples? So it depends. It depends on whether your assumptions are correct, in some sense. If your assumption that the features are roughly Gaussian is correct, then it actually gives you better generalization, because your assumptions are correct and they impose structure. And when your assumption is wrong, then it causes problems. So actually, this is basically the main difference. For different algorithms: as I said, you have a lot of variables in this world, and you can try to model a lot of mechanisms, or you can try to model only part of them. And the decision is often a tradeoff. If you model too much, then you risk modeling things incorrectly. And if you model too little, then you don't leverage enough prior knowledge. For example, if you really know this is a Gaussian, you probably should just leverage that prior knowledge. But you may be wrong, so it depends on the case. Yeah, this is a great question, and I'll discuss this a little more at a mathematical level. Why do we use the Gaussian? Given the mean and variance, if you don't know anything else, is this the best way to model something in real life? Yeah. I hear the question is: given the mean and covariance, if you don't know anything else, is the Gaussian the best? Is that the question? Yes. I think that's a great question. So typically, you are basically right: if you don't know anything else, then you probably should just model them as Gaussian if you only know the mean and covariance. But on the other hand, to be fair, you know more than the mean and covariance. For example, if you really want, you can compute the third-order correlations between these coordinates. So in theory, because you have so many data points, if you have a lot of data, you can also model other, higher moments of the data. I'm not sure of the definition of moments.
You look at the higher-order correlations between the coordinates of the data, if you have a lot of data. But Gaussian is a pretty reasonable assumption and it's still used very often. Sometimes people use transformations of Gaussians: here we assume they are Gaussian, but you can transform the Gaussian in certain ways. Still, Gaussian is a pretty reasonable assumption. [INAUDIBLE] the y is a continuous value then [INAUDIBLE]? So, how do I handle the case when y is continuous? Is that the question? Yes. So we don't cover that, but pretty much, you follow the same methodology. You're going to have a different prior, a different distribution for y, p of y. So maybe you can model p of y by, say, a Gaussian again, if you want, or-- [INAUDIBLE] You're going to have more variables to describe the distribution. The accuracy will be [INAUDIBLE]? The accuracy? Yes, the accuracy [INAUDIBLE]? Oh. So I guess the question-- In short, whether y is discrete or continuous is mostly decided by your data set, right? So I think typically, if your data set is really discrete, if you really just have benign cancer or not, you probably don't want to make it continuous, for the same reason you said. Why would you want to make it more complicated? You'd have more parameters to learn, right? But sometimes your y is really continuous; there's no way you can discretize it in a meaningful way. And also, to be fair, the number of parameters to model y is often much smaller; you have a much smaller number of parameters to model y than to model x. For example, here, if you count, you only need one parameter to model y. Even if y were continuous, for example if y were a Gaussian distribution but one-dimensional, you'd only need a couple of parameters. But x is a high-dimensional thing, so you always have to use more parameters there. So typically, it's not a big issue. [INAUDIBLE] you began [INAUDIBLE] two distributions, one for when y equals 0, another one for [INAUDIBLE]. But let's say you have more features; then it's kind of harder to visualize the data. How do you make this judgment of how many distributions to have in your parameterization? Yeah, I think that's a great question: how do you make this judgment, how many kinds of distributions? The easiest answer is that you always use two as long as you have two labels. If you just have two types of things you want to classify, you always have two different distributions, two Gaussians. Of course, you may want to go more advanced. Even for the benign cancers, it's not like all the benign cancers are the same, right? Maybe there are two sub-populations. So it probably requires a little bit of domain knowledge, or maybe some trial and error. You could try to generalize this. OK. So let me proceed. So I've discussed a lot of methodology so far. Now my next goal is: how do I maximize this? How do I maximize the likelihood? You need to be able to do this empirically to get the parameters. So this is partly about computation. One choice is that you just write down this function in your computer and run some optimization algorithm. But we'll do a little more than that in math, because that will simplify your implementation, right?
So we're going to simplify this formula, and actually we're going to do enough math that you don't really need your laptop or computer to run an optimization algorithm for this, right? So the first thing is that we care about the argmax, the maximizer of this thing. And the maximizer is the same if you transform your objective with any monotonically increasing function. So even if you take a log, the maximizer will be the same. And for the purpose of this course, we define this with a little l: this is the log likelihood. OK. Why are we doing this? It sounds like we just introduced even more symbols. The reason is that the log turns the product into a sum. The log of a product of a bunch of terms is equal to the sum of the logs of each of the terms. So this will be the sum of the logs of these two terms, and the log of the product of these two terms is the sum of the logs of each of them: the log of p of xi given yi, plus the log of p of yi given phi. So everything becomes a sum, and that's actually very important. Because even if you do this numerically, it's very important to take the log, because all of these numbers, if you compute them empirically, you will see they are either very small or very big. Just recall that we have a Gaussian distribution; there's an exponential here, so it's pretty easy for the density to be either very big or very small. The values are not on a nice scale, as you can imagine. But if you take the log, the scale is much nicer: the log of the density is on a reasonable scale that you can actually use numerically. And it also becomes a sum, so you don't have to compute the product. And then, how do we proceed? One option is that, again, you can still do the numerical thing: run an optimization algorithm to get the maximizer. But here, we can actually compute the maximizer analytically. So how do you compute it? There's a small fact, which you probably learned in a calculus class: if theta is a maximizer of some function f (I'm being abstract here), then the gradient of f evaluated at theta is equal to 0. So at a maximizer, the gradient has to be 0. And actually, in many cases, if this f is concave, then this is if and only if. If f is not concave, this is still a necessary condition. For us, it's actually concave, so it's a necessary and sufficient condition, but in other cases, this might only be a necessary condition. So because you have this, you can try to solve the equation. Going back from this small abstract fact to our case: basically you say, the gradient of the log likelihood with respect to all the parameters should be 0. With respect to phi, mu 0, mu 1, and sigma, they should all be 0. And this really just means that the partial derivative with respect to each of the parameters is 0. So now you have four equations, and you can try to find the solutions of these equations. And this is, I think, homework 1, q1d.
So in that homework, we're asking you to, first of all, compute what each of these derivatives is. You have to get an analytical formula for each of them: what is the derivative of l with respect to phi? You do some calculation to see what the derivative is. You plug in the definitions of these two p's, you get the objective as a function of phi, then you derive the derivative with respect to phi, set it to 0, and solve the equations. So that's homework q1d. And this is complicated to some extent, but still manageable; there are even more complicated things than this in machine learning. But the first time, it will be a little bit complicated, because all of these have somewhat complex formulas. And what I'm going to do next is tell you the solution directly, so you know the answer to the homework question, and I'm going to interpret why the solution makes sense. You will see the solutions actually make a lot of sense intuitively. And I'm going to proceed with that. [INAUDIBLE] is it, like, wouldn't mu 0 be-- [INAUDIBLE] switch to mu 1, mu 2? Oh, my bad. Did I switch them? My bad: this is mu 0, this is mu 1. Thank you. And only the first one is a scalar, right? The other ones are vectors [INAUDIBLE]? So what are the dimensions of these? Yeah, that's a wonderful question. I think this is a pretty common confusion, just because in math different conventions exist. In this class, the derivative with respect to any parameter has the same shape as the parameter. Phi is a one-dimensional parameter, so this derivative is a scalar. And mu 0 is a d-dimensional vector, so that derivative is in R^d. And this one is in R^(d by d). [INAUDIBLE] scalar. The other 0's are like [INAUDIBLE] vector are the ones [INAUDIBLE]. Right. So this is a 0 as a vector. Thank you. OK. Cool. So I need to erase something, I guess. So what are the solutions? OK. I'm going to first define some notation. Let U1 be the set of indices of the positive examples -- wait, my bad: this is U1; U0 is the set of indices of the negative examples. So under maximum likelihood, the solution will be the following. Phi is equal to |U1| over n, where n is the total number of examples, which is |U0| plus |U1|. So, what is this? This is really just the fraction of positive examples. So this is the phi you learn. Phi is supposed to be the probability that y is 1, right? That's your modeling choice. And it turns out that if you learn it from data, it will be exactly the fraction of positive examples in the data. This is the phi that makes your data most likely, which is exactly the same fraction as in the empirical data. And of course, 1 minus phi will be the fraction of negative examples in the data. Question. A little bit bigger. OK. Thanks. How do I remember this? I have to burn this into my head. So 1 minus phi is equal to the fraction of negative examples. [INAUDIBLE], it's just, I can't see it from here. You said U1 is equal to what? U sub 1. U sub 1 is a set. The set contains all the indices i such that yi is equal to 1. So this is the set of indices of positive examples. So for example, suppose you have these points here; these will be the set U1.
And these will be the set U0. And how do you decide what phi is? The maximum likelihood phi is just: you count how many examples there are in total, maybe 10 examples, and you say four of them are positive. So in this case, phi will be 4 over the total number of examples, which is 10, so phi is 0.4. Now, I'm going to rewrite this phi as follows. This is mostly just cosmetic; you want to make it look a little nicer, or more consistent with the other equations I'm going to write next. So you can also write it in the following way; let me expand the notation. This is the so-called indicator function. I'm going to write it as 1 of E. I think in the homework, we write it like this; different people use different types of brackets, but the definition is the same. So 1 of E, the so-called indicator function of the event E, is equal to 1 if E happens, and equal to 0 otherwise. So in this case, the indicator of y equals 1 just means: if y is equal to 1, the indicator is equal to 1; otherwise, it's equal to 0. So basically, this indicator is only 1 when the label is positive. And I'm taking the sum over all examples; that's why I'm basically counting how many examples satisfy yi equals 1. So that's why this whole thing is just equal to |U1|. It's probably useful to understand this notation, because I'm going to have somewhat more complicated formulas than this. This is a warmup, in some sense. Go. [INAUDIBLE] This is just-- I guess, in my mind, this is a capital U, which is not the Greek letter mu. The handwriting is not that obvious, yeah. But they are completely different: this is a set, the other is a parameter. No relationship at all. [INAUDIBLE] Why do I-- sorry, I didn't hear. [INAUDIBLE] Oh. Let's-- oh, I see. I see. So that's a good question. Yeah, thanks for that. So the absolute value: maybe I should define this. This is the size. When you have a set, I'm using this as the size, or the cardinality, of the set: how many items are in the set. That's just notation. Yeah, that's a good catch. Maybe I should make a note of this; I was asked about this last time as well. OK. Cool. So I'm going to continue telling you the solution of this MLE. Mu 0 (this is mu, not U): this is the MLE for the parameter mu 0. It is equal to 1 over |U0|, the number of negative examples, times the sum of all the xi's with i in the set U0. So this is the sum over all the negative examples, and I'm summing the input vectors, the feature vectors xi. So basically, this is just the average, the empirical average, of the xi's of the negative examples. I'm looking at all the negative examples and taking the empirical average of the xi's. And that turns out to be the best estimate for the mean of that class. Empirical average? Does the word empirical mean a different type of average? Oh, I see. Yeah, good question. It doesn't mean anything different; the empirical average is the same as the average. Just think of it as the average. There is some reason why I use that word, because of some other cases; don't worry about it. Sorry. OK. So I guess these are the results you would get from solving [INAUDIBLE]. Right. Exactly.
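As a tiny aside (my sketch, assuming x is an n-by-d NumPy array and y a length-n array of 0/1 labels, as in the earlier snippet): the indicator sums map directly onto boolean masks.

    import numpy as np

    phi_hat = np.mean(y == 1)            # (1/n) * sum_i 1{y_i = 1} = |U1| / n
    mu0_hat = x[y == 0].mean(axis=0)     # average of the negative examples' features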
And now, I'm just telling you the answer. But this sounds very intuitive, right? What would you guess is the best mean for this class? Probably you should just use the average of all the examples in it. At least when you see it, it sounds somewhat reasonable. And you can guess, just because it's symmetric, that mu 1 is the same thing: 1 over the size of the set of positive examples, times the sum of the xi's over the positive examples. So this is the average of the positive xi's. Now I'm going to use this indicator function to write these in a slightly different way. So, continuing here: if you look at this formula, you can write |U0| as the sum over i of the indicator of yi equals 0. As we argued, this is the number of examples where yi is equal to 0, because yi equals 0 means the indicator is equal to 1. That's what the indicator is saying: the indicator is 1 exactly when the event happens. So that's why this denominator is the same as the size of the set of negative examples. And then, the numerator can be written as the sum over i of the indicator of yi equals 0, times xi. So you first have the indicator selecting only those examples that are negative, and then you multiply by xi. And for the second part, for mu 1, it's the same; you just replace 0 by 1: you select all the positive examples and you multiply by xi. So why am I writing it like this? One reason is that it looks a little more systematic; I'm not sure whether you agree with that, and maybe you don't. Another is that I think you see this kind of formula in print often, in other cases as well, so it's probably good to unify them in some way. But you don't have to remember any of this. I think the earlier way is the best way to remember them and interpret them. So just treat these as cosmetic changes to the formula. Next, I'm going to give you sigma. The MLE solution for sigma is this: sigma equals 1 over n, times the sum over i of (xi minus mu yi) times (xi minus mu yi) transpose. It may look a little complicated. OK, so let me try to-- What is this mu? This mu is the mu we computed above. So you have to use the mu you computed above to compute sigma. A mu yi could be mu 1 or mu 0, depending on what yi is. And one way to interpret this is that you can expand the sum into two cases. One case is y equals 0: those are the i's that are in the set U0, and there you have (xi minus mu 0) times (xi minus mu 0) transpose. And then you look at those cases where yi is equal to 1; there this mu yi becomes mu 1, and you get the analogous term. So this makes it a little easier to interpret. Because if you compare this, this is kind of the covariance, but the covariance evaluated on the empirical data, on the data set you have. Empirical is a word used to stress that you are treating the data as a sample; I kept using that, but we don't have to use that word. So this is the covariance of the xi's for those xi's in the set U0, the covariance of the negative examples, and this is the covariance of the positive examples. And it turns out that the average of them, in this sense, is the best guess for sigma; it's the sigma that gives you the maximum likelihood. So I've got all of these parameters so far, right?
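Pulling those three pieces together, a minimal sketch of the closed-form MLE just stated (my code, not the official homework solution; x is an n-by-d array, y a length-n array of 0/1 labels):

    import numpy as np

    def fit_gda(x, y):
        n = len(y)
        phi = np.mean(y == 1)                       # fraction of positive examples
        mu0 = x[y == 0].mean(axis=0)                # average of negative examples
        mu1 = x[y == 1].mean(axis=0)                # average of positive examples
        mus = np.where(y[:, None] == 1, mu1, mu0)   # mu_{y_i} for each example
        diff = x - mus
        Sigma = diff.T @ diff / n                   # (1/n) sum_i (x_i - mu_{y_i})(x_i - mu_{y_i})^T
        return phi, mu0, mu1, Sigma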
And in the homework, you're going to be asked to prove all of this is true. But suppose we already got all of these parameters: we can compute them numerically by plugging into these formulas, because you just plug in all the xi's. You have all the data, you plug in, you get all the parameters. So that's the so-called learning process: you learn the parameters. And now, the next question is, how do you make predictions on a new example? You got the parameters; how do you make predictions? [INAUDIBLE] if you assume sigma is different for y equals 0 and y equals 1, do you think the average of covariances [INAUDIBLE]? So if you assume they are different, I don't think the formula will be this. And actually, if they are different, you don't even have an analytical form for the MLE solution; you cannot solve it analytically. So here, it's kind of like, because we are making all of these simplifying assumptions, you can solve for the maximizer of the likelihood. But it's not always the case that you can solve it analytically, and when the sigmas are different for the two sub-populations, you don't have that analytical solution. So now we are talking about prediction. Given x, you want to output some y. You want to know whether it is a benign cancer or not; that's the final goal. And the way we do it is that you say: I'm going to output the most likely one. So I'm going to output the argmax over y of p of y given x and the parameters phi, mu 0, mu 1, and sigma. And note here that these are the solutions of the MLE; these are not arbitrary parameters. So in some sense, you could even say I'm abusing notation a little bit, just for simplicity: this phi, mu 0, mu 1, sigma are the solutions computed from those formulas. Go ahead. [INAUDIBLE] In where? In which case? In here? [INAUDIBLE] This is a matrix. This is a vector. Mu 0 is a vector. Mu 0 is the mean of the Gaussian; it's a d-dimensional vector. So did I say something different here? I guess I erased it, right? So you assume x given y equals 0 is from a Gaussian with mean mu 0 and covariance sigma. So mu 0 is a d-dimensional vector, and sigma is a matrix. [INAUDIBLE] No, that's phi. Phi is a scalar; it's the probability that y is equal to 1. And mu 0 is the mean of x given y equals 0, and mu 1 is the mean of x given y equals 1. [INAUDIBLE] Yep. OK. Cool. Back to here. So this is my methodology: I'm going to take the MLE. So how do I compute this? It turns out that, of course, I have to use the Bayes rule to get y given x, because I only know x given y and I only know p of y; I don't know what y given x is. So one thing is I have to use Bayes' rule. So let me do the Bayes' rule for you, and it's actually simpler than you may think. Because here you are maximizing over y, right? You are trying to output which y is more likely, whether it's more likely to be a benign cancer or not. So this maximization problem has just two choices; we are just maximizing over 2 possible choices. So you are just taking the argmax of the two quantities, the two scalars, and you just care about which one is bigger. And it turns out that these two scalars sum to 1, because given x, y can only be 0 or 1. So, just for the sake of simplicity, suppose you call this A and you call this B. Then A plus B is equal to 1.
And you care about which one is bigger, whether A is bigger or B is bigger. So if you have A plus B equals 1 and you are taking the max of A and B, what does this really mean? It really means that you are asking whether A is bigger than 0.5. Because you're going to choose A if A is bigger than 0.5: if A is bigger than 0.5, that means B is less than 0.5, so that's why you choose A. And you choose B if A is less than 0.5, because that implies B is bigger than 0.5. So basically, the question is that you just care about whether A is bigger than 0.5 or not. So going back from this abstract version: it really just means that this argmax is equal to 1 (y is equal to 1) if the probability of y equals 1 given x and the parameters is bigger than 0.5 (sorry, limited space here), and it's 0 if the probability of y equals 1 given x and the parameters is less than 0.5. Which also makes sense, because basically this is saying that if the probability of y equals 1 is larger, then you choose 1; otherwise, you choose 0. That's it. I just mathematically derived that for you. And if you look at this figure, what's the final decision? What's the final boundary between these two cases? It will be the family of x such that p of y equals 1 given x is equal to exactly 0.5. So if you define this family of x's, the x's where the probability of y given x is exactly 0.5, this is called the decision boundary. On one side of the boundary, y equals 1 is more likely; on the other side of the boundary, y equals 0 is more likely. And on the boundary itself, you can break the tie arbitrarily, or just output randomly. It's very unlikely that your point will be exactly on the boundary, so it doesn't matter that much. So maybe let me just quickly, because we only have two minutes left, say what this decision boundary is for Gaussian discriminant analysis. Because what I've been doing here is pretty general, in some sense: I didn't really use what exactly the parameters are. And if you really want to know what this p of y given x is, you have to plug in the parameters; you have to use the model, right? For Gaussian discriminant analysis, if you plug in, what you do is the following. You want p of y equals 1 given x; this is the thing you really care about. So you use Bayes' rule. You say that this is equal to p of x given y equals 1 (which only depends on mu 1 and sigma), times p of y equals 1 given phi, divided by p of x, the probability of x. This is the Bayes' rule that we kind of alluded to at the beginning of the lecture. And then you do a lot of calculation, which I think is homework, and what you'll find out is that this actually has a relatively simple form. The form looks like: p of y equals 1 given x equals 1 over (1 plus exp of minus (theta transpose x plus theta 0))), where theta is in R^d and theta 0 is a scalar. They are functions of the parameters; let me just simply say they depend on phi, mu 0, mu 1, and sigma. So basically, eventually, you get all of these parameters, then you use them to compute theta and theta 0, and then you get y given x.
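As a sketch of those last two steps (mine, with assumed array shapes): the first function applies Bayes' rule directly; the second computes theta and theta 0 in the closed form that the homework derives, which is standard for shared-covariance GDA.

    import numpy as np
    from scipy.stats import multivariate_normal

    def predict_proba(x, phi, mu0, mu1, Sigma):
        # p(y=1|x) = p(x|y=1) p(y=1) / [ p(x|y=1) p(y=1) + p(x|y=0) p(y=0) ]
        p1 = multivariate_normal(mu1, Sigma).pdf(x) * phi
        p0 = multivariate_normal(mu0, Sigma).pdf(x) * (1 - phi)
        return p1 / (p0 + p1)

    def gda_theta(phi, mu0, mu1, Sigma):
        # p(y=1|x) = 1 / (1 + exp(-(theta^T x + theta0)))
        Sinv = np.linalg.inv(Sigma)
        theta = Sinv @ (mu1 - mu0)
        theta0 = 0.5 * (mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1) + np.log(phi / (1 - phi))
        return theta, theta0

Predicting y = 1 exactly when theta transpose x plus theta 0 is positive then matches thresholding the posterior at 0.5.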
And that's your probability. And then you can compute the decision boundary. Let me also do that real quick; sorry, we're running a little bit late. So what's the decision boundary? The decision boundary is where this is equal to 0.5. And this is equal to 0.5 exactly when the exponential is equal to 1, because then you have 1 over 2, which is 0.5. So when the exponential is equal to 1, this means theta transpose x-- I think, sorry, let me see, I think I need a parenthesis here, but since I'm being abstract, it doesn't matter that much-- it means theta transpose x plus theta 0 is equal to 0. And you will see that p of y equals 1 given x being larger than 0.5 is the same as theta transpose x plus theta 0 being larger than 0. So basically, you have a linear function of x. That's our decision boundary. That's why I keep drawing a line here. From the principle alone, you never know why this is a line, right? The principle says that maybe some formula of x separates these. But the derivation tells us that, at least for this case, the decision boundary is a linear function of x. [INAUDIBLE] vector [INAUDIBLE]? So I think I'm going to have this. Sorry, this is a typo here; I should have the parentheses. Yeah. I know, the signs are not that important. I think-- yeah, we're out of time. Maybe the best way is you can come to me to ask your questions; I can stay here. Is that the best way? Maybe. Yeah. OK. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Supervised_learning_setup_LMS_I_2022_I_Lecture_2.txt | So hello. Welcome to 229. We're starting a block of three lectures where I get the privilege of spending some time with you and walking you through the building blocks and basics. Before I get into the plan for those three lectures, I want to make sure we understand a couple of logistics. I posted something on Ed that explains why I'm setting up lecture the way I am. You are not obligated to read it, but if you're interested, go ahead; I'm super happy to take feedback and discuss any of that. One of the things that I liked about the pandemic was that more people were asking questions during class, and I think part of that was because people were using the anonymous feature on Zoom quite a bit. I wish we still had that; we don't in this class, for various reasons. So what we're going to do instead is use this Ed thread that I just set up that says lecture 2. Feel free to fire away questions on there. I may not take all of them (I reserve the right to skip some), TAs may jump in and answer a few, and I'll try to follow up on anything that's there. But it's really helpful to me when you ask questions, and I'm happy to talk about whatever you want, really. Relevant to the class is helpful, but pretty much whatever you want. Second thing: there are a couple of downloads that I put up before my lectures. I put up two things. One is a handwritten note of what I'm going to talk about, the same notes that I use, modified a little bit, and also a template in case you want to follow along. Again, you don't need any of this stuff. You can just sit, watch it on video, watch it here, ask questions, do whatever you want. It's just so that you have the material, and so that things like the data I want to show you look real; I can cut and paste it in, and you can have it in front of you while I go through it, OK? All right, so that's the logistics. I'm going to try and use the iPad. I like the whiteboard feel, and this is a good compromise because it slows me down. If I get excited, I'll start talking all kinds of nonsense, so this will focus me a little bit more on the class, and you'll see how long I last, all right? So what we're going to do in these first three lectures of the class is build up increasingly sophisticated machine learning models. And what you're going to see is that they are very, very similar to a model that you probably already know and love, which is linear regression. If you don't know linear regression, don't worry. Today's lecture is effectively going to be linear regression with slightly fancier notation and some little bits around the algorithm, but it's basically just fitting a line, OK? It's really, hopefully, going to be something that you've seen and can grab on to. And then what we'll do in the next lecture is generalize from regression, the traditional fitting of a line, to classification, and that'll have a couple of twists. We'll choose our notation a little bit carefully, and the way we look at classification (we'll talk about what classification really is) allows us to handle a much larger class of models, which are called the exponential family of models.
And they're going to kind of rear their head throughout the course. So we're going to see a precise definition that allows us to take a huge number of statistical models and treat them in one way, so we don't have to understand the details of every little model. We have an abstraction of how to find a model's parameters and how to do inference on it, that is, get a prediction out of it, and kind of understand it and understand these algorithms. I'll try to highlight for you as we go which of these pieces actually carry over to what I would call modern, industrial machine learning. Feel free to ask questions. Effectively, the way we solve these underlying optimization problems is exactly the way we run everything from how images are detected to how search works in various corners of it, to natural language processing, to translation. Weirdly enough, this abstraction carries over for all of that, and the underlying workhorse algorithm, which we'll see, is called stochastic gradient descent. So we'll introduce it in that absolutely simplest setting, OK? And so that's the idea: we're going to build a parallel structure for the next three lectures, so linear regression, classification, and then this generalized exponential family, and they will have a very parallel structure. If you go back to the notes, you'll be able to pull out: oh, this is the solving part, this is the model part, and so on. All right, then Tengyu takes over and teaches you a bunch of awesome stuff: neural nets, kernels, all the rest. Then I come back and teach you unsupervised learning, and there again there's a different structure, but it's very, very similar, and graphical models and the rest make an appearance there, OK? So today our plan is to first get through some very basic definitions. We'll be a little bit pedantic there, but that doesn't mean you shouldn't ask questions. If you don't understand something, you should ask, and it means I haven't done my job. So just fire off a question in any form you like. Then we're going to talk about linear regression, which, as I said, is fitting a line, except we're eventually fitting high-dimensional lines, so we're going to want to abstract that away. We'll talk about batch and stochastic gradient descent, which are two algorithms in machine learning. As Tengyu mentioned, we're not great with terminology: this algorithm was called incremental gradient descent in the '60s; it's been around forever. And incremental gradient methods, actually, it's not even formally a descent method; doesn't matter. The point is these are old things that people have been using for a long time, and weirdly enough, it's what we use every day. As I said, this is the workhorse algorithm that you're going to see. And then I'll very briefly cover the normal equations, because I think they occur on your homeworks and also give you some practice with vector derivatives. You do need to know the vector-derivative stuff to make your life easier in this class; you'll occasionally have to compute a gradient or a derivative, and this is a place where you know what the right answer is, so when you compute these derivatives, it's an easy place to check yourself. But I wouldn't say the normal equations are the most important thing you'll learn in this class. It's just solid; you should know what they are. It's not hard.
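Since stochastic gradient descent is that workhorse, here's a hedged preview sketch of it in its absolutely simplest setting, fitting a line one example at a time. Everything here (the made-up data, the learning rate) is my own illustration; the real derivation comes later in the lecture.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 100)
    y = 2.0 + 3.0 * x + rng.normal(0, 1, 100)    # noisy line with intercept 2, slope 3

    theta0, theta1, lr = 0.0, 0.0, 0.01
    for _ in range(50):                          # passes over the data
        for i in rng.permutation(100):           # one example at a time
            err = (theta0 + theta1 * x[i]) - y[i]
            theta0 -= lr * err                   # gradient of 0.5*err^2 w.r.t. theta0
            theta1 -= lr * err * x[i]            # gradient of 0.5*err^2 w.r.t. theta1
    print(theta0, theta1)                        # should approach 2 and 3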
OK, all right, great, so let's talk about supervised learning. This next section, as I mentioned, is going to be all supervised learning, and it'll all follow kind of the same general schema, right? What I mean is, we're going to have what we call a prediction function. Basically, all that's going to be is a function h (we'll use this notation consistently) that goes from some set X to some set Y, OK? Before defining this formally, let me just give you a couple of examples. So one idea is that X could be some set of images. So we could look at a bunch of images, and we could ask: does it contain a cat? That was actually a very important machine learning problem at one point in time; people still work on it. Or: what's the object in this image? That would be a prediction, right? Where your Y's here would be a set of labels that say things like cat, dog, things like that, OK? It could also be text. So we could look at text here, and we could ask questions that maybe we arguably should do better on in machine learning, like: is it hate speech? So these X's here are all examples of data types that we want to work on, and these are all labels, or Y's, that we're talking about, OK? Now, as Tengyu showed in his lecture, we'll look at house data. Historically, house data has been one of the most common machine learning and statistical tasks. It's in every stats 101 course, so you may have seen it before; I kind of hope you have. And as we look through it, I'm going to point out the real data that you can use to try this out in a competition like Kaggle. There's a Kaggle competition where you can download house prices from Ames, Iowa and try to guess how much they should sell for, things like that. People actually make money on that, by the way. Not everybody; it's sometimes hard, if you follow the news, right? Zillow tried to estimate houses and flip them, and they lost a bunch of money. Blackstone, if you care about private equity, managed to make money doing it: they bought houses, and they were able to predict how much they were going to sell them for. So trivial as it seems, these are actually problems that people care about. OK. Anyway, so we need an abstraction: we have this X and we have this Y. We need something else to make this a supervised problem, and we talked about it yesterday: we're given a training set, OK? So what is the training set? Well, formally, it's just going to be a set of pairs. This is just introducing notation: you have (x1, y1), comma, all the way to (xn, yn). Now, xi here is going to live in X. It's some encoding of an image. Maybe it's the bits that are in the image; that would be a reasonable encoding. Maybe it's the RGB values. If it's text, maybe it's the ASCII or Unicode characters that are in there. It's some bag of bits, OK? Now, we're later going to abstract this away and almost always work in a vector space. We'll talk about where those vector spaces come from, but that's kind of where the data actually lives. And yi is going to live in some set, and those are going to be our labels. All right, so now our job, given that information, is to find a good h, OK? Often, we call it h because it's a hypothesis. All right. Now, that notion of good is going to occupy a fair amount of what we worry about over the next couple of lectures.
What does it mean to be good, right? In some intuitive sense, because I have these examples of x's and y's, one reasonable thing I should expect is that I get them right more often than random chance, right? That's kind of a very basic idea of what good would be. You show me an image, it has a cat in it, I get most of the cats right. Now, you've used enough machine learning to know we don't get it right all the time, right? And it's still useful. So we'll have statistical notions; we'll try to get it right kind of on average. Now, more advanced things (I just have recency bias because Tatsu was talking about it in the class before, on this board): you could also worry about how well you do on some groups versus other groups. Some groups you know you're predicting really well on, but other groups you're not predicting as well on. You could worry about that and say: what I care about is how well my prediction does on each one of these predefined groups, even the worst of them. So you can have multiple notions of good. We're going to stick with the simplest and most basic, which is: how accurate am I at the task? But this mathematical framework can accommodate all of those. When you actually write it down, the tweaks that I just mentioned, to come up with those different, what are called loss functions, are really quite straightforward mathematically. They go through the same machinery. So all I want you to take away from this is: we have a training set; that's what's provided to us. These yi's are the supervision; they're in some set. Our goal will be to find a good h among all the possible functions. And by the way, the class of functions from one space to another is enormous, right? So we're going to have to restrict that in some way. And that's kind of the setup for supervised learning, OK? All right. Now, this we will often refer to as the training set, or the training data. And what we're really interested in, by the way, which is probably a little bit counterintuitive the first time you hear it, is that as machine learners we're not doing strictly what's called interpolation. We're not just trying to predict back on the x and y pairs that we have; we're going to worry about how well we're going to do on a new x and a new y. So why does that make sense? Imagine someone shows up with an image. Odds are they just took it with their phone, right? My phone is just littered with pictures of my daughters. So if I take a new picture of my daughter, probably the label should be the same as the last 1,000 pictures I took, but it's going to look a little different, right? So when I show that picture, I don't care how well I did on the last picture that I took of her. I care how well I do on this picture, right, on those new x and y pairs. And that's a little bit weird. And that means that implicitly, what we're going to assume here is that these x's and y's should be thought of as drawn from a large population of images that are out there. We're sampling some piece of it, and we want to do well on the images that are going to come in the future. That's why we think about it as a prediction. So it may not be great to just return the label of every x and y we've ever seen, right? We have to in some way generalize (that's the technical term) to those new images, OK? All right, so the reason we call this a prediction is we care about new x's that are not in our training set.
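As one hedged sketch of those notions of good (my illustration, not the lecture's formal definition): plain accuracy versus the accuracy on each predefined group.

    import numpy as np

    def accuracy(y_pred, y_true):
        return np.mean(y_pred == y_true)             # fraction we get right

    def worst_group_accuracy(y_pred, y_true, groups):
        # groups[i] says which predefined group example i belongs to
        return min(accuracy(y_pred[groups == g], y_true[groups == g])
                   for g in np.unique(groups))

That aside, the key point stands: what we ultimately care about is performance on new x's.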
Right, now, if you look at that, and you're mathematically minded, you're like, how the heck do you say anything about that? And hopefully you got a clue there. If it doesn't make sense yet, don't worry. We're going to make some assumption like: we randomly sampled from all of the images, and how well do I do on another randomly chosen image? OK? That's what we're going to do. In some way, the set you train on had better be like the set that you evaluate on, that you make your predictions on, or you're out of luck. If you train your model on pictures of my daughter and ask it about cars, I don't know how well it's going to do, right? So there's clearly some link here. Now, weirdly enough, although I say that, one of the big trends in machine learning that's going on right now -- and in fact the course that I co-taught with Tatsu and Percy last quarter was about these large models -- is that we just train models to predict kind of everything that's on the web. And they seem to do pretty well on things. So I just want to highlight there's a really strange notion of good. You spend your whole life trying to think about what good is if you're a machine learner. OK, a couple more things. As I said, I'm just going to go off on tangents if no one stops me. All right, so if y is discrete -- this is just terminology -- so it's a discrete space, we think about this as classification. OK, that's the terminology. The simplest version is yes or no. Does it contain a cat, yes or no: binary classification. You could also have a bunch of different classes. Is it a car, a plane, a truck? What model of car is it? Those are classifications. They are enumerated sets. The other thing, which you're probably familiar with from calculus, and which we'll talk a little bit about today, is when y is continuous. And this is called regression. OK. All right, so this is an example of something that's discrete, this cat. And the house price, this is going to be an example of regression. And that's what we're going to look at today. In lecture three, we switch, and we start to look at classification, which has some subtle differences. OK, awesome. All right, let's look at some data. Any questions about the setup or kind of higher-level questions about what goes on here? All right, sounds good. OK, so let's look at some real data here. I'll try and get it all on the screen. So I'm going to look at this house price data. As I mentioned, this is the Ames data set, which follows a very famous data set, just for historical reasons, of Boston house prices that you can go look at and download. You can download it in one line into Pandas if you want; happy to put information online about how to do that. This is real data of real houses in Ames. And so what I'm showing here is these are their real IDs. I just randomly selected some to kind of make the picture pretty, to be honest. And then here's their sale price, right? So this is their actual sale price in the data. And this is their lot area. This is kind of like some notion of square feet that's actually present. This data set, I think, has something like 93 columns inside of it. I've just selected a small set of them. We'll come back to that in a second. Now, one of the things that I did here -- and the first thing you should do when you're encountering a new set of data, and I cannot emphasize this enough, is look at it. The number of times that people, especially engineers in industry, take their data and start running fancy stuff on it, and I'm like, well, did you look?
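Just so you can try this yourself, here is a minimal sketch of that "one line into Pandas" idea. The file name is hypothetical, and I'm assuming the column names of the common public CSV release of the Ames data (SalePrice, LotArea, BedroomAbvGr); adjust to whatever copy you download:

```python
import pandas as pd

# Hypothetical file name -- the Ames housing data is available from
# several public sources (e.g., OpenML or Kaggle) as a CSV.
df = pd.read_csv("ames_housing.csv")

# First thing: actually look at the data before modeling anything.
print(df[["SalePrice", "LotArea", "BedroomAbvGr"]].head())
print(df.shape)  # many rows, and on the order of 80-90 columns
```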
I still remember when I was running a machine learning team at an unnamed large company. And they were like, why are you sitting in the cafe just labeling data, just looking at data sets for days? It's like, I don't know what's going on. I want to figure out what people are actually doing on this data set, and it's really important, OK? So when you're doing your projects, first plot it. So here's a plot, right: x-axis square feet, y-axis price. And clearly, there's some general upward trend here. We're going to be more precise about that in the next slide, right? You get bigger houses, they cost more. Maybe, as you can think about it, that's not quite true. If it's in a really desirable neighborhood, it costs more, and if it's in a less desirable neighborhood, maybe it costs less. So there are clearly other factors; those are going to be called features in a minute. But this is our first model, OK? So let's look at one other feature. So you can also look at the number of bedrooms, right? So you see here a plot. These are categorical values; that's why I put them in there. I mean, they're kind of discrete in some way, but you can still treat them as numbers, so that's fine. And you see there's some spread among three bedrooms and among four bedrooms, and the price is the y-axis, right? OK, awesome. All right, so what would we want here? Going back up for a second, what do we want, actually? We want to get a function. What does our hypothesis go from? It goes from lot area, and it predicts price. That's just notation, OK? This is what we're after. So you show me this data, and my goal is to produce some h, OK? Now, as I talked about, there are lots of functions that can take in lot areas and return sale prices. It could scramble them. It could do whatever it wanted. It could go look things up from an oracle, whatever it wanted to do. There are tons and tons of functions. We're going to look at a simple restricted class of functions in just a second. But I just want to put that in your head. This is actually a pretty hard problem. So we need some representation for h, OK? So how do we represent that h? Now, we're going to look at a class of models which is called linear, although if you're a stickler, you'll realize right away that they're affine. I'll explain why I allow myself to cheat like that in a second. OK, so here's a model that we could use: h(x) = theta_0 + theta_1 x_1. So the idea here is you give me the variable, right, x_1, which in this case would be the square footage of whatever you have. And then I will multiply it by some theta. And this theta is going to be a weight, we'll call it, or a parameter of the model. And this is how I'm going to form my regression; it looks like a line, right? So far, so good, right? Now let me see if I can show you a line. There's a line that does it. OK? That's basically that line through the data that we just looked at, OK? Now, I want to spend one more second on how this actually maps onto this. Oops, scroll down. Sorry for the bad scrolling. Here, I'm going to go to 0. Remember, my h is going to look like h(x) = theta_0 + theta_1 x_1. Well, what does it look like, just so you make sure the picture is clear? This here is theta_0, right? It's where I am. It's the response at 0. And then this gives me the slope, right? This is the slope theta_1. And then when I go to predict, what do I do? I grab a point. Let's grab this one. I take its x value and go up to the line. And this is where I predict its price would be, right?
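A rough sketch of that picture in code, reusing the df from the loading sketch above. The theta values here are made up purely to draw an illustrative line; we fit them properly further down:

```python
import matplotlib.pyplot as plt

# Scatter of lot area (x-axis) against sale price (y-axis).
plt.scatter(df["LotArea"], df["SalePrice"], s=10)

# A candidate line h(x) = theta0 + theta1 * x1, with made-up parameters.
theta0, theta1 = 100_000.0, 10.0
xs = df["LotArea"].sort_values()
plt.plot(xs, theta0 + theta1 * xs, color="red")

plt.xlabel("Lot area (sq ft)")
plt.ylabel("Sale price ($)")
plt.show()
```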
This is the price of this one. Does that make sense? All right, awesome. OK, so this looks like a relatively simple model. But if you look at it at this scale, not so bad, honestly, right? There's some kind of linear trend there. There are some errors, or what we call residuals. In a second we'll try and minimize those errors. But this is like our first predictive model, OK? And as I said, it's something that you're hopefully quite familiar with, just in fancier notation for the moment. All right, awesome. So now, sorry for the skipping, I'm going to go and say, OK, how do we generalize this, right? So imagine we had our data set. We had x_1, x_2, and so on. And we have a bunch of features. And I'm going to use my features from my notes, but hopefully this doesn't cause you any panic. I have size. I have bedrooms. We have a lot size. And as I mentioned, in the actual real data set, there are like 80, 90 of these things, and I have price. And remember, price is my target. This is my y. And these are my x's. So I'm just going to put numbers here. Don't worry about them. I don't know why I wrote these in my notes, but these are the ones I used, just for the sake of consistency. So write this: 45k, 30k, 400, and so on. The thing that I care about is that this is my notation for the first data point and the second data point. And this is x_1 of example 1. This is x_1 of example 2. This is the second feature, OK? All right, now, I called this a linear model, right? But if you're a stickler, and you took a bunch of courses, you're like, no, that's an affine model -- you have this theta_0. The way that we get around that is we're going to assume that x_0 for every example is identically 1, OK? So that's just a convention. Don't stub your toe on it: x_0 of example i equals 1 for every i. And I claim -- you should convince yourself for one second -- that means that what is linear in this new set of features is my old affine model, right? I'm just putting a constant 1 in as a feature, OK? All right, that allows me to simplify my notation, OK? So what's the class of models that I'm looking at here? Well, they're linear models, again with that terminology. And they're going to look like h(x) = theta_0 x_0 + theta_1 x_1 + ... + theta_d x_d, where x_0 we know is 1. And this equals the sum over j from 0 to d of theta_j x_j, all right. And remember, I'm just going to write it again: x_0 = 1. And NB means note well. All right, now this allows me -- now I have a very high dimensional problem. Now, high dimensions don't work like low dimensions. I won't go into a whole thing about it. But high dimensions are very fun and interesting spaces. You can build really interesting machine learning models by taking your data, doing what's called embedding it, and then training a linear model on top. And that actually, in some areas, is state of the art of what we know how to do. So those models have potentially hundreds of features under the covers. For us, these features right now are going to be all human interpretable. They're going to come from the table. So when you give me the row x_1, I fill in this value here with 2104. I fill in the x_2 value, and so on as I go. So I just fill in the values as I go. That's how I form my prediction. A little bit more notation. All right, now, if you don't remember, I'm just going to introduce vector notation here. These are column vectors. They're going to look like this.
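Here is a tiny sketch of that hypothesis with the x_0 = 1 convention baked in. The feature values are illustrative only, not from the real data:

```python
import numpy as np

def h(theta, x):
    """Linear hypothesis h_theta(x) = sum_j theta_j * x_j.

    By convention x[0] == 1, so theta[0] plays the role of the intercept
    and the 'affine' model becomes linear in the augmented features.
    """
    return float(np.dot(theta, x))

# Example with two real features (values are illustrative only):
x = np.array([1.0, 2104.0, 3.0])            # [x0 = 1, size, bedrooms]
theta = np.array([50_000.0, 100.0, 10_000.0])
print(h(theta, x))
```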
And this is just going to save me time and space. OK, x of example 1 is going to be a vector -- oops, sorry, sorry about that. I wanted to start at 0: x_0 of example 1, x_1 of example 1, and so on. And remember, this first entry is 1, as we've said many times, and this next one is whatever the value is up there, 2104, OK? In general, this is going to be the size feature, the bedrooms feature, and so on. Clear enough, right? These are the parameters. And these are the features. Right, so why be so pedantic about this piece? It's because we're going to use this in several different guises. These parameters are going to mean different things as we change the hypothesis function over time. And we just want to make sure the mapping is clear. So just make sure the mapping is super crystal clear in your head of how I take a data point that looks like this and map it into a feature vector that looks like that. That's all that I care that you get out of this. And then we have some different vectors. And yi is going to be the price in our example, the house's price. Now, recall, we didn't pick this notation by accident. This was a training example. This pair, (xi, yi), is a training example, all right? This is the i-th training example, just the i-th one in the set. OK. So far so good? Now, I'm going to create a matrix here, capital X, that's going to have one row for every example. There are n of those characters in my notation. And so where does this matrix live? Well, there are n rows, and I recall, because of my convention that I added an extra dimension, which I always made 1, there are d plus 1 columns. And I'm just highlighting this and being pedantic because I don't want it to bite you when you wonder, why do they have d plus 1? Where did it come from? It's the 1. And as someone who has taught this course many times, someone's going to get bitten by it. I'll say it many times, OK? It's uncomfortable when it happens. So now I can think about my training data as a matrix. Awesome. OK, so now we have a bunch of notation. I have basically bored you to death with 100 different ways to write down your data set. But I haven't answered the question that we actually cared about, which is, how do I find something that's good, right? How do I find an example of something that's good? All right, so now let's look at here. So why do we think this line is good? You remember this from how you fit it. You think it's good because it makes small errors, right? If the points were all lying right on top of the line, the distance from any point to the line would be 0. And we think the line was pretty good if we could kind of minimize those errors, OK? And this is the error. This is the residual. Now, for computational reasons and historical reasons, we'll look at the squares of those residuals in just a second. Don't worry too much about that. You can do everything I'm telling you with the absolute value of these things, right? You don't want to use the signed value of them, because what does a negative error mean, right? You should pay a penalty, is the intuition, whenever you make an error, OK? All right, so let's look at this. All right, so we're going to look at our h. And I'm now going to write it sub theta: h_theta(x) = sum over j from 0 to d of theta_j x_j, OK? So now, picking a good model, I can actually make some sense of. What do I want? Well, I want somehow that h_theta(x) is approximately equal to the y when x and y are paired, right? If x and y come from a new example, you show me a new image.
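A sketch of building that n by (d + 1) matrix, reusing the df from the earlier sketches (column names are again the assumed ones from the public CSV):

```python
import numpy as np

# Raw (n, d) feature array and (n,) target, pulled from the DataFrame.
features = df[["LotArea", "BedroomAbvGr"]].to_numpy(dtype=float)
y = df["SalePrice"].to_numpy(dtype=float)

n, d = features.shape
# Prepend the all-ones column: X lives in R^{n x (d + 1)}.
X = np.hstack([np.ones((n, 1)), features])
print(X.shape)  # (n, d + 1) -- the +1 is the constant x_0 = 1 feature
```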
It has a cat or not; that label may be opaque to me, but it exists. I want my prediction to be close to that y on average. Or for house prices, you give me a new house, I predict its price as close as possible. I may not get the exact dollar, but I should be penalized a lot if I'm off by $1 million, maybe, but not if I'm off by $10, right? That's kind of the intuition here, right? So how do I write that down? The idea is I'm going to look at this function, J, which we're going to come back to a couple of different times. And that one half is just normalization. I'm going to look at my data. And I'm going to say: take my prediction on the i-th element, subtract yi, and square it. OK? Now, this is our first example of a cost function. And I wrote it in a really weird way. But I want to come back to why I'm doing it this way. This is also called least squares. You've probably seen this a bunch of times, and that's OK. And if not, don't worry. We'll go through it. There's nothing mysterious, OK? So let's unpack it. So this thing here is the prediction. It says, you give me a point xi, what's my prediction on xi? Some y. And the cost says it should be close to whatever the training set said yi was. Remember what we're given? We're given xi and yi pairs that are together. Image, cat; house, all of its description, and the price. We should be close, OK? We're penalized more for errors that are far away. I could give you a big song and dance about why this is appropriate. And indeed, there are lots of statistical song and dances about it. But really, we're doing it because it's easy to compute everything that I'm going to do. You just want something that's kind of sensible, right? You should be penalized more the more wrong your guess is, right, roughly speaking, in this example. Now, what does it mean to pick a good model? Well, our model is now determined solely by those theta j's, right? If we knew the theta j's, our model would be completely determined. That was the trick I pulled on you when I said, oh, how are we going to represent our hypothesis? We're going to represent it in this class. That means now we've reduced from all the crazy functions that you could have ever dreamed up, any computer program that you could ever have written that was functional, to the class of functions that are represented by these weights. The wild thing is there's a lot of functions you can represent that way, OK? And we'll see that over the course of the class, especially when you start to get to really high dimensions. OK, cool. So which one am I going to pick? Yeah, please. So [INAUDIBLE] this least squares cost function, my understanding is that for the cost function, what is important to us is the minimizer. So why do we need [INAUDIBLE] the one half [INAUDIBLE]? Awesome question. Yeah, very advanced question. So the question is, hey, you wrote this one half there. It seems unnecessary and potentially confusing. Why would you pay the cost to do it? And the reason is, when I take the derivative in a minute, it will cancel out and make my life easier. But there's no-- the other point that you made is, and I love the way you said it, this is exactly right. We don't care-- I wouldn't say that we care only about the gradient, but we only care about the minimizer of the loss function. So if your loss function costs 10 or costs 100, it doesn't matter. What you care about is what theta minimizes it, and you got that concept exactly right. So I hope that makes sense.
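That cost, J(theta) = (1/2) * sum_i (h_theta(x_i) - y_i)^2, is a one-liner given the X and y built above. A minimal sketch:

```python
def J(theta, X, y):
    """Least-squares cost: (1/2) * sum_i (h_theta(x_i) - y_i)^2."""
    residuals = X @ theta - y   # vector of signed errors, one per example
    return 0.5 * np.sum(residuals ** 2)
```

Note the one half sits out front exactly so the 2 from differentiating the square cancels later; it changes the cost's value but not its minimizer.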
When we're setting up the cost function, in some ways, sometimes we give it an interpretation almost to debug it, to understand what it's doing. But really, all we care about is: what is the theta when we minimize over all thetas of J(theta)? This is what we're solving for, right? So we basically want to minimize this J(theta). Now, as we'll see in a second, for linear functions, we can do this. For more complicated sets of functions, it's not always clear that there even exists a minimizer that we can reasonably find, right? So there could be these wild functions that take bumps and other things. I'll draw one for you in a minute when we talk about solving it. But for linear functions, what's amazing, and why we teach the normal equations, is that you can solve for the minimizing theta in closed form in this example. Wonderful point. OK, but that's the central thing. We're going to set up these costs so that we get a model out. We've restricted the class of what we're looking at to something that's relatively small, where we can fit the parameters. Then we just have to minimize. OK, awesome. Right, this is what I mean by optimization, by the way: solving this equation. I haven't told you how we're going to solve it yet, but hopefully this is good. Now, just leading ahead a little bit, and also to kind of stall in case anyone wants to ask a question: what we're eventually going to do is we're going to replace this J with increasingly complicated functions that we're going to look at, one for classification, one for other statistical models. But we're going to do almost everything that comes after this part to all of those models. So once we kind of get it in this form, where it's like a prediction and some penalty for how poorly it's doing, we may use different cost functions, but everything that comes next we'll be able to do for all of them. That's why we set up all this kind of elaborate notation for fitting a line, right? It still, by the way, boggles my mind how much machine learning you can do by just fitting lines, just higher and higher dimensional lines. But we can talk about that some other time. OK, awesome. All right, so how are we going to solve this? Now, there are many ways to solve this. If you've taken a linear algebra course, you're like, oh, I compute the normal equations, and then I'm done -- least squares. Or in MATLAB or NumPy, you're like, oh, I do a least squares solve, or backslash, whatever you want to do. We're going to solve it in a way that sets us up for the rest of machine learning. Because machine learning will deal in functions that aren't quite as nice as linear regression quite a bit. And in fact, the trend has been: when I first got into machine learning, in antiquity, we were all about what are called convex, or bowl-shaped, functions, roughly. And we were really obsessed: were we getting the right theta, right? We're like statisticians. At large scale, can we get the right theta? Is there one individual theta? Modern machine learning doesn't care. We don't even know if we get the right answer. We don't even know how. There was a paper I was reading from DeepMind this morning that was like, oh, you should run your models longer. No one noticed, right? How do we not know when to run the models longer? We don't. That's the world we live in. So how does this work? So imagine this is our cost function. OK? Now, just as an aside, I want to say the linear least-squares function doesn't look like that. So don't think about it that way. The linear least-squares function looks nice and bowl shaped, OK?
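For the record, that closed-form route he's skipping looks like this. np.linalg.lstsq is the numerically safer version of solving the normal equations (X^T X) theta = X^T y:

```python
# Closed-form least-squares solution for the X, y built earlier.
theta_star, *_ = np.linalg.lstsq(X, y, rcond=None)

# Equivalent, but less numerically stable, explicit normal-equations form:
# theta_star = np.linalg.solve(X.T @ X, X.T @ y)
```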
The reason that's important, as I was just saying, is local minima. This is a local minimum. So is this. So is this. And roughly speaking, every local minimum is global when you're convex. If that doesn't make sense to you, don't worry about it, OK? For convex functions, we'll come back to that point later in the course. But I just want to say, don't think of this function I'm drawing here as what happens with least squares. We're just optimizing some J for right now, OK? All right, so how are we going to do it? We're going to use a very simple algorithm. We're going to start with a guess, which is going to be theta 0. How did we pick this guess? Felt good. Randomly. Set it to zero. There are entire machine learning papers, by the way -- I've even written some, which I'm not sure if I should be embarrassed or proud of -- that talk about how you initialize various different parts of the model, OK? For us, though, it won't matter for least squares and some of the other models we're studying, because we'll be able to get to the right solution. All right, so now imagine for the moment I found you an initial model. Well, it's clear from looking around -- imagine I'm the point, and I'm looking -- clearly I can go down from here, right? So the natural greedy heuristic is: compute the gradient. What does the gradient look like here? It looks like this. Oops, I can make this fancier. You see that? Good. I compute the gradient, and then I walk downhill. Sound good? All right, it tells me to go downhill from here, right? Whatever shape I'm at, this gradient will tell me what to do. Now, there are some problems, right? Just as an aside, what if I were right here? Oh, it doesn't tell me what to do. Don't worry about that. It's a local maximum. I'd be toast. But here, it tells me to go downhill. Now, once I go downhill, how far do I go? Again, feels good: I pick a value. It's called a step size. So my next value is going to look like this: theta at step t plus 1 is defined to be theta at step t, minus alpha times the gradient of J with respect to theta, evaluated at theta t. Now, my notation is a little bit weird here. Imagine it's one dimensional for the second. OK, compute the gradient, go in the opposite direction. That's all that's going on. This alpha here is called a learning rate. Embarrassingly, I think I have won awards for papers that are about learning rates. But there's no very good way to set them. So you just kind of pick a value. For deep learning, people now have all kinds of what they call adaptive optimizers, if you look in the literature, for how to set these values for you. You don't want to set it too big or too small. There is a theory about how to do it for linear things, but don't worry. For you, you just kind of pick a value. Just imagine: what could go wrong? What happens if you pick it too big? Well, then you kind of shoot off over here, right? You pick it too small, then you make little bumps like this, right? You don't make enough progress. It's not too hard to think about what should happen here. And then what happens? Well, I get a new point. This is my theta 1. And as suggestively done here, I iterate. I compute the gradient, and I bounce down, and then hopefully I get closer. Please. What's the denominator? Oh, sorry. That is just my notation for the gradient with respect to theta. This is a partial derivative with respect to theta. So imagine it's one dimensional, and I'm just setting up for the fact that I'm going to use multiple dimensions. It's literally just the gradient with respect to theta -- the derivative, in this case.
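That update rule, theta^(t+1) = theta^t - alpha * grad J(theta^t), fits in a few lines. Here's a generic sketch that works for any gradient function you hand it, with the stop-when-the-update-is-small rule he mentions later:

```python
def gradient_descent(grad, theta0, alpha=0.01, steps=1000, tol=1e-6):
    """Generic descent: theta^(t+1) = theta^t - alpha * grad(theta^t).

    `grad` is any function returning the gradient of J at theta; stop
    when the update becomes smaller than `tol`, or after `steps` steps.
    """
    theta = theta0
    for _ in range(steps):
        step = alpha * grad(theta)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta
```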
Now, what I'll do is I'll compute that partial derivative for all components 0 to d. And that gives me my high-dimensional rule. OK. Please. Is this [INAUDIBLE]? Yeah, so right now, I've just shown J as an abstract function. I haven't decomposed it as a sum. That's a great point. Let's come back in one minute to exactly what happens when we have a data point. It's going to be my next line. Other questions? Is it clear? So I did actually a fair amount of work there and tricked you, just so you're clear. I went from one dimension to d plus 1 dimensions by just changing the subindex and doing them all by themselves, so make sure that that sits OK with you, right? Please. [INAUDIBLE] gradient of J on the graph. Yeah, so how can we understand it on a graph? What do you mean by on a graph? Like on this graph in particular? Awesome. Yeah, yeah. So just imagine the one-dimensional case carries what you need. So you're in a particular basis, right? Meaning you have theta 1, theta 2. So imagine I'm standing in two-dimensional space. I can look down one axis, and then I have a one-dimensional function. Then I have a gradient there. That gives me the vector in this direction. Then imagine I turn 90 degrees, orthogonally. I look 90 degrees there. I get another one-dimensional function. I compute its gradient. Now the gradient is actually, if you look at the derivative, all those vectors put together, one after the other, in components. But that's exactly right. Yeah. So yeah, you're asking exactly the right questions. So just picture it as the tangent to the curve, if that helps you in high dimensions. If not, don't. Yeah. Cool. Wonderful questions. OK, so what do I hope that you understand? Here's some rule. You have the intuition that what it's going to do is it's going to bounce slowly downhill. OK. Now, if you start to think about high dimensions, and I think this is why the question came, it starts to get a little weird. What does it mean in high dimensions? You can imagine something that looks like a saddle, if you know a saddle. Then you're like, oh gosh, what's going to happen when I get to the top of the saddle? Clearly I can go off the sides and get a little bit smaller. Right, that would be good. Maybe it goes down and stops. But I can get stuck on the top of the saddle, too. And weirdly enough, it's called a saddle point. Don't worry. OK? Sound good? All right. We're not worrying about convergence. Right, notice this algorithm has a very clear error mode. Here, we found what looks like the global minimum. But what if we started here? We'd go bounce, bounce, bounce, and we'd find this one. Now, how do you stop this algorithm? You stop the algorithm when this update becomes too small, OK? And you set that tolerance. Maybe you set it to what's called machine precision, like 10 to the minus 6, or 10 to the minus 16. Or you set it bigger if you want a quick solution. But the point is, no matter what you do, you're going to get stuck here with a descent method. Because it's going to go downhill and get stuck here. And you're going to miss this much better solution. That won't happen for linear regression. We won't talk about why at this exact moment. We can prove it in a little bit. But for things that are bowl shaped, every local minimum is a global minimum. Then we're in good shape. That's why we cared so much about these things 10 or more years ago. We care about them occasionally now. Less than we used to. OK. All right, so let's compute some of those derivatives.
Getting back to the earlier question, which was, hey, what does this mean for a sum? OK. All right. So remember, our J had a very particular form. So we're going to compute the partial derivative with respect to theta j of J(theta). OK, so this is the derivative here. Whoops. The derivative with respect to the j-th component. OK. Now, we take the sum, i goes from 1 to n. I'm going to put the 1/2 inside, because I can. And then this derivative is linear. And we'll come back to what that means in one second. I just did a little bit of work here, not much. I just rewrote the definition of J, which is this sum. And then I took the partial derivative and I pushed it inside, because it's linear. OK. And we should know that gradients are linear. OK. Now, when I do that, I get something actually fairly intuitive. And this makes my heart sing: the partial derivative of J with respect to theta j is the sum over i of (h_theta(x_i) - y_i) times the partial derivative of h_theta(x_i) with respect to theta j. OK? I canceled the 2 with the 1/2 -- that was the cooking show preparation. And that is standard, by the way. Now look what I have here, which is kind of nice. This first thing is basically the error. But it's signed. It tells me which way I'm making a mistake, kind of too high or too low, right? That's all that thing is. This is the misprediction, or the error. OK? Then I have the derivative with respect to the underlying function class. Now, why did I bother to write it out this way? Clearly I could have skipped a step of doing this and jumped right to the end. But this is going to be general for almost all the models we care about. That's why I did this. OK? So what is it in this specific situation? Well, recall h_theta(x) was equal to theta 0 x0 plus theta 1 x1 plus theta 2 x2 plus-- Computing the derivative of this is pretty easy. It's just-- oops. The partial derivative of h_theta(x) with respect to theta j is x_j. Right? Please. The second line -- you have those scripts on the right? On the right. A superscript over x on the right? On the second line. Here? On the right. Yes. Oh, this should have a superscript. Oh, I'm so sorry. Great catch. This is at that data point -- it should be x_i. Yeah. Wonderful catch. Thank you. So to generalize -- what would h be? Could it be something like a linear equation, or some trigonometric function? Could be whatever you want. All I care about is this is the error times the derivative with respect to that underlying model. This is a very basic version of what looks like a kind of chain rule. And we're going to use that like nobody's business. So if you didn't know the chain rule before this class, you will definitely know it by the end, because we use it non-stop. But yeah, this is just setup for that. That's why it's generalizable. It's the error, which is totally generalizable for any model that has to do with prediction, times how you compute the derivative -- what's the change of the underlying model? We'll be able to generalize that. And in this case, it's just x_j. All right? So now, right, getting back to this, what is our whole rule? It looks like this: theta j at step t plus 1 is theta j at step t, minus alpha times the sum over all the data -- answering the earlier question, at this point, we're doing what's called batch gradient descent, which we'll come back to in one second -- of (h_theta(x_i) minus y_i) times x_i's j-th component. Now notice, I'm going to try and do some highlighting here. I hope this is OK for people to see. And I apologize if you're colorblind and this doesn't help you too much. But these are the same -- OK, hopefully these are distinguishable colors -- these j's. And then the i's are the other index that's going on. And these are the data points themselves. OK?
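The per-coordinate derivative he just derived, partial J / partial theta_j = sum_i (h_theta(x_i) - y_i) * x_ij, translates directly. A minimal sketch, using the X and y arrays from before:

```python
def dJ_dtheta_j(theta, X, y, j):
    """partial J / partial theta_j = sum_i (h_theta(x_i) - y_i) * x_ij."""
    errors = X @ theta - y           # signed mispredictions, one per example
    return np.sum(errors * X[:, j])  # weight each error by the j-th feature
```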
So I look at every data point, and I'm doing the j-th component of each one. Right? Now, by the magic of vector notation, here's what I can do. I just write this as: theta at step t plus 1 equals theta at step t, minus alpha times the sum over i of (h_theta(x_i) minus y_i) times the vector x_i. This doesn't change. This is a vector equation. OK. So this is basically looping over all the j indices at once. OK. If you're unfamiliar with vector notation, one of the reasons I'm doing this quickly is I will do it secondhand throughout the course. It's not deep. It doesn't require a lot of stuff. It just requires a little bit of reps. Kind of repeat on them. Please. [INAUDIBLE] Is it the same rate for every theta [INAUDIBLE]? Wonderful question. So alpha you will typically set for an iteration, right? When you take a step -- typically you can change it across steps. So one thing is, here I've said alpha does not depend on t, the iteration step. But in general, it usually does. You usually decay the learning rate over time. So that's just what's done in practice. And that's done for really good reasons. What you don't typically do is have alpha depend on the data points themselves, because then it's almost functioning like a free parameter, at least in classical machine learning. But in modern adaptive optimizers, one of which was invented by our own John Duchi and other folks -- I think that was AdaGrad -- you actually do change the alphas for every different coordinate. So people do things like that that are a little bit more sophisticated. And why they do those, I'm happy to explain offline. But right now, just think of alpha as a constant, like it's small enough that it's not going to lead you too far astray. Like, if it were too big, you'd jump too far. And maybe you could do a little bit better. But maybe not too much. In fact, this very basic rule, which is called the gradient descent rule, is actually very widely used. Very, very widely used. With just one alpha. Wonderful question. And those are the right questions to ask. Like, how does this parameter depend on what's around it? Start thinking like that as you go through the course. That's really, really helpful to understand. OK. So far, so good. So at this point, we know how to fit a line. Which doesn't feel like a huge accomplishment, maybe, but I think it's pretty cool. And we fit it in this obfuscated, general way that's going to allow us to do more models, I claim, but I'll verify that in two classes. This vector equation here is just showing you all the things that we computed. This is specific, to the earlier point, to the line, right? This gradient here is this guy. Those are the same. That's why this model popped out, OK? Awesome. And we'll come back to that in a minute. OK. Now, a topic that is practically quite important for machine learning, and it was hinted at earlier -- and I'll copy this equation -- is, what do we do in practice? So one thing that we may not like about this equation is this sum is huge. In modern machine learning, we'll often look at data sets that have millions or billions of points, right? Well, it's not uncommon to run models where every sentence that has been emitted on the web in the last 10 years is a training example. Or every token, right, every word. And it would be just enormous at that point. It'd just be a huge thing. So even doing one pass over your data is potentially too much. OK, now, that's a really extreme and crazy version of that.
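Putting the vectorized rule together gives the full batch gradient descent loop. A rough sketch, with the caveat that on raw house-price features alpha has to be tiny; in practice you would standardize features first:

```python
def batch_gradient_descent(X, y, alpha=1e-7, steps=500):
    """Batch gradient descent for least squares.

    Each step uses the full gradient: grad J(theta) = X^T (X theta - y),
    i.e., the sum over all n examples of (h(x_i) - y_i) * x_i.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ theta - y)
        theta -= alpha * grad
    return theta
```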
But you can also imagine situations where you're looking at hundreds or thousands of images, and you potentially want to look at fewer. So we'll come to how we do that in a second. Sorry, yeah? Is the superscript the index of the data set? Oh, it's t and t plus 1. These are the steps. Remember, we started at theta with superscript 0. And then we moved from 1 to 2 to 3 to 4. And so this is just the recursive rule that takes you from theta t to theta t plus 1. So theta t is just whatever current theta we're on. Exactly. So you just imagine it as a recursive way to specify: we're at a particular t, and here's how we evolve to t plus 1. Exactly right. You got it perfectly. [INAUDIBLE] Exactly right. So theta t, when we go back to here -- oops, sorry. I hope that's not dizzying. I wish there were a way to skip without making you sick. It is this vector. It's just a particular instantiation of those vectors, one for every one of the d plus 1 components. Yeah, please. [INAUDIBLE] Yeah. So we will take steps, as I said, until we converge, typically. Or we can take a fixed number of steps. I'm eliding that, because for this particular problem, I can kind of give you a rule of thumb. I can point you at a paper that tells you how to set alpha. In general for machine learning, as I was kind of very obliquely referring to, we don't actually know how to tell that we've converged. And part of the reason is, if you knew your model was this nice bowl shape, then you can actually prove that the closer you get to the optimum, the smaller your gradient is getting. And you can predict kind of how far away you're going to be -- for a nice class of functions. For nastier functions, and the ones that we're going to care about more, you can't do that. So it doesn't make sense to say that you found the right answer. And so I don't emphasize that. For these models, I can give you a beautiful story. Happy to type it up online and tell you. But in general for machine learning, honestly, we just run it till it feels good. Like, oh, the curve stopped. It stopped getting better. And there was this DeepMind paper that said, hey, for these really large 280 billion parameter models -- so their theta has 280 billion parameters in it -- they're like, we didn't run it long enough. If we had kept running it, it would have been better. And everyone who works in machine learning for long enough, in the last five years, has a situation where they forgot they were training a model. Hopefully you're not paying for it on AWS or GCP or something. And then you come back a week later, and it's doing better than you thought. And that is a very strange situation. So I don't have a great rule for this. For your projects, it will be clearer. I'm telling you the real stuff, though. Awesome. Please. Is this equation [INAUDIBLE]? So we will only use it in the forward direction, going from t to t plus 1. But you could imagine that it's reversible if you wanted. [INAUDIBLE] Oh, wonderful question. Yeah, yeah. So in the sense that if you shoot past -- let's go back here. So if you're here and you shoot past -- your step is kind of too big for the gradient, you kind of trust it too much -- then at the next iteration, the gradient will point in this direction, right? And so you'll step back. So it will actually have this ping pong. You actually want that to happen. It turns out the optimal rate -- I mean, I can bore you with this for days -- the optimal rate is actually when you're doing that skipping, for whatever reason. Yeah. But it's more intuitive for people to roll down the hill. Yeah.
Wonderful point. You got it exactly right. Please. So is it possible for the update to be 0 even if h theta of xi is not necessarily equal to yi for every i? Yeah. So it's not possible for it to be exactly 0 everywhere. But it's possible to have gradients that are not giving you any information. Yeah, wonderful question. Absolutely wonderful question. And it's because it's a linear system. Right, so it's not full rank, for the linear algebra nerds. Yeah. Wonderful question. Please. So let's say you have this function, right. But you start where theta 0 is on the other side. Would you only get the local minimum over there and not the actual-- Exactly right. Yeah. And that's what I'm saying. We used to worry about that quite a bit. Now we just say it's good. I wish I could tell you something better than that. But we'll get into why that's true. But yeah, when your function is in a good class -- and good here formally means convex and bounded in some way -- then you will provably get to the right solution. We'll talk about those conditions later. The reason I de-emphasize them now is because modern machine learning actually works on functions that look like this, not on the other class of functions. And so that's less important for students. And then you would rightly say, you told me all this stuff. I memorized all these conditions, and then I got into the workforce. I'm like, none of them worked and no one uses them. Like, yeah, that's true. And you're exactly right. And so people worry about initialization. Where do you start so that you are guaranteed to get a good model? In fact, there are a couple of awesome theory results -- I'll take one from my group, one from Tengyu's -- that said for a certain class of these nasty non-convex models, if you initialize in a particular way, you would be guaranteed to get the right answer. Actually, I'll show you one in week 11, a simple version of that. Where if you initialize cleverly, there's not a unique answer, but you'll get the right one every time. Or sorry, class 11, not week 11. Yeah. [INAUDIBLE] Yeah, people will try random initialization. The problem is the trend is for models to be really expensive. So you run huge models. So any one run could cost a couple million dollars. I was looking at Amazon's GPT-3 service. I think it costs $6 million a month to run. So do you want to try to run it multiple times? If you've got money, go ahead. But you want to try and do other tricks. People used to do a lot more random restarting. Now -- it's really sad to say this is the state, but we've evolved folksonomies. If you train these models, you know what the right parameters are and what everybody else is using. And not everyone tries and explores everything, let alone how long you tune it, what optimizers you use. We all use the same stuff. But we don't have great formal justification for it. Maybe I'm exposing too much. It's not as bad as it sounds. There actually are principles in this area. I'm just telling you about the plates that are broken, because they're more interesting to me. Yeah. [INAUDIBLE] Oh, wonderful question. We're going to come back to that. So there's a phenomenon that a lot of people know about in machine learning, which is, if I take my model and I exactly fit my training data, maybe it won't generalize well. It'll fit to some error or some noise in the data, and this is roughly overfitting. We cover that in lecture 10.
In lecture 10, at least when I taught it last, I also taught about something from modern machine learning. We realize that actually, sometimes that concern is overstated for some models. And there's a wonderful paper by Misha Belkin that said you can actually interpolate -- perfectly fit your data -- and optimally generalize, for some classes of models. So that tradeoff isn't as clear for modern models as it was for old models. Maybe I should stop telling you about this stuff. But yes, in general, overfitting is a problem. You can overfit a model and believe your training data too much. Yeah. But this area is fascinating. I can obviously rant about it for weeks. So, wonderful questions. Yeah, yeah. This is absolutely great. OK, so what do I want to tell you? So I don't want to tell you the normal equations. I thought that was pretty clear from the beginning. So you read about those. If you want, I'll type up notes. Andrew's notes are great on this point. But I do want to tell you this one little bit, with my last couple of minutes, about batch versus stochastic mini-batch. Because it actually is relevant and useful. OK. So when we last left off, we were looking at this equation. And we noticed this problem, that n was really big. And I just hopefully told you, n is really big, and so is d. The number of parameters is really big. So this is expensive. I wouldn't want to look at all of my training data before I took my first step, because probably my initial guess is not that good. That's why I'm training a model. If randomly initializing the model, which is something people try to do, gave me good predictions, I'd just use that. So obviously, I want to take as many steps as I can. So here's what I'll do. I'll use mini-batches. So what does mini-batching do? OK. I won't get too formal. But basically what I'll do is I'll select some set of indices B, let's say at random. OK, I'm being vague here about what random means. I wrote a bunch of papers about this. You can either randomly select them, or you can shuffle the order. And in conventional machine learning, we shuffle the order, for a variety of reasons. And then I pick the items in B. So the size of B is going to be much smaller than n. OK? All right. And then I update. I'm going to write it a little bit strangely -- I apologize for this notation; the notation's better in my notes, but I want to write it this way because it's easier to say: the same update as before, except the sum runs only over the i in B, with their xi, yi pairs. It makes it more clear what's going on, I hope. OK, what's going on? So I select a bunch of indices, B. And then I just compute my estimate of the gradient over them. OK. I could even pick B to be size 1 -- just pick a single point, as someone was alluding to earlier, and take a step. Now, what are the obvious tradeoffs here? On one hand, if I pick a step, that step is really fast, right, if I pick a single element. It's super fast to compute relative to looking at the entire data set. But it's going to be noisy. It's going to have low quality. I may not have enough information to step in the direction I want to go. On the other hand, if I look at the whole data set, it's going to be super accurate about what the gradient is. In fact, I'll compute it exactly, up to numerical issues. But it's super slow. Now, what people do is they tend to pick batches that are on the smaller side. Right, and you pick them as big as you can tolerate. And I won't go into the reasons for this in the underlying hardware. Happy to answer questions about it.
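A rough sketch of that mini-batch update, using the shuffle-the-order convention he describes (shuffle once per epoch, then walk through in chunks of size B):

```python
def minibatch_sgd(X, y, alpha=1e-7, batch_size=32, epochs=10, rng=None):
    """Mini-batch SGD: each step uses B << n examples.

    Per the lecture, we shuffle the example order once per epoch rather
    than sampling with replacement, then sweep through in chunks.
    """
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ theta - yb)  # noisy estimate of the full gradient
            theta -= alpha * grad
    return theta
```

Setting batch_size=1 gives the single-point variant he mentions; setting batch_size=n recovers full batch gradient descent.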
But basically, you pick batches that are kind of as many as you can get for free. Modern hardware works kind of in parallel. So you'll grab and look at a small batch, which is not much more expensive than looking at one example, OK, on a modern platform. Now, I'm using these noisy proxies. And you may think, am I still guaranteed to converge? And the answer is effectively yes, even under really, really harsh conditions. In fact, I'm very proud of something that my first PhD student and collaborators Ben Recht and Steve Wright wrote, this paper called Hogwild!, which is a very stupid name, has an exclamation point, but also got a 10 year Test of Time Award, for saying that you can basically run these things in the craziest possible ways, and they still converge -- these stochastic sampling regimes. OK, I won't go into details about that. My point is, this thing is actually fairly robust: take a bunch of noisy estimates and step with them. And in fact, almost all modern machine learning is geared towards what's called mini-batching. If you download PyTorch or JAX or TensorFlow or whatever you're using, odds are it has native support to give you mini-batches. OK. And that is basically just taking an -- oops, taking an estimate of this piece here, and using that noisy estimate. And why might that make sense? Well, imagine your data set contains a bunch of near copies. If your data set contained all copies, then you would just be reading the same example and getting no information, right? If instead you were sampling that same example, you would go potentially unboundedly faster. And if you think about what we're looking at -- when I told you images, like the images on my phone of my daughter -- there are a lot of pictures of my daughters. A lot, OK? I'm a regular dad. I take lots of pictures. So that means there's a lot of density. And so machine learning operates in these regimes where you have huge, dense, repeated amounts of data. OK? All right. So this is going to come back. We're going to see this next time. We're going to see it in particular when we start to look at various different loss functions. We're going to generalize how we do prediction to classification next time. And then to a huge class of statistical models called exponential family models. To go back to the top -- I skipped around, just to make sure you know what's here and what I skipped. We went through the basic definitions. We saw how to fit a line. We went through batch and stochastic gradient descent, how to solve the underlying model. We set up a bunch of notation. This is going to be one of the dryer classes, where I'm just writing out all the bits of notation. And we saw how to solve them. Those will all carry over to our next brand of models. The normal equations -- if you run into problems, blame me. I'm happy to take a look through them. They're relatively straightforward, and the notes are pretty good. But I'll look at Ed if you run into any problems there, and I'm happy to answer questions. With that, thank you so much for your time and attention, and I hope to see some of you on Monday.
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Feature_Model_selection_ML_Advice_I_2022_I_Lecture_11.txt

If it happens that I write in too small a font, please feel free to stop me and let me know. It's just, as I said, after a few lectures, I start to forget about it. So please remind me. OK. So I guess first let me briefly review the last lecture, just very quickly. So last lecture, we talked about these two important concepts, underfitting and overfitting. So here, our goal is to make generalization work, right? So we want to generalize to unseen examples. And last time, we talked about two possible reasons for why your test error is not good enough, right? So one possible reason is overfitting. So overfitting means that your training error is actually pretty good -- your training loss is pretty small -- but your test loss is pretty high. And we have discussed the possible reasons for why you can have overfitting. And two possible reasons are: maybe you have too complex a model -- for example, last time we discussed that if you use a 5th degree polynomial for this very, very small data set where you only have four examples, then you may overfit -- or maybe you don't have enough data. If you have more and more data -- if you have a million data points -- then a 5th degree polynomial wouldn't be a problem. And also, we discussed another reason, underfitting. So underfitting is much easier. Underfitting, in some sense, basically just means that you don't have a small enough training loss or training error. So your model is just not powerful enough, so you cannot even fit the training data you have. So in some sense, these are kind of like two complementary situations. So in this case, you probably want to make your model more expressive, and in this case, you may want to make your model less expressive or less complex. So we use these words, complex, expressive, a lot without a formal definition, right? So we say some models are more complex, some models are less complex. Typically, you can somehow feel it. So a 5th degree polynomial probably is more complex than a linear model. But actually, if you really want to have a concrete definition, it becomes a little bit tricky. So what is the right complexity measure of the model? Someone asked about that as well in the last lecture. And the answer is that there is no universal measure for what's the right complexity measure. And there are a few kind of complexity measures people often use. They all have their particular strengths, and also, there is no real formal theory to say which one is better. So these are complexity measures that can be theoretically justified in certain cases, but they are not universal. So what are the complexity measures? So I'm just listing a few, just for knowledge in some sense. So I guess the most obvious one is how many parameters there are. So if you have more parameters, then your model might be more complex. And this is very intuitive. However, the limitation here is that maybe you have a lot of parameters, but actually, the effective complexity of the model is very low -- maybe all the parameters are very, very small. Then maybe you can say, in this case, maybe the complexity is actually not as big as you thought. So to kind of deal with this kind of scaling thing -- what if all your parameters are basically zero even though you have a million parameters -- people consider norms of the parameters, right?
But this may not be perfect either. Although, norms of parameters are actually very good complexity measures for linear models. Basically, before deep learning arose, I think we were using norms as complexity measures a lot. And still, we use them in some cases. But this also has limitations. For example, one certain case would be that you have a low norm solution and you add some random noise to the model. And when you add the noise, you make the norm bigger. But actually, the noise doesn't really change the complexity, because when you add some noise and take the matrix multiplication, you average out the noise to some extent. So there are also these issues. So some of the other kind of more modern complexity measures people have considered are, for example, something like Lipschitzness -- whether your model is Lipschitz -- or maybe whether your model is smooth enough. And here, I'm using the word smooth in a relatively informal way. This could mean a bound on the second order derivative, it could mean a bound on the third derivative, something like that. If your model is not oscillating or fluctuating a lot, maybe that means it's not very complex. And there are other complexity measures for how invariant your model is with respect to, for example, certain translations -- certain invariances that you should have in a data set -- for example, whether your model is invariant to data augmentation. But in general, there is no very established theory on what is exactly the right complexity measure. And sometimes, it also depends on the data, as we will see today. So sometimes -- for example, let's talk about norms. So different norms -- what are the norms? Are you talking about the L1 norm, the L2 norm? Sometimes the L2 norm is the right complexity measure for a certain type of data, and sometimes the L1 norm is the right complexity measure for a certain type of data. So basically, I don't think there's anything super concrete here; I don't have just a fixed suggestion for you to consider. So in some sense, you should just keep this in mind and consider them when you work on your own data set. So now we have discussed the complexity measures. And now in the rest of the lecture, I think I'm going to cover two things. So one thing is that, once you have some kind of guess on what's the right complexity measure you are looking for, how do you make the complexity measure small? How do you encourage the model to have small complexity? For some measures it's easy to do this, because you just change how many neurons or how many hidden variables in your deep network -- you can change the number of parameters. But if you only want to change the norm, what do you do? So that's called regularization. I'm going to discuss that in the first half of the lecture. And then in the second half of the lecture, I'm going to talk about some more general advice, for example, how do you tune your hyperparameters. When you do regularization or when you choose your model complexity, right, you're going to use a lot of hyperparameters, meaning you're going to choose how many parameters you have, you're going to choose how strong your regularization is. So how do you tune your hyperparameters, and on what data should you tune your hyperparameters? And at the end, I'm going to probably go over some ML advice. So for example, how do you design an ML system from scratch?
There are a lot more things in reality, more than what you do in research. So for that part, I will use some slides to talk about some general ideas on how to design ML systems in reality. So that's the general introduction of this lecture. I'm going to start with regularization. Any questions so far? So, regularization. I think we have mentioned this informally a few times in the previous lectures. So by regularization, mostly we just mean that you add some additional term to your training loss to encourage low complexity models. So for example, we use J of theta as our training loss, and then you consider this so-called regularized loss, where you add a term lambda times R of theta. So here, this R of theta is often called the regularizer, and lambda -- I think there are different names for this, but you could call it regularization strength, regularization coefficient, regularization parameter -- maybe let's call it regularization strength. So this lambda is a scalar, and R of theta is a function of theta, which will change as theta changes. And the goal of this R of theta is to add additional encouragement to find a model theta such that R of theta is small. So for example, a typical R of theta could be something like L2 regularization. So you say R of theta is -- this is probably the most common choice -- you take the L2 norm squared and you multiply by 1/2: R(theta) = (1/2) times the squared L2 norm of theta. The 1/2 doesn't really matter. This is just some kind of convention, because either way, you're going to multiply by lambda in front of it. So whatever you have, you can just change your choice of lambda. But this is just a convention. And so this is called L2 regularization. Also, in deep learning, people call it weight decay. There's a reason why people call it weight decay. I guess, probably, I wouldn't have time to discuss it today. So in the lecture notes, there is a very short paragraph you can see. Actually, if you use this regularization in the update rule, it will look like weight decay. So there's one step in the update rule where you decay your parameter -- you shrink your parameter by a scalar. But anyway, it's just a name. Either it's called weight decay or you call it L2 regularization. So that's a pretty common one. And you can see that if you add this thing to your loss function and you're minimizing your loss function, then you are trying to make both the loss small and also make the L2 norm of the parameter small. And the lambda, in some sense, is controlling the trade-off between these two terms. If you take lambda to be very large, you focus on the regularization -- you just only focus on low norm solutions, but maybe you don't fit your data very well. If you take lambda to be, for example, 0, literally 0, then you are not using your regularization; you are just only fitting your data. And actually, when you make the lambda very, very small, this can still do something. So even if the lambda is 0.00001, very small, still this might do something, because maybe there are multiple thetas such that J of theta is really, really close to 0, or maybe even literally 0. So if you don't have this, then you are not doing any tiebreaking. If you literally make lambda 0, then you are just picking one of the solutions where J of theta is 0. But you don't know which one you pick. But as long as you add a little bit of regularization, you are using this as a tiebreaker, in some sense.
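A minimal sketch of the L2-regularized loss and its gradient, self-contained on a synthetic least-squares problem. The gradient also shows where the name weight decay comes from: the regularizer contributes lambda times theta, so each gradient step shrinks the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # synthetic features
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

def J_reg(theta, lam):
    """Regularized loss: J(theta) + lam * (1/2) * ||theta||_2^2."""
    residuals = X @ theta - y
    return 0.5 * np.sum(residuals ** 2) + lam * 0.5 * np.sum(theta ** 2)

def grad_J_reg(theta, lam):
    # The regularizer adds lam * theta to the gradient, so every gradient
    # step shrinks ("decays") the weights a little -- hence "weight decay".
    return X.T @ (X @ theta - y) + lam * theta
```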
So you're finding solutions such that J of theta is very, very small, but you use the norm as a tiebreaker among all of the solutions that have very small training loss. This is probably the most typical regularizer people use. Another one is the following: you can take R of theta to be the so-called zero norm of the parameter. This is not really a norm; it's just notation. It's defined to be the number of nonzero entries in theta: you count how many nonzero entries theta has, and that's what the notation means. So sometimes people call it the zero norm, but it's not literally a norm; it's just the number of nonzeros in the parameter. People also call this sparsity, because if you have very few nonzero entries the parameter is sparse, and otherwise it's dense. If you add this to the loss, you get a different effect: you're trying to find a model such that the number of nonzeros in it is small. This is particularly meaningful for linear models, in the following sense. If you think of theta as a linear model, you predict with theta transpose x, which is just the sum of theta i times x i, for i from 1 to d. So if theta has s nonzero entries, you're using only s of the coordinates of x. Basically, the number of nonzeros in theta is the number of coordinates, the number of features, that you're using from x. You can imagine that for some applications, you have a lot of coordinates in your input features, so much different information, but you don't know which of it you should use to predict. For example, suppose you want to predict the price of a house. You have many different features, but some features may not be that useful. So that could be a situation where you should use this as a regularizer, because you want to say: I want to use as few features as possible, but I also want my training loss to be good. You find the simplest explanation of the existing data, where simplest means using as few features as possible. And once you find a sparse theta, one with only a few nonzeros, you're selecting the right features: a sparse model means you're doing feature selection, because the nonzero entries correspond to the features selected by the model. So people often call this feature selection in certain contexts. However, you may have realized that this sparsity regularizer, as a function of theta, is not differentiable. You're just counting how many nonzeros there are. Suppose theta is all zeros, and you make an infinitesimal change to any of the coordinates. You change the value of this function by a lot. That's why it's not differentiable: a differentiable function should satisfy that if you change theta infinitesimally, the function output changes by a small amount. But here, the sparsity is currently zero, and if you change theta a little bit, the sparsity becomes 1. So an infinitesimally small change makes the regularizer value change by a large amount, and that's why it's not differentiable.
Because it's not differentiable, you don't have gradients or derivatives, and that causes a problem in using it. So even though I told you this is a regularizer, in reality nobody literally uses it in their algorithm, because if you put it in the loss, this term has no gradient. How do you optimize it? Because it's non-differentiable, what people do is use a surrogate, and this is the typical surrogate: the 1 norm. The reason it's a good surrogate is a little tricky, but the 1 norm is the standard differentiable surrogate for the sparsity of the model. Here, the 1 norm just means the sum of the absolute values of the coordinates. I won't attempt to give you a very formal justification for why it's such a good surrogate for the zero norm. One reason could be that the 1 norm is at least closer than the 2 norm to the 0 norm. Another reason, and this is not a very solid mathematical reason, just some intuition: if you think of theta as a binary vector with entries in 0 and 1, then indeed the 1 norm equals the 0 norm, right? That's probably another intuitive reason why they're somewhat related. But you can see a lot of problems with this argument: why am I assuming theta only takes values 0 and 1? If the entries are 0 and 2, the two quantities are no longer that closely related. So I'm not saying this is really a good argument for why they're related; if you really want to argue that the 1 norm is a good surrogate, you have to go through much more math. Any questions so far? [INAUDIBLE] regularizer compared with the 2 norm regularizer? You mean the 1 norm versus the 2 norm? Yes. Yes, that's what I'm going to do next. All right, that's a great question: why do you sometimes want to encourage sparsity and sometimes want to encourage the 2 norm to be small? Let's answer that. Let's think of the 1 norm as a surrogate for sparsity; the question I'm trying to answer is why sometimes the 1 norm is better and sometimes the 2 norm is better. In some sense, the fundamental reason is this: another way to think about regularization, instead of just encouraging low complexity, is that regularization can also impose structure, prior beliefs about theta. That's at least one other way to think about regularization: it imposes structure. And what is the structure? Sometimes it's a prior belief you have. For example, suppose you have a prior belief, from domain knowledge, that theta is sparse. In that case, you probably should just use R of theta equal to the 1 norm, or the zero norm. You believe your model is sparse, so why not encourage that? When you have this belief, encouraging the 1 norm, in some sense, limits your search space. Before, you were searching over all possible parameters, and now you're only searching over low-1-norm parameters.
And because you believe the true model has low norm, narrowing the search space is always helping you: you don't lose anything, because every model you excluded was not going to be the right solution. If the narrowed search space still contains the right model, why not do it? So that's another interpretation of the regularizer: it can impose an additional prior belief about the structure of the model. If you believe in a low 1 norm, you should encourage a low 1 norm. If you believe your true model has a small L2 norm, you should encourage a small L2 norm. And if you go into more mathematical theory, the L2 norm typically corresponds to situations where you believe all the features are useful, but you have to use them in combination: each feature a little bit. The L1 norm or L0 norm typically corresponds to situations where you believe only a subset of the features are meaningful and you should discard the rest, because the others are just there to confuse you, in some sense. So if you believe your model is sparse, use the L1 norm; and if you believe your model shouldn't be sparse, people typically use the L2 norm. And if you have a linear model and you use the L1 regularizer, this loss is called LASSO. I'm just giving you the name here because it's useful to at least have heard the acronym; actually, I don't know what it originally stands for, but it's been around for 20 or 30 years and is a very, very important algorithm. A linear model with L1 regularization: that's LASSO. Everyone in machine learning should know the acronym. Now let me take a somewhat broader perspective. If you think about nonlinear models, deep learning models, what are the most popular regularizers these days? The L1 norm is not used very often; actually, pretty much never. I don't know exactly how frequent it is, but probably less than 10% of models use an L1 regularizer, and I may have overestimated; maybe 1%. But L2 regularization is almost always used, even if sometimes only a very weak L2 regularization. I'm talking about deep learning models here. Sorry, let me clarify: for linear models, you can try almost anything; anything would be reasonable, and you probably should try all of them. You could try the 1 norm, the 2 norm, and sometimes different norms I didn't write down, like a 1.5 norm, something like that. For nonlinear models, for deep learning models, the L2 norm is something you almost always use, but only with a relatively small lambda; people generally don't use very large lambda. I don't know exactly why, and researchers don't really know that much either, but a small L2 regularization is typically useful for deep learning. And in deep learning, some other regularizers can be useful too: for example, you can try to regularize the Lipschitzness of the model, and you can use data augmentation, which we probably haven't discussed; I'll discuss it in a later lecture. Data augmentation tries to encourage your model to be invariant with respect to translation, cropping, these kinds of things, for images.
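Going back to the linear-model case for a moment: to make the sparsity point concrete, here is a minimal sketch, not from the lecture, comparing L1 and L2 regularization on made-up data where only a few features truly matter (the alpha values and data sizes are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, d = 50, 100                       # fewer examples than features
X = rng.normal(size=(n, d))
theta_true = np.zeros(d)
theta_true[:5] = 1.0                 # only 5 features actually matter
y = X @ theta_true + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1-regularized linear model (LASSO)
ridge = Ridge(alpha=0.1).fit(X, y)   # L2-regularized linear model

print("nonzeros (L1):", np.sum(lasso.coef_ != 0))             # typically few
print("nonzeros (L2):", np.sum(np.abs(ridge.coef_) > 1e-8))   # typically all d
```

On data like this, the L1-regularized model typically zeroes out most of the irrelevant coordinates, which is exactly the feature-selection behavior described above, while the L2 model keeps all coordinates small but nonzero.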
Lipschitz regularization and data augmentation are pretty much the only other common regularization techniques in deep learning. Question? This is kind of a selfish question, just for the [INAUDIBLE]: would you suggest initially using the L1 norm to eliminate features, and then going on to use the L2 norm? That's a very good question. I think this kind of algorithm was pretty popular back before the deep learning era: with linear models, you use L1 to do the selection, and then you use L2. I don't know exactly how popular it is now, but it's definitely one algorithm you could try. In deep learning, I think it's probably less likely to be useful, but it also depends on the situation. For example, if you have enough data, maybe you're more or less in the linear-model case and you just need a little nonlinearity to help you; in that case, you should probably still mostly use a linear-model type of approach. If you're in a typical deep learning setting, say a vision project with images as inputs, then you probably don't want to select your features first. All the inputs are useful; use them as much as possible, and let the network figure out the best way to use them. Thank you. Any other questions? By the way, this lecture won't have a lot of math. I don't think there's even much theory here; a lot of it is just experience, especially when we talk about modern machine learning in the last five years, where everything seems to change a little bit. So I cannot say anything with a 100% guarantee; the best I can tell you is, it sounds like people are doing this a lot. So feel free to ask any questions. Any other questions? The next thing I'm going to discuss is the so-called implicit regularization effect. This relates more to deep learning. One motivation for this line of research, and I haven't told you yet what it means, is that people realized that in deep learning you don't use a lot of regularization techniques. You use L2, as I said, but only a weak L2 regularization, and sometimes these Lipschitzness ones, but they only help a little bit. They can be useful, but people don't necessarily use them very often. So why, in deep learning, don't you have to use strong regularization? At least, it can feel like regularization stops mattering that much. It still matters when you really care about the final performance, when you care about 95% versus 97%. But even if you don't use regularization, sometimes you get reasonably good performance. That's why people, especially theoretical researchers, wonder why you don't need strong regularization in deep learning. And this is particularly mysterious because in deep learning, people use overparameterization: we're in a regime where you have more parameters than samples. Recall that in the last lecture we drew this double descent curve, where the x-axis is the number of parameters and the y-axis is the test error. We discussed that the peak might just be an artifact of the suboptimality of the algorithm, which let's say you don't care about for the moment. But you do have to care about why the error keeps going down out here.
So why, when you have so many parameters, a lot more parameters, can your model still generalize? It even seems that more parameters makes it look better. The overparameterized regime is mysterious because you don't use strong regularization, but you can still generalize. That was the motivation for studying this. And people realized that even in this regime, suppose you don't use any explicit regularizer, you make lambda literally 0, it can still generalize. The reason it can generalize, in many cases, is that you can still get an implicit regularization effect, even without an explicit regularizer. Where does that effect come from? What makes it happen? The reason is that the optimization process, the optimization algorithm, the optimizer, can implicitly regularize. Why can this happen? Let me draw an illustrative figure, which I use pretty often. Suppose this is the loss landscape, the loss surface: the axis is theta, say theta is one-dimensional. Because we're in the deep learning setting with nonlinear models and a non-convex loss function, the loss function might look like this. You have two, maybe multiple, global minima of your loss function: this is a global minimum, and this is a global minimum. But here I'm talking about the training loss. If you look at the test loss, it looks a little different. The test loss might look something like this; let me draw it to match my figure. So this is the training loss, and the test loss probably looks like this. That means that even though both of these global minima are good solutions from the training loss perspective, one of them is better from the test performance perspective: this global minimum is better than that one, because the test performance is better. In some sense, the regularization effect is trying to choose the right global minimum. You want the regularization effect to choose the right global minimum, to do some tiebreaking or to encourage a certain kind of model: maybe this model is more Lipschitz, or has a smaller norm than the other one, and that's why you prefer it. If you use explicit regularization, what you do is change the training loss: you add something so that this minimum is preferred over that one; you reshape the training loss. That's what explicit regularization does. What implicit regularization does is the following. Consider an algorithm that optimizes this loss; for example, suppose you run an algorithm that always initializes here, this is the initialization, and you do gradient descent. You're going to move like this, and you converge to this minimum. This algorithm will only converge to this one and not the other, just because you initialized out here on the right. That is, in some sense, a preference for this global minimum over that one: your algorithm prefers one global minimum over the other just because of certain specifics of the algorithm, right?
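Here is a tiny runnable illustration of that picture, not from the lecture: a hypothetical one-dimensional loss (theta^2 - 1)^2 with two global minima, at theta = -1 and theta = +1, where gradient descent picks whichever minimum is on the same side as its initialization:

```python
# Toy 1-D training loss with two global minima (both have loss exactly 0).
loss = lambda t: (t ** 2 - 1) ** 2
grad = lambda t: 4 * t * (t ** 2 - 1)

def run_gd(theta0, lr=0.01, steps=2000):
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

print(run_gd(+2.0))   # ends up near +1
print(run_gd(-2.0))   # ends up near -1: same training loss, different minimum
```

Both endpoints have zero training loss, so nothing in the loss itself distinguishes them; the initialization alone decides which minimum you land in, which is exactly the preference being described.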
So the initialization makes the algorithm prefer to converge to this one. And there can be other effects: for example, if you use a bigger step size, maybe you're more likely to converge to this one, or maybe vice versa, depending on the situation. This is a very illustrative one-dimensional picture, where you don't really have a lot of flexibility. But if you have a very, very complex landscape and you run different algorithms, different algorithms will converge to different global minima. And that preference for a certain type of global minimum is, in some sense, a regularization effect, so that you don't converge to an arbitrary global minimum. Does that make some sense? Can you just repeat what you said: how does having a large number of parameters ensure that it initializes at that point? Yeah, I was glossing over that, in some sense; I didn't really say why the initialization has to be here. This is an active area of research. What we are sure about is that the algorithm can have this effect: the algorithm can prefer certain global minima over others. But which kind of global minimum it prefers, we don't exactly know. For certain toy cases, we know; for the general case, we don't. I'm going to show you one case where we can actually say what the algorithm prefers to do, but it's a very, very simple case. For the general case, this is still very open research. Question. I saw two other questions here. [INAUDIBLE] No, no. Here-- no, no. What do you mean by the optimizers? What is on the axis? The axis is the value of the parameter. We only have one parameter; I'm drawing the landscape over the parameter, and I can only draw something in one dimension. So this is the value of the parameter; you're tuning this parameter, you're doing gradient descent, and this is the loss surface. So it does depend on where you initialize: if you initialize at different places, you converge to different global minima, and they may have different generalization performance. So in practice, can we use multiple different algorithms and then just choose the one with the best performance? That's pretty much the right thing to do. Of course, there are some caveats; I'm going to discuss this in a bit more detail later. Basically, you can have some intuition; theoreticians have tried to understand what kinds of algorithms help generalization. But the conclusions, at least so far, are far from definitive. They can give you some intuition, but they're not going to be predictive; they're not going to just tell you what to do. So you still have to try a lot. Yeah, going back to this: this is just one dimension. Another way to think about it is a two-dimensional analogy. Suppose you're skiing at a ski resort. Your objective is basically to minimize your altitude; you're trying to go downhill. And the ski resort probably has a lot of villages where you can eventually go home; there are multiple parking lots. In some sense, you're saying that one of these parking lots is great; one of these parking lots is really [AUDIO OUT]. So you want to go to that one. But different algorithms lead you to converge to different parking lots. For example, someone who skis very fast cannot take the small trails, so they end up at one of the parking lots.
And someone else prefers the wider trails, and they end up at the other parking lot. So different algorithms lead you to different parking lots, and different parking lots have different generalization performance, eventually. That's the high-level intuition. So let's see; I'm going to discuss a concrete case, which will also be part of a homework question. This concrete case is just to give you a concrete sense of how this could even be possible; I'll show you the high-level picture, and the mathematical part will be in the homework. This is in the linear model setting. Interestingly, even though this implicit regularization effect was mostly studied after deep learning started to become powerful, you can already see it in linear models, and that's where researchers started. Suppose we're in the most vanilla linear setting, where you have n example data points; this is just ordinary linear regression. Your loss function is the L2 loss, the mean squared error, something like that, and you have a linear model. But one difference: we assume n is much smaller than d. You have very few examples and a very high dimension. What is d? d is the dimension of the data, and n is the number of examples, and I'm going to assume n is much smaller than d. So this is overparameterized, and you have multiple global minima. Why do you have multiple? First of all: I claim there are many theta such that y i equals theta transpose x i for all i. If all of these equalities hold, the training loss is 0, which means you're at a global minimum of the training loss. And why are there multiple theta satisfying this? Because you can count the equations: there are n equations and d variables, and these are linear equations. Linear algebra tells us that if you have n equations and d variables, with n less than d, then generically you have at least one solution; and if n is much, much smaller than d, you get a whole subspace of solutions, related to the kernel, the null space, of the linear system. Anyway, you have a subspace of solutions to this system of linear equations. That's why there are multiple global minima of the training loss: the entire subspace of solutions consists of global minima. So the question is: which one do you converge to? Which one does your optimizer choose? It turns out that if you use gradient descent with zero initialization, you choose the one with the minimum L2 norm. Here's the claim: gradient descent initialized at theta equals 0 converges to the minimum norm solution. What does that mean? Formally, it means you converge to the solution with the smallest L2 norm among the solutions that are global minima of the loss function. So when you use gradient descent, you're not just finding some theta such that the loss is zero. Typically, when you think about optimization, the goal is to find a solution that minimizes the loss function. That's true.
You definitely find a solution that minimizes the loss function. But you actually get a tiebreaking effect among the minimizers: you choose the one with the smallest L2 norm. I guess, in some sense, the intuition is the following; I'm going to try to draw this, and I need to try to draw it well. Suppose n is 1 and d is 3: you have one linear equation and three variables. That means the solution set is a two-dimensional subspace. Let me try to draw it. The subspace I'm drawing here is the family of theta such that the loss is zero. So you have a subspace of solutions, but which solution do you converge to? That's the question. It turns out, let me write it here, that you're going to find this particular solution. How do I draw this? Drawing this is a little challenging, I guess. You project 0 onto this subspace, and that gives you this point: the point on the subspace closest to zero, which is the solution with the minimum norm. And this is the solution you will find. You will not find other solutions with gradient descent initialized at zero. So basically that's the claim: you find this particular solution but not the others. And the fundamental reason is actually pretty simple, especially drawn this way; of course, if you want to prove it, it's a bit more involved. The reason is really just that you start at 0; this is where gradient descent starts. And you have the property, maybe let me erase this, that starting from 0, at any time your theta is always in the span of all the data points; here I actually have just one data point. So your theta cannot move arbitrarily; there's a restriction on where it can go. For this particular case, what happens is that you just move along this direction, and you end up at this point, the one closest to the origin. That's what gradient descent does. Gradient descent will not do something like this, will not converge here, will not converge there; it goes directly to the closest point, the point on the subspace closest to 0. So this is clearly a property of the optimizer. You can imagine designing some crazy optimizer that moves like this or like that; then you would converge to a different point. But if you use gradient descent, you do this. And the main property used to show that gradient descent does this is that the iterate always stays in the span of the data. This is actually something we proved in the kernel lecture, for a different purpose: there, we showed that the parameter is always a linear combination of the data points, because we wanted to represent it with the betas in that lecture. Different reason, different goal, but the same fact.
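Here is a minimal numerical check of this claim, not from the lecture; it compares gradient descent from zero against the minimum-norm interpolator, which numpy's pseudoinverse computes for an underdetermined least-squares system (the sizes and learning rate are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 50                          # n << d: a whole subspace of zero-loss solutions
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Gradient descent on (1/2) * ||X @ theta - y||^2, initialized at zero.
theta = np.zeros(d)
lr = 0.01
for _ in range(20000):
    theta -= lr * X.T @ (X @ theta - y)

theta_min_norm = np.linalg.pinv(X) @ y           # minimum-L2-norm interpolator
print(np.abs(X @ theta - y).max())               # ~0: we reached a global minimum
print(np.linalg.norm(theta - theta_min_norm))    # ~0: and it's the min-norm one
```

Initializing somewhere other than zero, or using a different optimizer, would generally land on a different point of the solution subspace, which is exactly the point made below.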
So the key fact is that your theta always stays in the span of the data. Are you saying that the optimal theta is in this span, that it's always in this span of the data? The optimal-- no, this subspace is defined to be all the solutions that have zero loss. That's my definition of the subspace: the family of solutions with zero training loss. The question is which one you converge to. I was arguing that there are multiple global minima: this whole subspace consists of global minima, all of them. And which one do you converge to? Different algorithms would probably converge to different points; if you run gradient descent, you converge to one particular point in this subspace. This phenomenon also shows up in other cases, but those are much more complicated. There are only a very limited number of situations where we can theoretically prove where you converge to. But it's almost always the case that the optimizer has some preference: it will not converge to an arbitrary zero-training-loss solution, it converges to one particular zero-training-loss solution, and sometimes that solution generalizes much better than the others. [INAUDIBLE] Only for linear models is the family of zero-loss solutions a subspace, right? If you have nonlinear models, the family of solutions satisfying this wouldn't be a subspace; maybe it's a manifold, some other weird structure, right? So in that sense, this is very special. I understand that it's like this [INAUDIBLE] how are we sure it's going to be that constrained optimization [INAUDIBLE] Right. So I didn't show you the full proof. The point you converge to turns out to be the minimum norm solution, and it turns out you actually go straight there, at least in this one case. Actually, it's not even always true that you move in a straight line, but you always stay within the span of the data. Am I answering the question? Maybe I didn't. Can you prove that-- You can prove it; I think the homework question actually asks you to show where you converge. This point is exactly the minimum norm solution, and you're going to converge to it. Actually, you can get a pretty concrete representation of this point; it's just some matrix inverse times something. You can compute exactly what it is and show that you converge to it. I'm not sure whether the homework asks you to show both; I think it does, but there are a lot of hints along the way. It's not going to be just "show this, that's it." And, for example, just to give you a sense of how this can change: suppose you initialize here instead. Then you wouldn't converge to this point; you'd probably converge somewhere over here. And if you use stochastic gradient descent, you probably wouldn't converge exactly here either; you'd probably converge somewhere different. Exactly where you converge is a very hard question, and we don't really know. The only thing we know firmly right now, I think, is that this matters: if you use a different algorithm, you converge to different solutions, and different solutions generalize differently. So you have to consider the effect of the optimizer. Going back to the double descent picture, this, in some sense, is trying to explain why you can generalize out here.
That's because of this implicit regularization effect: even though you don't have explicit regularizers, you still implicitly regularize the L2 norm. And that's why, in this regime, even though you have a lot of parameters, you're still implicitly regularizing the L2 norm. If you look at the norm of the solution as you change the number of parameters, it looks like this. Basically, when you have a lot of parameters, the actual norm of the solution is relatively small, and that's why you can generalize. The reason you don't generalize in the middle is that the minimum norm solution doesn't do well in the middle, for some other reason: the norm there actually turns out to be big. But the norm is very small in the overparameterized regime, even though you use a lot of parameters. So now let's talk about how you really find out-- I've told you that we don't know too much about how the optimizer changes things, and we also don't know exactly how model complexity changes things. You only have some intuitions: you know that more complexity makes you more likely to overfit, but you don't know exactly what the right complexity is. So how do you find the right model, the right optimization algorithm, the right regularizer, all of this? You have so many decisions to make in a machine learning pipeline. How do you find the best choice? The standard way is to use a validation set to figure out the best decision. Just to motivate that briefly: the easiest thing to do is to just use a test set. You have some test set, you try all kinds of algorithms, all kinds of models, all kinds of regularization strengths, and you see which one performs best on the test set. That's OK, as long as you only use the test set at the end: you try all of these algorithms first, and then you collect a test set, or maybe you had the test set before but never touched it. If you only use the test set once, you can use it to evaluate the performance of all the algorithms or models you want to use. So that's a good thing. However, the problem is that sometimes you want to do this iteratively. You want to look at the test set, see what the performance is, and then go back and say: OK, maybe I'll change my model size, maybe I'll change my optimizer, maybe I'll switch from gradient descent to stochastic gradient descent, maybe I'll add some regularization term. If you want to iterate like that, what I said before isn't going to work. That's because, typically, a test set can only be used once: if you use it multiple times, you can overfit the test set. Basically, your later decisions become overfit to the test set you've already seen. The validity of the test set is only ensured when you see it only after you do the training; if you see the test set, then train, then test again, the second time it's not guaranteed to be valid. You may overfit to the test set. Does it make sense? I'm trying not to overcomplicate this; that's why I'm using informal language. But if there are any questions-- So, how do we deal with this?
The test set, we can only use once; or at least, we cannot use it interactively. You can't see the test set, train, and then see the test set again. One way to deal with this is to have a holdout, a validation set. Basically, you split the data into three parts: a training set, a validation set, and a test set. With the test set, you have to be very careful: you shouldn't touch it. Only at the very, very end do you use the test set to evaluate your performance. The validation set, you use to tune hyperparameters. By hyperparameters, I mean all the choices you're making: the batch size, the lambda in the regularization, maybe the choice of optimizer, the number of neurons in your deep learning model, how long you train. All of the decisions you make in this process are called hyperparameters. So you use the validation set to tune the hyperparameters, and you use the training set to optimize the real parameters. Typically, you don't tune the parameters by hand; the parameters are just numbers in the model whose individual meanings you don't really know. But hyperparameters are things whose meanings you do know: batch size, learning rate, step size. They all mean something, and you want to use the validation set to tune them. So the process is: you start training with some hyperparameters, then you validate the performance, then you go back and tune again, maybe with other hyperparameters, and you do this iteration many times. After you're done with everything and you've found a model you're happy with, where happy with means a model that does very well on the validation set, then you finally test the model on the test set. And that can only be done once. In some sense, I'm not sure how many of you have seen Kaggle competitions; they're structured exactly like this. There's an online platform where people release a dataset and set up a challenge for people to submit machine learning models to solve a task. In a Kaggle competition, the organizers have a test set that nobody can touch at all; it's used only once, at the very end, when deciding who wins. Then the organizers release the rest of the data. Actually, I'm not sure; sometimes they give you a division, saying this is the validation set and this is the training set, and sometimes they just release all of it and you divide it yourself. Even if they release it with a division, you can re-divide it however you want. Say you've divided all the training examples into these two sets; you can do whatever kind of optimization you want. And I think they typically do have a designated validation set, which is used for computing the scores on the leaderboard: there's a leaderboard that tells you how well you're doing against others, at least temporarily, and it's evaluated on the validation set. But the leaderboard may not exactly match the final ranking; it's possible that somebody leads the leaderboard but, in the very final test, performs worse than the validation set suggested.
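To make the three-way split concrete, here is a minimal sketch, not from the lecture, of tuning a single hyperparameter (a ridge regularization strength, on made-up data): the training set fits parameters, the validation set picks lambda, and the test set is touched exactly once at the end.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = X @ rng.normal(size=20) + rng.normal(size=1000)

# Static split: 70% train, 15% validation, 15% test.
X_tr, y_tr = X[:700], y[:700]
X_va, y_va = X[700:850], y[700:850]
X_te, y_te = X[850:], y[850:]

best_lam, best_err = None, np.inf
for lam in [0.001, 0.01, 0.1, 1.0, 10.0]:             # hyperparameter grid
    model = Ridge(alpha=lam).fit(X_tr, y_tr)          # train set: fit parameters
    err = np.mean((model.predict(X_va) - y_va) ** 2)  # validation set: pick lambda
    if err < best_err:
        best_lam, best_err = lam, err

final = Ridge(alpha=best_lam).fit(X_tr, y_tr)
test_err = np.mean((final.predict(X_te) - y_te) ** 2)  # test set: used exactly once
print(best_lam, test_err)
```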
Anyway, this is the general setup that people use. Does it make sense? One common question people ask, and I ask myself as well, is: how reliable is the validation set? If you have very high performance on the validation set, should you trust it? On one hand, you shouldn't fully trust your validation performance; otherwise, why would you need a test set? The test set is supposed to give you the final verdict, in some sense; it's the thing that's guaranteed to give you the right answer, and the validation set is never a 100% guarantee. On the other hand, empirically, people realized in the last five years, and I think there's a sequence of papers on this, that validation set performance is actually well correlated with test set performance. It's a reasonable indicator of how good your performance on the test set will be; there's just no theoretical guarantee that the two are exactly the same. But in most cases, if you don't do anything crazy, if you don't somehow memorize the entire validation set by building some kind of lookup table, then typically the performance on the validation set is very close to the test set. There's a very important paper, probably three or four years ago, by Berkeley people: they looked at maybe 300 Kaggle competitions, looked at the rank of the performance on the validation set, on the leaderboard, and looked at how it correlated with the final winners, the final performance. They found that they're very correlated, which suggests the validation set is actually a pretty good indicator for the test set, even though it's not guaranteed. And this reflects typical machine learning practice: when people publish papers, in some sense they publish results based on validation sets. For example, if you look at ImageNet performance, the so-called test performance that people report is really performance on a validation set, because that so-called test set has been seen so many times. I don't know exactly whether there's an official name for it in the ImageNet dataset, but the set you report your performance on shouldn't be considered a test set, because a test set should only be used once, and people have used that one so many times, maybe a million times. So, abstractly speaking, these days when you publish a paper, you're using a validation set; only in a Kaggle competition do you use a real test set to decide the winner. But empirically, it sounds like they're very close, and that's why we don't worry too much about it. Any questions? What's the name of the competition? I think it's called Kaggle; I don't know exactly how to pronounce it. It's a platform that hosts a lot of competitions, maybe 100 every year or something like that. You can submit your model, and sometimes there's a prize for winning the competition. And by the way, this validation set, people sometimes now call it the development set as well. I don't know how popular that name is, but if you say validation set, everyone will know what you're talking about; development set, I think most people would know as well.
But it's a relatively new term, from the last five years. [INAUDIBLE] training sets and validation sets as part of a bigger-- so once you've essentially decided what type of parameters you want to use, you kind of have a good training and validation set and just choose those randomly? Right, so how do you do the split? The most typical way is to just split randomly: you reserve maybe a tenth of the dataset as the validation set, maybe 20%, depending on how much data you have. And I think what you're probably thinking of is so-called cross-validation, which does something more elaborate: you split the dataset multiple ways and run experiments on the different splits. I'm not going to cover it in this lecture, mostly because these days, if you have a large enough dataset, you typically just do a static split. It's much easier, you don't have to run your algorithm multiple times, and in most larger-scale machine learning situations that's what you use. But if you only have, say, 100 examples, then indeed, as you said, fixing 20 examples as a validation set is a little wasteful, so you have to do some cross-validation (there's a small sketch of k-fold cross-validation right after this passage). We have a section in the lecture notes about cross-validation, with a description of the practical procedure; if you're interested, you can read it, and it's nothing very complicated either. So, I'm going to use the last 20 minutes to talk about a more applied perspective, and I'm going to use the slides. Let's see if this works. OK, great. It's not centered, is it? OK, sounds good. OK, good. So in this part, we're talking about some ML advice. These slides were made by our other instructor, Chris Re, with the help of Alex Ratner; I'm pretty much just repeating what he says in the slides. I think the slides used to be a little longer than this; I'm going to release the longer version as well, but I shortened them. Part of the reason is that the slides contain some things that have already been covered on the whiteboard, and part of the reason is that there are some applied parts we don't have time to discuss this quarter. But I'll release the longer slides for reference. This set of slides is mostly for more applied situations: for example, you're at a startup and you're doing machine learning to solve some concrete problem. It's a little different from research, because you'll see there are many more issues than in a typical research setting. Actually, in research you sometimes have this too. The most typical research setting is that you have a very concrete dataset; you know the inputs, the outputs, everything. There's no room for flexibility, you cannot redefine the problem, and you just want to get the best number. That's one type of research, and I don't think it's even the most typical one. From there, you can have more and more flexibility: you can change your data, rephrase your problem, figure out what the right problem is. And once you really do this in industry, it gets much more complicated. Some disclaimers to start with; I think this is Chris's disclaimer, and it's also mine: there is no universal ground truth here.
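As promised a moment ago, here is a minimal sketch of k-fold cross-validation, not from the lecture, for picking a regularization strength on a small made-up dataset; each candidate lambda is scored by its average validation error across the folds:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # small dataset: this is where CV pays off
y = X @ rng.normal(size=10) + rng.normal(size=100)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for lam in [0.01, 0.1, 1.0]:
    errs = []
    for tr_idx, va_idx in kf.split(X):  # each example serves as validation once
        model = Ridge(alpha=lam).fit(X[tr_idx], y[tr_idx])
        errs.append(np.mean((model.predict(X[va_idx]) - y[va_idx]) ** 2))
    print(lam, np.mean(errs))           # average validation error over 5 folds
```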
So there is no ground truth; these are just experiences from people doing this in real life, and things change over time. Sometimes what people thought was the right thing to do five years ago has now changed. I'm going to go through this a little quickly and omit some details, but feel free to stop me. In some sense, there are many phases of an ML project if you really do it in industry. For example, one thing to discuss is whether you even need an ML system to start with. Some problems are not really suitable for ML. I don't do as much industry work myself; Chris is an entrepreneur besides being a professor, so he knows a lot about this. But even I know that sometimes people sell their product as an ML system when the underlying system isn't really using much ML. So sometimes you don't really need ML. And when you do use ML and it doesn't work, what do you do? And how do you deal with the whole ecosystem? We'll use a running example: a spam detector, and the question is how you detect spam. We use this example a lot in this course. So here are the seven steps for building an ML system. Again, this is a little broader than ML research: you're thinking about designing a system that can actually work in practice. You acquire data, and you look at the data. Maybe you create training and development sets, as we discussed. You define or refine the specification, which I'll discuss; in some sense, this says you have to have an evaluation metric, a definition of what it means for your model to succeed. Then you build your model and try a bunch of models; maybe you'll spend a lot of time in step 5. Then you eventually measure the model's performance, not necessarily only according to the specification from step 4; maybe you'll have other measurements, for example speed, training time, and so on and so forth. And then, eventually, you have to repeat, maybe many times. I'm going to go through these steps relatively quickly, I only have 15 minutes, but if you're interested you can look at the longer slides as well. So, suppose you want to decide what is spam and what is not. Ideally, you want a data sample drawn from the data your spam product will actually run on; you want your data to be reasonably close to the final test data. You don't want to collect spam data from 30 years ago and use it to build something that needs to work today. But this isn't always available, because you never know what spam emails will look like 10 years from now, so you have to make some compromises. Sometimes you don't even have the features: maybe your existing records didn't save everything, maybe you only saved the title of the email and not the entire content, and that limits your ability to detect spam. And there are many legal issues around looking at the data; this is according to Chris, and I think it's true as well. You often get it wrong on the first try: sometimes you'll find that the data you collected is not the right data, and you have to repeat. And then, after you collect some data, you have to look at it, right?
And this is something we don't really teach much in this machine learning course: looking at your data. We mostly assume you already have your data and have already made the right assumptions; you already know your data is Gaussian, and then you run Gaussian discriminant analysis. But we never say how you decide whether the Gaussian assumption is really valid. In practice, you have to do that, because you have to see whether the data makes sense. There are many nuances there. For example, sometimes your data is not as good as you think: maybe the format is wrong, maybe there are outliers, and so on. And only by looking at the data can you see what's going on. Actually, even in research, I sometimes experience this. In one of my projects, we used the wrong data from day one, in some sense: some of the data were corrupted by accident, and we were training on them. It took about a month before we realized it. Of course, in research it's probably easier to detect; a month is a long time for us to detect it, I think, and you can detect these things fairly easily. But in real-life cases, it's sometimes even harder. For example, you don't necessarily even have the tools to look at your data; you may have to build some tools to look at it. And you need to think about different subpopulations, say spam from .edu addresses versus spam from .com addresses, and see what the differences are. This gives you a lot of intuition about what data you should use and what kinds of models you should use. And do this at every stage; this is also why you want to build tools to look at the data conveniently. If you're only going to look once, sure, you can just print something out. But if you want to look many times, you should have convenient tools, which eventually reinforces the habit and makes you more likely to actually look at the data. At least in research, I've noticed this too: if the data is very hard to visualize, people are less likely to visualize it. So sometimes it requires an investment, building the tool, so that in the future the cost of looking at your data is lower. And you should do this at every stage, in many cases. This part is about domain knowledge: some data requires expertise. There are some examples in the slides, which I removed just to save time, but in short: sometimes only experts can tell that your data are corrupted; from a machine learner's perspective, the data looks fine. Next, the train/dev/test split. This, of course, is something important to do, and in practice it's a little less clear-cut than in research, because in research you often get a split in the first place: you gather the data and it already has a split. But in real life, you sometimes have to avoid certain kinds of leakage. Let me take an extreme case: suppose your data has repetitions. You have a million data points, but every data point is repeated twice, so essentially you just have 500k examples, each repeated twice. If you then split randomly, some test examples will also show up in the training set, exactly the same, and that would be disastrous.
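Here is a minimal sketch, not from the lecture, of one way to guard against that exact-duplicate leakage: drop duplicate rows before making the split (real pipelines usually also need fuzzier near-duplicate matching, which this does not attempt):

```python
import numpy as np

def dedup_rows(X, y):
    """Keep only the first occurrence of each exact (x, y) pair, so the
    same example cannot land on both sides of a later train/test split."""
    seen, keep = set(), []
    for i in range(len(X)):
        key = (X[i].tobytes(), float(y[i]))
        if key not in seen:
            seen.add(key)
            keep.append(i)
    return X[keep], y[keep]

X = np.array([[1.0, 2.0], [3.0, 4.0], [1.0, 2.0]])  # third row duplicates the first
y = np.array([0.0, 1.0, 0.0])
X_clean, y_clean = dedup_rows(X, y)
print(len(X_clean))   # 2
```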
So you have to avoid these situations. And this actually happens in Kaggle contests. I tried some of these Kaggle contests at some point, probably four years ago, maybe six. At that time, in many of the contests-- there's always some kind of forum for discussions-- [SIDE CONVERSATIONS] So on Kaggle, and I'm sure this happens even more in industry, which I'm less familiar with, but even in the Kaggle context: in many contests, if you look at the forum, after a few weeks someone will figure out some leakage, just because some training examples are very, very close to test examples, and they use this leakage to hack the number. It's some weird trick that makes the validation performance much better than it should be, and then everyone has to use it. And it's kind of interesting. I don't know why, but everyone who finds this kind of leakage always posts it in the forum somehow. I don't know whether that's always true, but in a few cases I've seen them do it, and then everyone else has to use this little gadget to improve their model performance, because if you don't, your performance just isn't as good as the others'. I don't know whether they now have better ways to detect this leakage and design the competitions better. But this is something you have to pay attention to in practice. Another tricky thing is what counts as a good split. We discussed whether you should do a random split. In research, as I said, a random split is pretty much the best you can do, because you literally care about the validation performance. But the problem is that in the real world, the test set you have is sometimes not really what you care about. That's why, when you split, you want to split the training and validation sets in a way such that the validation set is closer to the real test distribution. Here's an example. Suppose you're doing stock price prediction: your final goal is to predict prices in the future, which is something you just don't have at all, right? Now suppose you have data between 2000 and 2020. How do you do the train/validation split? Do you just do a random split, or should you do something else? One possible option is to split by time: for example, 2000 to 2015 is the training set, and 2015 to 2020 is the validation set. Why is that a reasonable option? Probably because the last five years are more predictive of the future than the earlier years. I'm not necessarily saying this is the only option or the best option, but it's at least something to consider. This is not what we do in research, and the reason is just that here you care about performance in the future, which you don't have access to. So the better split uses the earlier years to predict the later years.
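Here is a minimal sketch of that time-based split, not from the lecture, on hypothetical daily data that is assumed to be sorted by date from 2000 through 2020:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(7300, 8))   # ~20 years of made-up daily features, oldest first
y = rng.normal(size=7300)

cut = int(len(X) * 0.75)                # roughly 2000-2015 vs. 2015-2020
X_train, y_train = X[:cut], y[:cut]     # earlier years: training
X_val,   y_val   = X[cut:], y[cut:]     # most recent years: validation

# A random split here would interleave "future" rows into training,
# which is exactly the leakage a time-based split avoids.
```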
Then, creating a specification: this is mostly about how you define what you want to predict and what the goal is. In many cases, a machine learning model can be used in many different ways, and different people care about different aspects. For example: what is spam? The definition of spam differs between people. Do you consider an ad from, say, Google to be spam? Maybe I would, but somebody else might actually prefer to receive certain ad emails. So you have to specify exactly what you want to predict: what, really, is the definition of spam? And you don't want ambiguity, at least from today's perspective: machine learning models don't like ambiguity. You really want a clear-cut definition of what is spam and what is not. Also, what level of expertise is required to understand the definition? You can write down a definition of spam, but if your labelers are not able to apply the definition accurately, it's not going to be useful. Suppose you have a very, very complex definition of spam, and you take your data and ask labelers to label it, but the labelers cannot easily execute your definition; that's going to be another issue. Also, you use the specification to define a set of examples, because in the end, if you just have a text description of what spam is, that's probably not useful: you really need a set of test examples with labels, and that is your definition of spam. One quick-and-dirty test here is whether your definition of spam can pass so-called inter-annotator agreement. Basically, you write down the definition, you take N randomly selected example emails, and you ask, say, three annotators whether they agree on which emails are spam according to the definition you gave them. Often you don't get that high an agreement; you don't get 100%. In many cases, people's interpretations of the same definition differ. Say you get 95% agreement; that's already considered great. Then the question becomes whether it's meaningful to shoot for an accuracy higher than 95%: if the annotators only agree with each other 95% of the time, should your machine learning model do better than that? Actually, sometimes it can, just because humans sometimes have a less accurate interpretation; sometimes machine learning models can do better. But typically, you probably shouldn't shoot for much higher than 95%, in many cases. And you do this iteratively: you examine the specification, look at what the disagreements are and why, and maybe that means you have to change your specification. And the last question is kind of interesting: do you train the people, or the machine? At some point, you sometimes have to train the people to label things correctly, right?
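Going back to the inter-annotator agreement test for a moment, here is a minimal sketch, not from the lecture, of computing the average pairwise agreement rate from a table of binary spam labels:

```python
import numpy as np

def agreement_rate(labels):
    """labels: (n_examples, n_annotators) array of 0/1 spam labels.
    Returns the fraction of examples on which a pair of annotators
    agree, averaged over all annotator pairs."""
    n, k = labels.shape
    total = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            total += np.mean(labels[:, i] == labels[:, j])
    return total / (k * (k - 1) / 2)

labels = np.array([[1, 1, 1], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
print(agreement_rate(labels))   # ~0.83: the third annotator disagrees on one email
```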
So dog, cat-- but once you go to breeds of dogs, some of the labelers don't really recognize different breeds of dogs, so you have to train the labelers in some way. I have a friend who did this during his PhD. Basically, he had a lot of training documents for the labelers, the Amazon Mechanical Turkers, and they actually had to ask the Turkers to pass some exams to become labelers for them. So it's actually kind of complicated. So I guess I'll be quick, given that we are almost running out of time. And then we'll do the model-- this is the more machine learning part. You want to implement the simplest possible model, and you keep it simple. And don't get bogged down in new models; use the models to understand the data. Sometimes the problem is that the model is not the only end goal. Sometimes you want to use the model to understand what the problems with the data are, and sometimes you can fix the data, and then the performance becomes much better. So this is a whole loop. Your bottleneck may not only be the model; sometimes it comes from other places-- maybe the data, maybe the specification, maybe the train/test split, and so on and so forth. And you want some baselines so that you know what you are doing, and you need to do some ablation studies, and so forth. And then, step six: you need to measure the output, so that you don't make mistakes twice, and so that you catch new mistakes as soon as possible. And you want to measure different things, and simple things-- for example, a bunch of quantities you care about, and so on and so forth. And this is probably one challenge that we are really facing these days with machine learning models-- I would say probably one of the most important challenges. The reason is that you have a distribution shift: your training and validation distributions are very different from the distribution you will eventually be tested on. Or maybe they are similar, but there are some kind of special subpopulations that make them different. So for example, you train on San Francisco street views, and then you test on Arizona street views. When you build an autonomous driving car, you train on some street views, and you have to test in some other places. That creates a distribution shift, and then you can have surprises. And there are not so many good ways to deal with this, except that you have to be careful about it and design new algorithms. And this is incredibly hard-- there are no real solutions in industry, and I don't think there are real solutions in research either. So we're going to have one guest lecture by James Zou about the robustness of machine learning models. There is a lot of recent work on it. We definitely have better algorithms now for being more robust, but I think the robustness is still not ideal. So I'll just jump to step seven: you have to repeat, and look at your data. I released the longer version of the slides if you are interested in some of these things. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Exponential_family_Generalized_Linear_Models_I_2022_I_Lecture_4.txt | All right, let's get started. So today, we're finishing our first kind of piece of the tour of the very basics of supervised learning. We're going to talk about these family of models that's called the exponential family of models. And why we care about these models, as we talked about, is they're going to allow us to generalize basically the kinds of models that we were using before to a wider range of error modes, OK? Of different kinds of errors. And they'll also come back and play a starring role when we start to tackle unsupervised learning, where we don't have access to a target variable. And the underlying mechanics that we'll use here will set us up quite nicely for that. Just in terms of pacing, in terms of the course, what happens is I go away for a little while, [INAUDIBLE] going to come in and talk to you about a bunch of different things-- kernels, SVMs, and deep learning. And then I'll come back to teach you a little bit about the unsupervised learning piece, which is, again, like a two week block where we see, from first principles, how those things work. And that area, I have to say just as a plug for what's coming, that area is something that's been really exciting, kind of thrilling over the last couple of years, how much we can learn without label data, or with really weak sources of data. That's been like a revolution in machine learning. So hopefully I can share some of that excitement with you. The thread is up-- the lecture thread is up on Ed, if you want to ask questions, as usual. And everything is online before the lecture. I didn't put out a template today, because I'm going to handwrite almost everything. All right, so what are we doing today? We're going to learn about these exponential family models, and they're basically going to be what you already with slightly fancier notation. Basically we've been ramping up the fancier notation each time, and generalizing, as appropriate, how we want to go through them. We'll do the definition and the motivation. And the definition, at first, will look simultaneously a little bit weird and kind of like, oh, that doesn't really mean anything, it doesn't have any content. And then it will also look to you like it's impossible to satisfy. And that's kind of true, right? So these are fairly interesting objects we're going to be looking at, but they have a very nice kind of canonical form. We'll then do a bunch of examples, a couple of different examples. And the notes here, the master notes here, are really good. I would definitely recommend going through them, the type notes, just that you work through a couple of details. This is something that you're going to get a high-level piece of how we go through it. Just do the calculations once, and you will be convinced of like all the different claims that are in the lecture. If you try to reason about them without doing the calculations, it just makes your life more difficult than it needs to be. So just go through it once. It shouldn't take too long, but I'll give you kind of a high-level tour today. So we'll do that definition of motivation. We'll do a couple of examples. And then last time, we were talking about this question of how do we deal with multiple classes, right? Last time we were talking about binary classes, yes or no. 
Now, if we want to have multiple classes out there-- you want to know if there's a dog, or a pig, or a horse, or whatever in there, this is called multiclass classification. And we'll talk a little bit about our friend softmax. What you really get out of this is that it will look kind of to you in the end like, oh, OK, that seems pretty reasonable. But you'll learn about some encoding that is fairly widespread, called the one-hot encoding, which you would need practically if you were going to actually use any of these kind of things. I believe also your homework is out, but don't quote me on that. I think it's out now, and I saw there were some questions ahead of class about that. Any other questions before we get started? Oh, please. Just a quick question in regards to the homework. Oh yeah, I'm the wrong person. You're free to ask. Like, I'm just telling you I don't-- [INAUDIBLE] Oh yeah, go ahead. Yeah. So we are blessed at Stanford with many great things. We have wonderful weather. We have incredible faculty. We have the best students on the planet. And we also have some great core support, and so I have no idea. All right, so let's see what's next. OK, so the exponential family. Now I want to be clear, I have mixed feelings about how I present this. Because on one hand, I want to convey to you the unbelievable historical significance of this, and why you should this kind of by rights of machine learning. On the other side, there's an argument to be made that a lot of modern machine learning is not going to use this formulation and this framework. But if you start to read papers, it's canonical enough that it will come up in various different places, OK? So I don't want you thinking like, oh, this is all you can do when you model machine learning. This is, like, where the field is stopped. It's important historically, it is very nice to understand, it has a bunch of properties we care about, but it's not the state of the art, right? It's not what I would-- go home and use exponential family models. It's weird that I go home and use any of these things, and I do, but it's not this one. All right. So what we want to do is we want to have the following idea here, is that-- and this is why it was-- it's so beautiful, and will come back as a form that we want to think about. If p has a special form, which I'll show below-- special form-- then some questions come for free. Some inference, learning come for free. What do I mean by for free? I mean what you already know automatically applies to them. And when we start to worry about more complicated models, this will form a subroutine that we'll use again and again. Like oh, if we can reduce it to that form, then we're in good shape. That'll be the way we get to things like unsupervised learning. All right. Now, the form looks like this. OK? Now, one thing I should highlight as I go through this-- so this is the data which you know. You already know this character. So data are labeled. This thing is called the natural parameters, OK? And I'll define the form in a second. The reason I want to highlight that is one place where you're likely to get kind of tripped up when we go through this is that there are three sets of parameters. And I'll come back to that later. So if you're confused, there'll be natural parameters, canonical parameters, whatever. Doesn't matter. You'll see them all written down at one point, and that will kind of explain the mappings between them. 
But there are many different names for parameters in this lecture, and that's like the essence of understanding it. And the reason that's important is the natural parameters, if you like, are there so we can write this functional form: $p(y;\eta) = b(y)\exp(\eta^\top T(y) - a(\eta))$. This is where it gets the exponential name. So I'm going to unpack this for a second, OK? Now this form says, basically, my probability distribution factors, if you like, or can be written in this form. Not every probability distribution can. The way you show that a probability distribution can be written in this form is you write it in this form. There's no secret shortcut here, right? You have to be able to express it as some linear thing in the parameter. So $\eta$ here is the parameters, the natural parameters, times some $T(y)$-- I'll unpack what $T(y)$ is in one second-- minus this thing $a(\eta)$, which is the partition function. So that's what it says. It says that your function-- there are many functions in the world that could be probability densities-- this one has this technical form. We'll unpack this. It shouldn't be obvious that these things are important or exist. So $T(y)$ is called the sufficient statistics. Now in this course, primarily we'll use $T(y) = y$. We won't kind of massage the data. But you can think of $T(y)$ as capturing everything that's relevant in your data-- these are the things that you're modeling. And so in this example, we're keeping everything: $T(y) = y$ means it's just $y$ itself. Now, another thing people get confused about: $T(y)$ is necessarily the same dimension as $\eta$. Right, why is that? Well, we take their dot product. $\eta^\top T(y)$ is a dot product here-- you could also write it as $\langle \eta, T(y)\rangle$, if that's clearer to you. It's an inner product between the two, OK? So to take the inner product, and for it to be meaningful, those have to be vectors of exactly the same dimension. So if you want to have so many parameters, you have to have so many sufficient statistics. OK, so far, so good? $b(y)$ is called the base measure. It's not the most critical element here, but you need it. The intellectual content of that is that $b$ depends on $y$, but it does not depend on $\eta$. So what's going on here, if you like, is: this $b(y)$ term has all the interactions with $y$ alone, the $a(\eta)$ term has the interactions with $\eta$ alone, and the only way that $\eta$ and $T(y)$ interact is through the inner product term. It's really a statement about how they interact. These functions are pretty powerful, right? They're just arbitrary functions. But when they interact, they interact in a linear way. That's why these are sometimes called generalized linear models. All right, want to keep staring at it? This character, $a$, is often called the log partition function. And, as I just said, it does not depend on $y$-- only on its friend $\eta$. Now this thing is picked effectively as a way of normalizing the distribution. It's a probability, so it sums to 1 when I integrate it, or when I sum over all the discrete values of $y$. So that may make you think that $a$, in some way, is not the star of the show, that it's just this thing that kind of adds up everything.
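For reference, here is the boardwork collected in one display-- this is just the canonical form described above, nothing beyond the definitions already given:

$$p(y;\eta) \;=\; b(y)\,\exp\!\big(\eta^{\top} T(y) - a(\eta)\big),$$

where $\eta$ are the natural parameters, $T(y)$ the sufficient statistics, $b(y)$ the base measure, and $a(\eta)$ the log partition function.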
But actually, this log partition function contains almost all the information, it turns out, of the function. And that's a weird thing, but it's true. So we'll talk about that. So this contains a, and then you have this linear interaction term where the statistics interact with the data. OK, now, just to make sure it's clear, y, a, and b are scalars-- scalar functions. So a of eta, b of y, are scalars, OK? Just to make sure the types are clear. And these two characters have the same dimension. OK. So far, so good, right? So let's highlight where everyone-- where all these characters are. So you just see them visually the same n-wise. OK, so far, so good. All right, now here's the crazy thing. Many of the distributions that you've encountered in your life-- I don't know how often you encounter distributions. But if you encounter them at a relative frequency, a lot of them are of this form. And that was a huge win for statistics because they said, all the stuff we're doing on distributions, a lot of it can be mapped into this what looks like, as I said, a trivial statement, and at the same time, seems also impossible that it would be there. So let's look at some examples, and see how we map into this form. Probably you're thinking about-- you are going through some distributions in your head, if you look at them, and you're like, I'm not sure that it is of that form. So let's look at the simplest version of that, and see that actually, yep, these things are of the form. So let's look at some examples. Before doing that example, I just want to-- are there any questions about the content of this? And I'm happy to defer, so please feel free to ask a question, and I can tell you to defer if there's something else. Clear enough? Yeah, please. Of eta x disappeared because, I guess, I didn't know we were-- There's no x in this equation. Don't worry about it. He'll come back in a minute. Yeah, wonderful, wonderful point. Yeah, there's no x here. There's just a y, which is your data. There's no x in features, and they're going to make a-- they're going to make an appearance later. They will be one of these parameters. Yeah? Also, in [INAUDIBLE] there was a couple of people-- are we talking about the whole [INAUDIBLE]?? Yeah, it's a PDF. Oh. It's a PDF. And it says y given eta. It's not given. Remember, it's our semicolon. Those are the parameters, remember? There's a bar which says given, which is condition, and then there's a semicolon, which says these are parameters. Yeah, see these etas are the parameters of the model, right? And they're going to sweep up all that nasty notation that I talked about last time. Wonderful questions. Please? [INAUDIBLE] and we said that t of y is equal to y. Does that mean qy is also [INAUDIBLE]?? In some of the examples that we'll see later, yes. Yeah. It doesn't need to be. Just to make my life a little bit simpler in this class. It's not a requirement of the model. Awesome. Wonderful questions. OK, this is a requirement, just so we're clear. These have to have the same dimensions, otherwise it doesn't make sense. I'm not saying something deep. I'm just saying, otherwise it will-- your brain should seg fault. Like, what am I taking a thing and multiplying it by? Because this has-- this entire expression has to be a scalar for it to make sense. OK, great. So let's take a look at an example. So probably the first example that you should think about, I would guess, are Bernoulli's, right? Bernoulli random variables. So what are these? 
So we're going to have some probability $\phi$ of an event, right? I guess the most common thing people use is flipping a biased coin-- some probability that it's heads, some probability that it's tails, something like this. So here, we've seen this before: $\phi$ had this form. Remember this form that we used? Maybe this looks a little bit familiar: $p(y;\phi) = \phi^y(1-\phi)^{1-y}$. When $y$ is 1, it's the first term; when $y$ is 0, it's the second term. Just a compact way to write it-- $p$ and $1-p$ is all I'm writing here, if you think about a heads-tails distribution. All right, so this is not obviously of the form. Let's go get the form-- let's go get our friend from up above. How are we going to put it in this form? And I'll draw a box around it. All right? Now when we do this, we'll be like, well, we've got to get an exp somewhere, right? I mean, that seems pretty natural. So we're going to take the exp of the log: $\exp(y\log\phi + (1-y)\log(1-\phi))$. OK, so far so good. We're making a little bit of progress. Well, the problem we have right now is that the $y$'s are interacting in two places, right? So what do we have to do? Well, we have to bring all the $y$'s together, because the $y$'s and the $\phi$'s-- the parameters-- they're not $\eta$'s yet. We'll give ourselves some freedom, but intuitively, the parameters should all be grouped somewhere together. So what is that going to be? $\exp(y\log\frac{\phi}{1-\phi} + \log(1-\phi))$. Now we seem to be in kind of the right situation, because we have this first term, which kind of looks like the interaction term, right? $y$ and the model parameters are interacting. And we have something here that's isolated, that's just a function of the model parameters. Now it's going to turn out that $\phi$ is not going to be equal to $\eta$, right, because this thing here is not of the right functional form. So we have to figure that out. So what's our best guess for what this $\eta$ will be? Well, why not what it looks like? So to show that it's of the right form, we postulate that $\eta = \log\frac{\phi}{1-\phi}$, OK? Now, that means this other thing has to be some function of $\eta$, right? Now that seems, on one hand, kind of obvious, because it only depends on $\phi$, and that's a local transformation of $\phi$. But we're going to solve for it explicitly, if that kind of hand-waving implicit-function-theorem argument bothers you, OK? All right. So here, our goal is: we're going to take $T(y) = y$ again, we're going to take $\eta$ that way, and we want to understand what the value of $a$ is. And what we claim is that $a(\eta) = -\log(1-\phi)$. All right, let's check that claim, right? That shouldn't be hard. And so why does that work? Well, copy this guy: $\eta = \log\frac{\phi}{1-\phi}$. So this goes to-- well, I just take $e$ of both sides: $e^\eta = \frac{\phi}{1-\phi}$. Then I move this thing across, right, so that I can do whatever I like. Then I want to make sure I get all the $\phi$'s on one side, so I get $e^\eta = \phi(e^\eta + 1)$. And I'm off to the races, right? So now I have $\phi = \frac{1}{1+e^{-\eta}}$, OK? Now, this means, by the way, that $\log(1-\phi)$-- well, that's going to be equal to-- oh, sorry, did I screw that up? Let's make sure I didn't screw that up. I'm just going to redo this piece just to make sure, because I didn't do it in my notes. It's going to be-- oh, no, I'm right: $1-\phi = \frac{e^{-\eta}}{1+e^{-\eta}} = \frac{1}{1+e^\eta}$. OK, so now I take minus the log of this, right.
So $-\log(1-\phi)$ is going to be equal to-- this is a lot of arithmetic to do in one sitting, but that's going to be $\log(1+e^\eta)$. OK, so far, so good. So why does that satisfy me? Because that's a function of $\eta$, right? Now as I said, it's kind of straightforward from here, because these things are just functions of each other. But this is technically what we needed to do to show that these things are actually equal. And in fact, we're in good shape now. OK? So what am I saying? I'm saying this seems trivial on one hand, because you're like, wow, I could just put in whatever I want. But you can't put in whatever you want. You have to first separate out the interaction between $y$ and the parameters in the form, and then you have to be able to pull out the term that depends only on the parameters. Does that make sense? Let's see another example. Please? I guess it's fine to have eta be a function of the parameter, and not necessarily the parameter [INAUDIBLE] Right, and it is a function. Yeah, that's just a function of $\phi$-- this one just happens to be a different one, OK? Now here's what's weird. I don't know if I should really mention this, but you should look. Remember I told you all the action was in there? You can kind of look at this encoding. This is the log partition function, so a log is what we expected. And this $\log(1+e^\eta)$ thing-- well, if you think about the different weights on the states, this is actually encoding the fact that there's a positive and a negative state. Try to think about why that might actually be the case. In fact, if we start to compute the derivative of this thing, it's actually the expected value of $y$. That turns out not to be an accident, OK? All right. Before we get into that, let's go down one more. So what do I hope you got from this? It's a tedious thing to walk through. You can and should walk through these on your own-- there are three examples in the notes. You should also walk through the logistic kinds of examples, and the others. Basically, the whole point is I want to make it super clear what the statement means. I don't expect anything here to be mind-blowing-- I don't think our use of fractions is going to change your lives. I'm just asking: is the content of the statement clear, right? You give me a probability distribution in one form, and I translate it into this same functional form such that it satisfies these conditions. And then, in that case, this $\eta$ is now what's called the natural parameters, OK? And you're typically not given the function in the natural parameters, right? And you're going to be responsible, on homework and in other places, for doing this. And the reason why is, if you can do this mapping, then a bunch of stuff gets easy. Inference gets easy, learning gets easy, because now it turns out that you can show-- and we'll talk about it in a second-- you can do gradient descent on these parameters, and it's going to be concave. And that's wild, that you can solve all of these models the same way, OK? So I just want to make sure that the functional form is clear. And the reason we're doing it is because it's going to simplify some stuff. Please? Couldn't you use the T function to massage it into that form? Yeah, awesome. So could you use the $T$ function to massage it into the form? Now, in this class, if you find yourself doing that too aggressively, you've probably done something wrong, just as a heads up, because we don't use it too much. But yeah, you could do that with $T$.
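To collect the completed Bernoulli mapping in one place before the next example (this is just the algebra above, nothing new):

$$p(y;\phi) = \phi^{y}(1-\phi)^{1-y} = \exp\!\Big(y\log\tfrac{\phi}{1-\phi} + \log(1-\phi)\Big),$$

so $b(y) = 1$, $T(y) = y$, $\eta = \log\tfrac{\phi}{1-\phi}$, and $a(\eta) = -\log(1-\phi) = \log(1+e^{\eta})$, with the inverse map $\phi = \tfrac{1}{1+e^{-\eta}}$ (the sigmoid).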
If $T$ were expressed in some way, and you were only modeling a piece of it as a result of this-- saying the probability distribution didn't depend on one part, like $T$ was a projection-- you could do that, too. But this form is pretty hard to get around. So I think you're thinking in exactly the right way, kind of like, how do I get around and break this? And basically what it's saying is: your interactions with $y$ are arbitrary, your interactions with $\eta$ are arbitrary, and the interactions between them occur in the exp, right, and they are linear. And once you have this linear interaction term, whatever the function $T$ is, those sufficient statistics are what you're modeling up to. And some folks asked-- and I answered some advanced questions on Ed, which none of you are responsible for knowing-- about deriving things like why logistic regression is calibrated in certain ways on data. Those features, those sufficient statistics, are what feed into those arguments. So up to these sufficient statistics, this is how well you do. So when you play the game of massaging it, you're either throwing away information or you're not-- in which case, that's what we do here. So it's a modeling choice. You could do it, nothing prevents you, but it means something about what you're doing under the covers. OK, this is awesome. So now you've learned something which, again, hopefully seems sort of trivial. You're like, oh, I can take that distribution of heads and tails and put it into this weird functional form. And that will be interesting, because one thing you should test your understanding on is: can you now, given a bunch of samples of heads and tails, estimate the underlying parameter? Computing derivatives in the original form seems a little bit nasty, right? There are $\phi$'s and $y$'s up in exponents; it looks like a weird function. Once I put it in this form, all of a sudden it's nice, and convex, and life is good. But I have to estimate this natural parameter, not the original one. Is that making sense? Please ask me a question if not. We'll see one more example, and we'll come back to it. This is the only important thing we need to do now. Let's look at the Gaussian, example two. We'll only do these two examples, OK? This is the Gaussian with fixed variance. This one's really good because-- remember what the probability of $y$ is? It's going to look like $p(y;\mu) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\big(-\frac{(y-\mu)^2}{2\sigma^2}\big)$. Let's make $\sigma^2$ equal to 1, actually, to make my life a little easier, OK? Just for simplicity. Now how do we get that into our favorite form? So let me go copy our favorite form. Seems to be pretty close, right? I mean, we're in pretty good shape. What do we need to do? Well, we have to factor it in some way. So this constant, we can absorb anywhere-- we don't care about that. We have to somehow pull this exponent apart, OK? So how are we going to do that? We're going to keep the $\frac{1}{\sqrt{2\pi}}$ out front, and then we're going to pull out the $e^{-y^2/2}$. So I'm just going to expand the square, right? The exponent is going to be $-\frac{y^2 + \mu^2 - 2\mu y}{2}$. That's what this term is going to be, right? So it's straight multiplication. So I factor out the $e^{-y^2/2}$ part. What does that leave me with? It leaves me with $\exp(\mu y - \frac{1}{2}\mu^2)$. So what are our natural parameters?
Well, $\eta$ is $\mu$-- which is why I almost accidentally wrote it right at the beginning, which would have been a little bit weird to do. $T(y) = y$, and $a(\eta) = \frac{1}{2}\eta^2$-- oops, $\frac{1}{2}\mu^2$. Well-- oh, yeah, OK, right, they're the same, because $\eta$ is $\mu$. Now again, notice: if I differentiate this thing-- what's the expected value of this distribution? Well, it's $\mu$, right? And when I differentiate $a(\eta)$, I get exactly $\mu$ back, which is kind of interesting. It's trivial here. In the last example, it was non-trivial: it was a weird function that I differentiated, and I got back the actual probability, right? Which is kind of bizarre, that I would get that back. OK? By the way, if that's not clear, differentiate this function with respect to $\eta$, and see what you get. OK? Does that make sense? So just to make sure we're verifying everything-- yep, this looks right. This is my $b$ term. I'll highlight different colors. So this is $b(y)$, and this thing is the log partition function. And what I'm just trying to highlight is that this thing contains a lot of information about the distribution, in both of the examples we've seen. Please. [INAUDIBLE] No. It's a wonderful question. So right now, we're worried about the case where $y$ is going to be a scalar, which makes our lives a little bit easier. So we're going to look at that, but you just have to have that $T(y)$ and $\eta$ actually resolve to a scalar-- that they have the same type. So if your $y$ has multiple dimensions, then you need more natural parameters. And that's actually pretty important, because that's why we call them natural. It's like, your problem has dimension $d$? Then you need $d$ natural parameters. [INAUDIBLE] There is not, no. The thing is, I wanted sigma squared to be fixed-- like, I didn't want it to change per data point. That was important, and so it was easier to just write 1 there. If you put sigma squared in here and it was just a constant, then you would just push it into the appropriate spots and be done; it would just fold into these terms. If sigma squared were something that changed per data point-- like we were trying to estimate, for every data point, not only its mean but its possible variance. Like, I give you a temperature reading, and I say, from all the data I've seen, I think it's 30 degrees. But I know that I haven't seen enough data, so I'm like, plus or minus 2 degrees. If I've seen tons of data, and I'm very confident, I'll say plus or minus much less. That is an estimation where sigma is part of the model, and then you would have another free parameter for it. Great question, and try to write that out. I don't know if that example is written out in the notes. If you get stuck or whatever, please send me a note-- I'll write it up for you on Ed. Does that capture your question? Yeah. Awesome. OK. Yeah, awesome. So, two questions on the live thread. The examples to go through on your own are the ones in the typed, written notes, which contain these and one more-- just go through them on your own. And I'll just wax poetic for one minute. If you haven't studied for a mathematically-minded course: the way that I always did it-- and I still actually read textbooks, and course notes, and everything else-- is I read them, or I watch the lecture, whatever it is. Then I try to remember those key spots, and I try to derive them myself. It's the fastest way to figure out what didn't stick.
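In that spirit, here is the completed Gaussian mapping collected in one place (again, just the algebra from above, with $\sigma^2 = 1$):

$$p(y;\mu) = \tfrac{1}{\sqrt{2\pi}}\exp\!\big(-\tfrac{(y-\mu)^2}{2}\big) = \underbrace{\tfrac{1}{\sqrt{2\pi}}\,e^{-y^2/2}}_{b(y)}\cdot\exp\!\big(\mu y - \tfrac{1}{2}\mu^2\big),$$

so $T(y) = y$, $\eta = \mu$, and $a(\eta) = \tfrac{1}{2}\eta^2$.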
So if you're like, oh, I can write the derivative in two minutes, and you walk away-- well, OK, you know everything for the lecture. If you get stuck, it's a really good signal that you don't know it yet. And if you don't do it, that builds over time, and something you thought was trivial but didn't actually put the time into will end up biting you, right? Just a lesson that I learned from too many years at the university. Why is the derivative equal to the expected value? I'm not going to prove that; I will just assert it here. I just want you to observe that in both cases, it's true. It's a wonderful question-- it just takes a little bit of arithmetic to show. All right. OK, so I'm just going to put those assertions in here now. So why do we care about this form? The first is what we said: inference is easy. The expected value of $y$ is the partial derivative of the log partition function with respect to the natural parameters: $E[y;\eta] = \frac{\partial a(\eta)}{\partial\eta}$, OK? I would encourage you to compute this on all the examples. Proving the general statement just takes a little bit extra, but on all the examples you've seen so far, it's clearly true. And why it's true in general is because $a$ is the log of the sum over all the possible outcomes. Proving it for continuous stuff takes a little bit more effort, OK? So don't worry about it. But we've seen this is true, OK? And this pattern holds: the variance is the second derivative, $\mathrm{Var}(y;\eta) = \frac{\partial^2 a(\eta)}{\partial\eta^2}$, OK? Now you may think this pattern keeps going-- like, oh, the third derivative gives-- no, it doesn't work that way. Just these first two. Those are the only ones that work. So why is this so interesting to me? One, because once you take your distribution, whatever the crazy distribution is, however wild it is-- and distributions can get pretty insane, there are uncountably many of them-- you put it into this form, and you basically have a mechanical procedure to do inference and to do variance estimation. Inference is more important to us, but that also means that you can do learning, right? And in fact, learning is well defined, OK? In particular, this function is going to be concave in $\eta$. OK, let me write it this way. The MLE-- remember we did the maximum likelihood estimator for all those things previously-- is concave, OK? Please. [INAUDIBLE] That helps. Yeah, so that is definitely a piece of it. You have to do one extra step, but you're exactly on the right track-- that's exactly how you can go and prove it. Just compute it directly, and see that it's positive in the way that you think. If the second derivative is positive everywhere, then it's convex-- there's a negative sign in front, which flips it to concave, but that's a small thing. Wonderful. Yeah? [INAUDIBLE] Exactly right. If you remember last time, the way we framed all of our estimation problems was: take the log likelihood that was there, $\ell(\theta)$, and then use gradient descent on that. And this is basically saying that the resulting formulation, if I use the natural parameters, is always guaranteed to be concave. Please. [INAUDIBLE] Yeah. So the thing is, you can compute this directly by computing the derivative and pulling it out. The way that you do it-- I'm not going to prove it in class, but I'll just tell you, it's not a mysterious statement-- is, for the discrete case, you look at the fact that it's a log partition.
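Here is that computation written out for the discrete case with $T(y) = y$ (the continuous case replaces sums with integrals); the next paragraph walks through the same argument verbally. Normalization forces $e^{a(\eta)} = \sum_y b(y)\,e^{\eta y}$, so $a(\eta) = \log\sum_y b(y)\,e^{\eta y}$, and differentiating gives

$$\frac{\partial a}{\partial\eta} = \frac{\sum_y y\,b(y)\,e^{\eta y}}{\sum_y b(y)\,e^{\eta y}} = \sum_y y\,b(y)\,e^{\eta y - a(\eta)} = \sum_y y\,p(y;\eta) = E[y;\eta].$$

You can also check both assertions on the two examples: for the Bernoulli, $a(\eta) = \log(1+e^\eta)$ gives $a'(\eta) = \frac{e^\eta}{1+e^\eta} = \phi = E[y]$ and $a''(\eta) = \phi(1-\phi) = \mathrm{Var}(y)$; for the fixed-variance Gaussian, $a(\eta) = \frac{1}{2}\eta^2$ gives $a'(\eta) = \eta = \mu$ and $a''(\eta) = 1 = \sigma^2$.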
That means it's the sum over all the possible worlds, meaning all the possible ways that $y$ could be assigned, right? So there's going to be a term in there for each one of them, because it sums over all of them. That's what it does. So maybe this is getting way too abstract and mysterious-- maybe it's better to show it. So if I look here, at just this part of the distribution, $b(y)\exp(\eta^\top T(y))$, this may not sum to 1, right, if I just started to do it. So if I sum this expression over $y$-- let's call it $g(y)$-- it may not equal 1. It's not guaranteed to be equal to 1, OK? Because it's just some collection of values, and some other stuff. The $e^{-a(\eta)}$ factor is what makes sure that it's equal to 1. So it's the scalar, as we were talking about before, that divides out whatever this sums to. It's sometimes written as $\frac{1}{Z}$, where $Z$ is the partition function. However, that means that one derivation of $a$ is: you sum over all the different values, and $e^{a(\eta)}$ has to be equal to $\sum_y b(y)\exp(\eta^\top T(y))$, right? Because it has to cancel out-- the whole thing has to actually equal 1 when I compute it. And that means that $a$ is basically of the form of a log of a sum over all the possible values of $y$. And so if you compute the derivative of that log, each one of the $y$ terms comes down next to exactly this functional form. All right? So if that's too mysterious-- I didn't want to prove it in class, because it gets strange, but I'm happy to write out the proof. It's super straightforward. OK, awesome. So all I care about is this, though. I'm trying to give you a guided tour. I want to do enough details that you see all the pieces, so you can go back and understand how they work, but I don't want to get bogged down in things that I don't think are super critical for you to understand. And also, I don't want to do things that I think you should do on your own, because doing them mechanically will teach you better than me inscrutably writing, for the hundredth time, how to prove this. But if I'm wrong, and you want to watch me inscrutably write, I don't know, you can log into a Twitch stream or something. Awesome. All right, all good. So, clear enough why we did this: barring these assertions, if you get your probability distribution into this form, then all of a sudden you get inference, and learning, and a bunch of other stuff for free. However, as was correctly pointed out, there was a little bit of a bait and switch here. We started, last time, to think about various different models, and how we put those models inside. So where did the data $x$ go? And that's what we actually need to figure out here, OK? All right, so let's talk about generalized linear models. OK. So the point that I really want to get across to you is: these are all design assumptions, OK? These are all design choices that you can actually make in your model. And we'll get to them, and talk about what you want to do. And you can also think about them as assumptions. All right, so first, we're going to claim that the distribution of our label $y$, given $x$, for some parameters $\theta$, follows an exponential family, OK? Now I claim, without much justification here, that this is an important family.
Now, you can say it's an important family because, as we'll talk about, many different data types that you've seen fall into it. So if you look at binary things-- $y$ is binary-- well, that's Bernoulli, right? So we looked at that classification. If you want real-valued $y$'s, well, we have a couple we could use, but we saw Gaussians. If you want to do counts-- like you're actually counting how many people walk by a particular thing, or how many packets arrive at a server, or something like that-- then that's a different distribution. It's called Poisson, OK? If you want the positive real line, well, there are two different fancy distributions, called gamma and exponential, right? You don't need to know these per se. Laplace is also in this family. And if you want distributions over distributions, there's one more-- this is called the Dirichlet distribution. So what am I trying to get across here, kind of by proof by writing a lot and gesturing wildly? These are probably most of the distributions you've heard of. There are more that I'm not writing down, but they all fall into this exponential family. Now, one hypothesis is that we figured out this technology first, and that's why we describe distributions this way, but that's actually not true. We went the other way around. We were doing things ad hoc for each one of them, and this tied them up and put them together nicely. So you pick your error mode, and the way you pick the error mode is it has to have the right type. If you're observing binary data, you want to use a Bernoulli, right? Or something that has a binary type. If you're observing real data, you want to use a Gaussian; counts, a Poisson; and so on. So there's a data-type mapping here. The second assumption you make is that your natural parameters-- and this is where they come back in-- are of the form $\eta = \theta^\top x$, with $\theta \in \mathbb{R}^{d+1}$ and $x \in \mathbb{R}^{d+1}$, OK? So here we're going to make the assumption that, subject to noise, our model varies linearly with some underlying features. Now, you're going to see later that this is actually a more powerful assumption than you realize. If you take your model to be very, very large, and have a huge number of features, almost everything becomes linear in that space. High-dimensional geometry is very, very weird. So it's not like low dimensions, where you think, oh, there are only so many lines I can draw, I can't separate out my data. If you take your data and put it up into a huge-dimensional space, odds are it will be linearly separable-- there will be a line through it. We'll come back to that, OK? And then, three, once you've made those assumptions, your inference is super easy at test time: you output $E[y \mid x;\theta]$, OK? Said another way, $h_\theta(x) = E[y \mid x;\theta]$. OK, so hopefully this makes sense. So I'll walk through exactly what's going to happen in one second, just to make sure it's clear, but this means that we're doing this prediction. And one thing that we sneaked past you was that, when we went to logistic regression, all of a sudden we started with these hypotheses where, instead of returning yes or no, we just returned a probability distribution over it. And that was a change, right? When we were doing regression, we returned just a value.
Here we're just returning, actually, the probability that you think $y$ has a particular value. That's what you do for inference-- that's how inference is defined. So let me make sure it's super clear how this works. Your data comes in as $x$. It then goes into your linear model: you compute $\theta^\top x$ with your parameters. This is your box. You get out $\eta$, right? $\theta^\top x$ becomes the $\eta$ parameter you had before. You feed that to your exponential family model, which does whatever it does-- there are $b$'s, and $T$'s, and whatever in there-- but you now have your value for $\eta$, right? You can do whatever you want. And then if you want to train, you do $\max_\theta \sum_i \log p(y^{(i)} \mid x^{(i)}; \theta)$. If you want to do inference, you compute $E[y \mid x; \theta]$. This is learning; this is inference. OK, so all you pick here is: you pick your data, you pick the features, and then you run this procedure, and everything's kind of automated for you, in the sense that you now have a general recipe to do maximum likelihood estimation and to do inference. And it's not obvious how to do that, by the way. There are scenarios where we don't know how to do maximum likelihood estimation. So right now, your universe is like, oh, everything you've shown me, you've done maximum likelihood estimation, it's been really easy. I'm like, yeah, that's fair. But there's a big world out there of stuff that is hard to cram into this. And so what this says is, if you can put it into this form, maximum likelihood estimation and inference become super, super easy. And learning here has a nice form-- you can directly check this: $\theta_j := \theta_j + \alpha\,\big(y^{(i)} - h_\theta(x^{(i)})\big)\,x_j^{(i)}$. So why is that the case? Well, we saw it was the case in all the other models. You just have to go through, compute the derivative, and convince yourself. But now that it's in this form, computing the derivative with respect to $\theta$ is easy, because $\theta$ occurs only inside $\theta^\top x$-- we'll pull that out. So one of the reasons I was delaying proving it in the special cases was because you have to do all of these transformations to put it into the right, nice form. And then when I compute the derivative, my life gets really, really easy. If you compute it in the original parameters, it looks weird. But if you compute it this way, life is pretty good. And this $h_\theta$ here, by the way, is always the hypothesis. Right? OK. So far, so good? Please. I want to ask, is this kind of different from what we did in [INAUDIBLE]? Yeah, so-- actually, it's a great question. Let me do one more example, and then hopefully that will become clear about how they relate. So let me just run through logistic regression, then. I think that will probably answer that question, and it'll show exactly how they fit together. Terminology. OK, so there's the model parameter. This is $\theta$. There's the natural parameter, which in a generalized linear model we always substitute as $\eta = \theta^\top x$-- so this was the $\eta$ before. And then there are the canonical parameters. So these were like $\phi$ for Bernoulli, OK? Or $\mu$ and $\sigma^2$ for Gaussians. OK? And this $g$ here, we're going to call the canonical response, and there's its inverse, $g^{-1}$. So $g$ is called the canonical response. OK, so why do I do this? I want to make sure it's really clear what all the pieces are, and what's going on here. There are some model parameters.
That's the thing that we're going to solve with respect to-- the thing that we're going to do the gradient descent on, the thing that we're going to compute $h_\theta$ with respect to. $\theta$ is then dotted into the data-- that's this $x$. This is data right here. That becomes the natural parameter, which the exponential family model now tells you how to operate on, and then I can do everything I want with the natural parameter. That's what tells me the distribution. And so in the case we're doing, logistic regression, which we'll talk about in one second, you have a linear thing, and then your errors are of the form: I sometimes switch the class, if you like-- I get the wrong answer with some probability. And then there's this link to the canonical parameters. And the content of what we're talking about here is: people write distributions down in whatever messy canonical form we found them in, and I'm asserting that a lot of them can be put into that nice exponential form through what's called the canonical response function, or its link function, and that allows us to treat them all in the same way. So this is super important, because when you encounter one of these distributions, you probably encounter it in the messy form; you put it into this form, and then that lets you do everything that we just talked about in one clean way. Learning becomes this nice, simple rule. Inference becomes this nice, simple rule. OK, awesome. All right. So let's look at logistic regression, just so it's super clear what that means. All right, so $h_\theta(x)$-- well, we said it's the expected value of $y$ given $x$ and some parameters, $\theta$. With $\eta = \theta^\top x$ and the Bernoulli response we derived, the model is $h_\theta(x) = \frac{1}{1 + e^{-\theta^\top x}}$. So this $\theta$ was our model parameter; this exponent is what we get after we transform to the natural parameters-- that's this character-- and this is what we wrote down last time. So when we went to do logistic regression, remember we had a hypothesis that looked like this-- this was our sigmoid, or logistic, function. And what I'm saying is that the derivatives and everything else, which I skated past last time, I now no longer have to skate past, because I just made it more abstract: I transform it into this parameter space, and I'm good. So on one hand, as I said, it's totally trivial-- I'm just doing a transformation of how I represent the numbers. But it also seems weird that I can do this, and I'm asserting that in these cases, I can. And that's what allows us to go and treat all those different distributions in one way. So if I give you some features, you dot them forward, you learn a model. And then maybe you have an error mode like: I'm looking at counts, I'm observing them. Counts have a very different distribution than the errors I would expect on 0-1 things, or the errors I would expect in linear regression. I just plug that natural model in, and out falls a pretty reasonable class of machine learning models. And you may say-- the thing that people usually react to is they say something like, well, what if I want to do something that's more complicated than linear? But linear is pretty powerful. I can take my features and square them. I can multiply them together. It's still linear in the parameters, right? Feature five could be the product of the preceding seven.
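As a concrete illustration of the pipeline just described, here is a minimal NumPy sketch of the Bernoulli GLM (logistic regression), trained with the generic update rule above. The synthetic data and all the names are mine, not from the lecture, and this is stochastic ascent on the log likelihood, not a tuned implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: each x is in R^(d+1), with an intercept feature of 1.
n, d = 200, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
true_theta = np.array([0.5, 2.0, -1.0])

def h(theta, x):
    """Bernoulli GLM hypothesis: h_theta(x) = E[y|x] = 1 / (1 + e^(-theta.x))."""
    return 1.0 / (1.0 + np.exp(-(x @ theta)))

y = (rng.random(n) < h(true_theta, X)).astype(float)

# Generic GLM stochastic update: theta_j += alpha * (y_i - h_theta(x_i)) * x_ij.
theta, alpha = np.zeros(d + 1), 0.1
for _ in range(100):
    for i in rng.permutation(n):
        theta += alpha * (y[i] - h(theta, X[i])) * X[i]

print("learned theta:", theta)  # should land near true_theta
```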
And so this turns out to be a wildly popular class of machine learning models. In fact, there are entire books that are written about generalized linear models. And there's a citation to McCullough, which is the standard reference. I'm not sure I advise necessarily reading it, but it's the standard reference. Not because it's not a great book, just, like, it's long. OK, please. In your personal opinion, do you think that there will be more improvements in the theory of machine learning, such that quadratic and other models are more applicable the new school [INAUDIBLE] is extremely powerful, it doesn't make a lot of sense? Yeah. [INAUDIBLE] Man, I really feel like I should buy you something. So that's a great question. So there's a lot of folks that have done through the years-- as we'll talk about when you get to kernels, things like polynomial kernels and exponential kernels. And those are very powerful ways to model the world. What I was kind of hinting at before is the linear model has this sneaky out in the back, which is get to pick the features. So if up you know up front that like you want-- that the squares of the temperatures are more indicative than the raw values, you can just put that into your model, and learn more and more features. And so it's not a question of eventually we're going to get powerful and use those features. We can use them today. The crazy thing is we can reduce a lot of the things you would naturally do to this model. And you'll see eventually, someone was asking about these infinite dimensional feature models. Those are what kernels are. You can reduce those to linear in an infinite dimensional space. So it's wildly, like, important there to do this. The other bit, which is also my personal opinion on these things, one of the things that has really bitten me again and again is that simple stupid things work extraordinarily well with a-- given enough data. And in fact, the trend has been larger and larger amounts of data for the last 40 years. And every time we think it's going to run out of gas and get fancy, a bunch of fancy people academics start writing papers about clever ways to do x, y, or z, and usually they get smoked by more data, and linear stuff. And there's a great paper about this that I can post of different eras of machine learning when this happened. And like, the thing that's remarkable about modern machine learning to me is not how sophisticated it is, but how we basically do the same thing, and just pour in mountains of data. Like right now, there's a particular model that's very hot. In five years, will it be hot? I don't know, right? Maybe. Five years ago It wasn't-- I guess five years ago, it was kind of hot. But, you know, and then a new one will come. But the pattern of like we just dump in all this information, and just optimize the crap out of these parameters, that works really well. So the thing that I'm trying to get across there is-- as you said, is the future of machine learning seems to me to be more tied with understanding relatively simple applications when we have huge amounts of data underneath the covers. Yeah, but I'm a zealot there. I taught the large language model course we taught last quarter. So I'm a believer. You can think it's nonsense. It's a personal opinion, but wonderful question. Yeah, awesome. Others? All right. So again, you know, same thing. I'll do Gaussian, just to stall, to make sure this is clear, because I hope-- if I'm very optimistic, I mean, you're like, oh, this is obvious. 
There's no content here. I understand it perfectly. If we are in that universe, I will be extremely happy. If you're baffled, please ask me a question. [CHUCKLES] This is a Gaussian. How do I do prediction? Well, when I have something, I pick the mean value that I would have. That's exactly the same piece that's here. How do I do the estimation? Well, that's exactly theta Tx. That's what we were doing before, when we fit a line. Same thing. So all that's-- all I'm saying is what we've done so far in the first K lectures we've now compressed to basically one equation, one schema of this thing. We now know how to do inference, and we know how to do learning. And maybe it's tough to appreciate in the sense that you're like, well, I didn't encounter 1,000 models before. But now, all of these different models can be shoehorned into this, and that's quite powerful. And we'll use that quite a bit later when we do much more advanced stuff. All right, awesome. If there are no questions, I'm going to move on to multiclass. All right, so the last thing I want to describe here, which is important for you, and I think you have to use in your homework, is how we deal with multiclass classification. And I should say the trend has been, in machine learning, not only these big models that I was just excited and ranting about, which you can take or leave, but is that you train models on a variety of tasks more broad than even a variety of classes. You train them to do many things at once. In fact, weirdly, as I may have told you, the thing that we seem to be doing as a field right now is training a model to do something, task A, and then using it for task B. And weirdly, that makes the model more robust. So the typical task that we do there, by the way, is predicting the next word, which is a really-- seems like a really basic task. You look at a sentence, and you produce-- you view the words as individually. This is oversimplified, but how it works. Every token is a class, right? So it's the word cat, dog, whatever. You just take a vocabulary of 50,000 words let's say, and then you predict. Which one do you think is likely to be in the next space? So I read the first part of the sentence, and I predict. This turns the entire web into a training corpus, right? Because now any piece of text, I can evaluate in a multiclass way. We train it just to do that, and weird behavior emerges, like it can write pieces of code for you. It can answer questions in narrative form. And only when we train it on lots, and lots, and lots of data, and it's a little bit spooky. So anyway-- so this is what multiclass is for, and this is actually something that we use every day. Awesome, wonderful question. What is the disconnect between the power of linear models, and the need for non-linear components in a neural network? Wonderful question. So right now what we've done is we've said-- and the exchange was wonderful-- hey, what about more powerful feature representations? What neural nets basically are, and why they're so amazing is they take your data, if you like, and they pick out what those features are, what those x's should be from the underlying data. And then usually on the end, you just have a linear model. There's little tweaks and variations, but that's pretty much what you have. So the question of where does the x come from, you give me an image of a cat, how do I get good features about images? That's what the neural net is actually solving, and that's where we need non-linearity. 
But it comes in at that piece, not at the prediction piece. Great questions. All right, so here we're going to look at discrete values: $y$ takes one of some fixed $k$ classes. So we have a cat, dog, car, I don't know, whatever else-- oh, I wanted one more. Oh, bus. OK. So here, $k$ is 4. All right? So I want to predict among that set. It's kind of weird-- I promise you that you're only going to see a cat, a dog, a car, a bus. You could ask, well, what if I show it a horse? It doesn't have to predict it. It can predict whatever it wants in that situation, right? But for this case, imagine that I'm just distinguishing among those four classes. Or the crazy example I gave you, where your classes are every word in, say, the English language. All right. How do you encode this? It's encoded as what's called a one-hot vector, right? And the error distribution is the generalization of the one we just talked about to $k$ classes, OK? So $y \in \{0,1\}^k$ such that $\sum_{i=1}^k y_i = 1$. OK? So there's a vector of 0's and 1's where precisely one entry is lit up. So, for example, cat could be $(1, 0, 0, 0)$. You get the pattern-- $(0, 0, 1, 0)$ could represent car, and so on. So these one-hot vectors seem pretty wasteful, but you don't have to store all the zeroes-- it's not as bad as it looks. But this is how you intuitively think, mathematically think, about how the data looks, OK? Clear enough? So we've reduced our problem from dealing with these categorical labels to dealing with vectors. Now let's try to classify them, all right? So let's draw a quick picture. So let's imagine our data looks like this. So there are some class 1's, which I guess are cats, here. There are some buses here, which are 4's. There are some dogs here. I'm just drawing them all nice and clustered. Of course, your data never really looks like this, but that's OK. All right, so what do I want in this situation? I want lines. So each line corresponds to one class. As I said, this is the cat class, this is the dog class, and so on. So how does this multiclass thing work? That's too close. The colors don't really matter, but I have started, so now I'm going to finish. All right. Car, bus. So what do I want here? What I need to do, because we're looking at linear separators for this, is pick a line that, for example, separates the cats from everything else. So this will be $\theta_1 \cdot x = 0$. So this is the line I'm drawing here. So I want to pick $\theta_1$, right, so that on one side are the cats, and on the other side is everything else. Does that intuitively make sense? Right. For $\theta_2$, for $\theta_4$, what would I like to do? Well, I'd also like to pick something where-- here-- $\theta_4 \cdot x = 0$, so that, again, the buses are on one side, and everything else is on the other. Now if you look at this geometrically, it becomes pretty clear, by the way, that there are lots of choices. I could have picked here, I could have picked there, right? What we'll try to prefer is sometimes called max margin: we actually prefer that the line is as far away from the two sets of data points as possible-- as close to the middle as possible. And you can verify that actually makes sense, and you can verify that that's actually what you'll hope will happen. You get something like this. All right? Great.
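As a quick code sketch of the encoding described above (the class names come from the lecture's example; the helper function is hypothetical):

```python
import numpy as np

classes = ["cat", "dog", "car", "bus"]  # k = 4, as in the example

def one_hot(label, classes):
    """Encode a categorical label as a k-dimensional one-hot vector."""
    v = np.zeros(len(classes))
    v[classes.index(label)] = 1.0
    return v

print(one_hot("cat", classes))  # [1. 0. 0. 0.]
print(one_hot("car", classes))  # [0. 0. 1. 0.]
```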
So in this case, by the way, which is really nice, everything is nicely linearly separable, right? This side has the dogs on it, this side has the cars on it, and so on. So there's one line that can separate each class from the rest. Now the question is, how do you pick? The way you do it is, when you get a point, you compute its value against each one of the separators -- theta 1, theta 2, theta 3, theta 4. And these are going to give me some values, OK? And you can think of the values as, basically, how close the point is to the various lines, and on which side. It's a dot product. Please. Can you please remind me why you put it equal to 0? I'm just saying this is the line where, on this side, theta dot x is going to be positive, and on this side it's going to be negative, all over here, right? So I'm just drawing the deciding line. As you're more cat-like, you're getting a higher score from this separator, something like a larger score. As you're over here, you're getting a negative cat score. So the point is, each one of these separators -- great question -- is going to give you some score. So maybe it's a cat, so the cat score is high, like 0.1 here, and everybody else is like, yeah, I'm not really sure, maybe kind of borderline about all the other classes. This is what you would hope would happen, OK? Each model gives me a score, and then -- oh, did you have a question? Yes, I had a quick question about the center. So what happens if we're in the center? Yeah, great question. So right now, remember, you give me a horse and you put the horse in the center -- who knows what I'll get. I will be able to run this procedure, and it will get confused. One of the things that happens in neural nets, by the way, in large-scale models, is that when you're really high dimensional and you have a bunch of these lines and other things, there will be pockets that are actually nowhere close to anything. But look, it's still totally well defined. A point in the center here will get a negative score from every separator. Now you could look and say, if it gets a negative score from every single thing, maybe I should be suspicious of it. But that's not, in general, something that will happen. One of these scores, unfortunately, will be higher than the others. Like if it's here, it may be closer to cat, and so it says, oh, a cat that was near the border, or something like that, and it will pick. But let's get to the procedure first before talking about the exceptions, OK? So we have all of these scores. Then what happens? We exponentiate them, right? And then this actually leads to probabilities. Actually, let's make one of them really negative, like minus 10 -- e to the minus 10 is approximately 0. I get the exponentiated values out, and then I normalize them by summing all of them. So I sum e to the theta i dot x over all the classes, and that gives me some value, z. And I divide this number by z, divide this number by z. That's going to give me numbers between 0 and 1, because I summed them up and they're all positive. So 0.5, 0.17, and so on. The point is, I compute these exponentials, I sum them up, and that sum is my normalizing term.
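A sketch of the score-exponentiate-normalize procedure just walked through; the parameter matrix and the test point are made-up numbers, and the clean closed-form expression follows next:

```python
import numpy as np

# Hypothetical k-by-d parameter matrix: row i is theta_i, one separator
# per class (cat, dog, car, bus in the running example).
Theta = np.array([[ 2.0, -1.0],
                  [-0.5,  1.5],
                  [ 1.0,  1.0],
                  [-2.0, -0.5]])
x = np.array([1.0, 0.3])        # a made-up 2-d point

scores = Theta @ x              # theta_i dot x for every class at once
exp_scores = np.exp(scores)     # exponentiate each score
z = exp_scores.sum()            # the normalizing term
probs = exp_scores / z          # each entry in (0, 1), and they sum to 1
print(probs, probs.sum())
```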
So let me write it in a cleaner form. It's the following thing. The probability that y equals k, given x and theta, has the following form: exp of theta k dot x, over the sum over j of exp of theta j dot x, where j goes from 1 to k. OK? I just described the procedure in elaborate detail; this is exactly what's going on. So basically -- please. [INAUDIBLE] So two things. One is, this makes sure that everything is a probability. Exactly as we were talking about before, think about each of these as like a logistic regression model. In the logistic regression model, you take the natural parameter and you exponentiate it. And just as we did in the exponential family model, you need to make it a probability distribution -- that's where the z comes up. So the exp is there because we're doing these generalized linear models, and that gives us the nice functional form that we've been using under the covers in all our prediction problems in the class -- think about logistic regression, binary or not. And then we have many different scores, and the procedure is just to normalize them, OK? And this kind of makes sense, right? This is saying how strong a classification it is. Like if there's a cat way over here, maybe this is a super-cat -- the clearest picture of a cat you've ever seen. In that case it should get a really high score, and you should be really confident in it. That's the intuition. Now, is that always true? No, certainly not. A picture of a horse could show up over here. We hope it doesn't. But mechanically, is this clear what happens? Right, please. Last class, I asked this to you, and you said that ultimately we will have to do one-versus-all classification for each of these classes. So I think that's true here. Yeah, exactly. So this is basically a compact form of what we call one-versus-all, which is like, OK, how confident is my cat detector, how confident is my dog detector, how confident is my car detector. And then you just bake off their relative strengths. That's how you do multiclass classification all at once. All right, so how do you train this? Well, you're given something that looks like this -- four classes -- and you're told it's a cat. So you have probability 1 here and 0 everywhere else. This is what you're actually given; that's what the label looks like, OK? Your estimate, your p hat, will not look like that. It will look like, oh, I'm pretty confident it's a cat, but I have little pieces of each one of the others. That's at inference time. What I'm trying to get across is, when you actually look at these things, they will put small probabilities everywhere, because of this normalization. OK? Now, one thing that people do, by the way, is something called label smoothing. They take the one-hot label and push a small amount of mass everywhere else, right? So you take the 1 and spread a small, tiny amount of mass over the other classes. And that's basically to account for the fact that your labels are often wrong -- even very popular, well-studied benchmarks will have something like 3% of their labels be wrong.
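A sketch of label smoothing as just described: take the 1 in the one-hot label and spread a small amount of mass over the other classes; the epsilon value here is a made-up choice:

```python
import numpy as np

def smooth(y_one_hot, eps=0.05):
    # Keep (1 - eps) on the true class and spread eps uniformly.
    k = y_one_hot.shape[0]
    return y_one_hot * (1.0 - eps) + eps / k

y = np.array([1.0, 0.0, 0.0, 0.0])   # "it's definitely a cat"
print(smooth(y))                      # [0.9625 0.0125 0.0125 0.0125]
```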
So you can imagine why you would smooth, and why it would be bad when you're training a system to say, it's definitely not a cat, when you should say, no, there's a small possibility it's a cat, and I should admit that possibility. Those are very different statements, if you imagine the two. OK, awesome. So what happens here? Well, the great thing is, this follows exactly what we've been doing the whole time. We now introduce what is also sometimes called the cross-entropy: minus the sum over k of p of y equals k, times the log of p hat of y equals k. With a one-hot label, this equals minus log p hat of y i, where y i is the ground-truth label. OK, so in the case where we don't do smoothing, minimizing the cross-entropy is the same as minimizing the negative log of the probability our model puts on the true class. You'll see these negative log likelihood things all over machine learning packages; that's what they're doing, OK? And this equals -- just so we're super clear -- minus log of exp of theta y i dot x, over the sum of exp of theta j dot x, j going from 1 to k, OK? And this is what we minimize. And that's it. So how do you solve this model? Just run gradient descent. All right? That's it. All right, any questions about this? What part of that is the logit again? Oh, the logit is a log probability. Great question. So that's the terminology for a log probability. The reason I call it that is you'll probably encounter that term; I just wanted you to be familiar with it. And in practice you don't actually write out the probability functions -- you don't take the exp. You just take the log of those probabilities, and that's what you actually minimize. So you will use them, and they're in log scale. You'll often see machine learning code spit out these negative log likelihood numbers. That's what they are; they're logits. You'll see that term, logit, everywhere. Yes, please. Why is there a negative sign? Because otherwise it would be a maximization. Yeah, it's just to flip the sign so that we have a minimization. Wonderful question. Yeah, please. So that function that you're minimizing -- yeah -- it doesn't look like it has any dependence on the label, though. Oh, awesome question. What a wonderful question. It's hidden right here. This theta y i -- the index is the ground truth, right? That's what this statement is. This is the ground truth. Wonderful. So that's not your parameter? No, it's just that you picked out that one entry, and your loss function perfectly encodes the label. If you put in label smoothing, then in fact it's not just the one theta y i that's there; they each get a weighting associated with them as well. But here, that was the trick: this y i is the actual ground truth. Wonderful, wonderful question. [INAUDIBLE] Yeah, great question. So here it's theta dot x, and then I'm taking the exponential. What you're describing is an alternate model where you take all of the individual logistic regressions, each with its 1-over-1-plus-exp form, ask what the probability is under each one, and then compare them. And I'm saying, no, here you do it in terms of the scores, and you normalize them all off against each other at the same time. The difference between them is not super critical, but this is the one we use. It's great to point out that they're different.
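A self-contained sketch of the loss written out above: the negative log of the softmax probability assigned to the ground-truth class; Theta, x, and the label index are made up:

```python
import numpy as np

Theta = np.array([[2.0, -1.0], [-0.5, 1.5], [1.0, 1.0], [-2.0, -0.5]])
x = np.array([1.0, 0.3])
true_idx = 0                       # ground truth: class 1 ("cat")

scores = Theta @ x
# log z = log sum_j e^{s_j}, computed stably by pulling out the max.
log_z = scores.max() + np.log(np.exp(scores - scores.max()).sum())
nll = -(scores[true_idx] - log_z)  # the quantity gradient descent minimizes
print(nll)
```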
So I did, kind of by sleight of hand here, say, oh, this is like you're doing logistic regression. Think about logistic regression as having a yes class and a no class, where the weight on the yes class is exp of theta transpose x. Each term here is saying that: it's the weight of every one of those possible worlds, and then I'm summing over all of them. And that's why this is consistent with logistic regression. Imagine a null class added here -- it's either the class of interest or not -- and its exponentiated score I'm just going to default to 1, because I don't care; it could be any constant. Then that gets you exactly back to logistic regression. Does that make the connection clear? Yeah. So they really are the same; I just derived them in a slightly different way. Wonderful question. Is this [INAUDIBLE] something we've seen before, a modification of that? Or is this a new one [INAUDIBLE]? Yeah, so this is a new function for us, potentially. Though we have seen it, because of the discussion we just had, in another guise: the binary cross-entropy is the logistic loss. And so this is a generalization of that. And if you've ever seen entropy or cross-entropy -- it doesn't matter if you've seen it or not -- this is it. It's just a functional form that we care about; it's kind of a distance between probability distributions. Yeah. The reason I say this is not because I think there's something mystical here. If you haven't seen entropy before, I guess this would be mysterious -- and potentially even if you have seen entropy before, because entropy is a mysterious thing. But this is the loss function that you use, and the reason you use it is because it generalizes in the way we just talked about. I don't think you need to know any of that background. It's fine.
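A quick numeric check of the connection just made: a two-class softmax in which the null class's score is pinned at 0 (so its exponentiated score is 1) is exactly the logistic model; theta and x are made-up values:

```python
import numpy as np

theta = np.array([0.7, -0.2])
x = np.array([1.0, 2.0])
s = theta @ x

softmax_yes = np.exp(s) / (np.exp(s) + 1.0)  # null class contributes e^0 = 1
sigmoid = 1.0 / (1.0 + np.exp(-s))           # logistic regression
print(softmax_yes, sigmoid)                  # the two numbers agree
```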
Please. Just to clarify, in the plot with the clusters -- the x in that is x of y, right? There's no y in this diagram, but yeah. So what does the x equal? Here, x is a two-dimensional vector. So this is a little bit of a weird plot, right? A cat is a two-dimensional vector x, because I only have two dimensions to draw on, and so I put the points there. That's why this picture looks a little different than if you're thinking about it as a function, or a curve, or something. So you'd have features with higher dimensions, but -- exactly right. In higher dimensions, rather than lines, which are one-dimensional here, you'd expect (d minus 1)-dimensional hyperplanes separating everything out. Points would live on one side or the other, and you would care about the distance, effectively. So I'm drawing the contour representation of the function, right? Yeah. Awesome question. And the y's are encoded, as we were just discussing, by which index I use. For image recognition, do you normally work in [INAUDIBLE] space, or some different representation like PCA? Oh, awesome question, yeah. So we'll talk about PCA when I come back to join you, and why we use it. PCA is a method. The two things PCA fixes for you are x's that have different scales or different meanings. Like if all your temperatures are between 80 and 82 degrees but they're really significant, PCA is a way of centering and whitening your data -- subtracting off the mean, standardizing, and normalizing it. And so that's a technique that is very, very commonly used in statistical analysis. We'll talk about that, how to properly derive it, and what its justifications are. When you're doing things like image analysis, actually, the methods have moved toward raw features and raw pixels over the last couple of years. The thing that we're all excited about is trying to have no hand-coding in the pipeline. You can talk philosophically about why we're obsessed with this, but basically, the newest models just take the image raw, and they try to have as minimal what's called inductive bias -- I won't define that term -- as minimal information about the domain baked into the model as possible. One weird fact that got me very excited a year or two ago was that we have one model architecture for text, and we're using that same architecture and getting nearly state-of-the-art accuracy on text, and images, and audio, in a bunch of different places. That is the thing that's really interesting, and it starts from the raw pixels. It learns the edge detectors and all the stuff we used to do by hand. So that's been the trend. Here we'll still talk a little bit about feature prep, because that stuff doesn't always work, and when it breaks, you want to have a library to fall back on. Wonderful question. But that will be in, I guess, week four or five. Awesome. Please. What if you [INAUDIBLE]? Oh my God, what a great question. Yeah, what if you can't draw this pretty picture -- it doesn't make sense? That's possible. So what if your features are bad, right? What if it turns out that your feature for cats was, they sleep on couches? And you're like, well, a dog sometimes sleeps on couches. Then you couldn't possibly separate the dogs and the cats. It's a silly example, but your features could be so weak that they're not able to actually separate your classes. You could imagine putting in a lot more features, and that's why these models with bigger and bigger dimensions come in -- to separate all the different classes automatically. But your real question is: OK, there's a fix, but I have my features, and I train -- what happens? You just make misclassifications, and that's actually the default. You have a small number of features, you fit those features, and you do the best you can. So if a cat jumped over here -- if there were a cat that was here -- you'd just misclassify it and get it wrong, and you'd be toast. So that does happen quite a bit, sometimes due to label error. And there's a subfield of machine learning that is kind of obsessed with this; my students write papers in it. There's a great benchmark called WILDS from Percy, and Chelsea, and a couple of other folks on the faculty, [INAUDIBLE] where machine learning picks out the wrong features systematically. Like you take a bird that is normally on water and photograph it against a land background, and the machine learning model says, oh, that's a land bird, not a water bird. There's a lot of that going on. So that absolutely 100% happens. Wonderful question. Awesome. Any other questions? All right. So just to recap, what did we do today? We went through this exponential family of models, and now we've hopefully tied a bow on the fact that there was a method to our madness about doing binary and real-valued prediction -- we went from real-valued to binary, because we had seen fitting lines.
We did classification, and we tied it all up in these exponential family models. We talked about why inference and learning were basically the same in these models, and that let us generalize to a whole host of them. This is the workhorse of supervised machine learning. But the questions you're asking now are exactly the right questions. Where do these features come from? What if the features don't fit the data? How do I get more expressive things? You're going to see kernels, and SVMs, and neural nets in the next couple of lectures. And that will tell you how to pick your features and how to get to these more expressive models. That will form the bulk of supervised machine learning. Then in the next section, we'll come to unsupervised learning, where we'll forget the y's and figure out what structure we can find. That's when we'll get into questions like PCA. We'll get into questions about what are called EM models, where you have a supervised predictor and an unsupervised piece. And this allows you to do some pretty wild stuff, like fit data from quasars. And we'll walk through all that. And in fact, we'll also have a lecture on self-supervision, which is new just this time -- I guess my student [AUDIO OUT] gave it last year, but [INAUDIBLE] is going to do it this year. Thanks so much for your time and attention. Have a wonderful rest of your week. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Neural_Networks_1_I_2022_I_Lecture_8.txt | Hello, everybody. Hi. My name is Masha. Some of you may have met me already in office hours and seen me post on Ed and things like that. I'm really excited to be giving the lecture today. It's going to be in a slightly different format than Tengyu's or Chris's, so feel free to give me feedback on that afterwards on Ed, or by email, whatever you like. The topic today is actually kind of fun. We're going to start our foray a little bit into deep learning, so neural networks. I'm assuming everyone here has heard of neural networks before. Anyone who hasn't? Yeah, that's what I thought, pretty much. So it'll be fun to see how the ideas behind what's happening at the state of the art actually come back to all the things we've been talking about so far -- ideas in linear regression, ideas in logistic regression. All of this connects to how neural networks work. So before I get into the mathy, notation-y material, I wanted to start with some motivation. Who here has heard of GPT-3? Yeah, a lot of you. It made big waves a couple of years ago. It's a huge, huge model, but it was able to do really impressive things. Like in this case here, it came up with a poem on its own, right? And the hope is that these deep learning models learn to be so expressive that they can do creative things, like creative writing. More recently -- this was very, very recent -- has anyone heard of DALL-E 2? A few of you, yeah. So DALL-E 2 is able to generate images. The prompt here is an astronaut riding a horse as a pencil drawing, and you can see the main one that I picked out and then a bunch of different versions of what else it can come up with. So deep learning is very, very powerful, for better or worse. But it also has so much potential to do amazing things, right? Obviously, this is cool, but there are also applications in medicine, applications in education, applications in things like autonomous driving, which could hopefully make our roads safer. So there's lots of potential here, and that's the backdrop to where we're going to start. We're going to talk about the origin story of all these things. We're going to start with supervised learning with non-linear models. So far, most of what we've talked about has been linear models, and now we're getting into non-linear territory; I'll talk a little bit about what that means. Then we'll get started with neural networks. We'll figure out what they are and how we can define them. And next lecture, we'll talk about how you can optimize them; I believe Tengyu will be giving that lecture. Any questions before I get started? Cool. All right, so first, let's think about linear regression. We've seen this a bunch. We start with a data set we might have: some x i's and y i's, so some inputs and outputs, and say our data set is of size n. And we know that for linear regression, we make a prediction according to a linear function of our inputs x. Can anyone help me out with what our cost function might look like, in terms of y's and h's, for linear regression? [INAUDIBLE] OK, h minus y. [INAUDIBLE] OK. [INAUDIBLE] Squared, I heard somewhere. The sum over i. Sum over i, yeah.
This looks good to me. Does this look good to everyone -- familiar? Hopefully we've seen this all before. Awesome. OK, so we have a prediction here, and we have our label. And we can also write this directly in terms of our parameters; our parameters here are theta and b. Cool. I just plugged it in. Makes sense so far, right? And what we can do is run things like gradient descent or stochastic gradient descent to optimize this. Cool? So last lecture, you talked about a slightly different set of models: kernel models. With kernel models we still have a similar setup. We have our x i's, our y i's. And then what does our h theta of x look like? What does our prediction look like with kernel models? Does anyone remember from last class? Hint: it's very similar to what we had before. [INAUDIBLE] Yeah. And what's phi of x? [INAUDIBLE] Sorry? The feature map. The feature map, exactly. So what's interesting about this setup is we're still linear in the parameters, right? But we're non-linear in the inputs, because phi of x can be any non-linear function that you discussed last time. So what if we want to be non-linear in both the parameters and the inputs? Generally speaking, what if our h theta of x is anything non-linear -- say theta 1 cubed times x 2, plus maybe a square root somewhere in there, over the whole thing. This type of model could be a lot more expressive, potentially. But we also want to think about how to make it computationally tractable, how to make it useful. OK, so we have some non-linear model. And by the way, all these notes will be up online afterwards, so if you don't finish writing something, please don't worry about it. And if you want to follow along -- I should have mentioned this earlier -- there is a template up as well: the blank version of what I'm writing on. Cool. So let's go back to our non-linear model. We can assume that our x i's are in R d, so a vector of features, or inputs, and our y i is a scalar, so just in R. This is pretty standard; we've been looking at this type of formulation a whole bunch. And our h theta is a mapping from R d, our inputs, to R, the dimensionality of our outputs. The cost function we're going to think about for our now non-linear model will look very familiar. For one example i, J i of theta is the square of the difference between the label and the prediction: y i minus h theta of x i, all squared. So that's the cost for one example. If we want the cost for the whole data set, we average: J of theta equals 1 over n -- n is the size of our data set -- times the sum from i equals 1 to n of J i of theta. So this is for the entire data set. OK. This constant is a little different from before; this is a common convention in deep learning. This is called the mean squared error, so it's usually the average. The constant really doesn't matter: your optimizer is going to land in the same place regardless of the constant out front of that sum. Does that make sense? Cool. All right, so -- oh yeah, question? If the constant doesn't matter, why is it there? So n is the size of your data set, and this is the mean squared error: you're averaging over all the squared errors in your data set. The reason the constant doesn't matter is that when you take the gradient, the theta that gives you the minimum is the same regardless of whether the constant is 1 over n or 1 over 2. You asked why we use it at all -- sometimes it's helpful for scaling, but other than that, there's no real magic behind it. It's just a convenient thing to do; it makes sense to average over your errors. Thank you. No problem.
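A minimal sketch of the mean-squared-error cost just defined, with a linear predictor standing in for h theta for concreteness; the data values are made up:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0]])   # n = 3 examples, d = 1 feature
y = np.array([2.0, 4.1, 5.9])

def h(theta, b, x):
    return x @ theta + b              # stand-in for any h_theta

def J(theta, b):
    # J(theta) = (1/n) * sum_i (y_i - h_theta(x_i))^2
    return np.mean((y - h(theta, b, X)) ** 2)

print(J(np.array([2.0]), 0.0))        # small, since theta = 2 fits well
```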
Cool. So once we have this cost function, we want to minimize it, right? One way to do that is gradient descent. This notation here -- we're assigning the right side to the left side, kind of like coding notation. Can anyone tell me why what's written here is gradient descent and not stochastic gradient descent? What makes it gradient descent? Yeah. Reasons over the whole data set. Yeah, exactly. You're considering the whole data set here. And the reason is, if we write out what's actually going on, the gradient of J of theta is 1 over n times the sum from i equals 1 to n of the gradient of J i of theta. So we're reasoning over the whole data set: each update here considers the entire data set that we have. Cool? All right, so here's stochastic gradient descent. The idea here is we now consider only one single example every time we update theta. We have some hyperparameter alpha -- same as for gradient descent, some number greater than 0 -- and we initialize our parameters theta randomly at the start. Then we go through some number of iterations: we sample some example from our data set, and we do this with replacement, and we continue until either we converge to something we're happy with or we reach our maximum number of iterations. So this version is with replacement. I'm going to briefly sketch what SGD usually looks like more commonly in deep learning settings, just so you have an idea. So that first variant, the one in algorithm one, is going to be in your notes. This other one is not, but I think it's helpful to know some of the terminology when you go into the fray of deep learning as well. OK, so we go through a for loop, indexed at k, from 1 to n epoch. An epoch is basically a term meaning you've gone through your entire data set. So in deep learning, if you read the papers, you'll often see: oh, we trained for blah number of epochs. That's what it means: how many times the optimization has gone through the data set. So for k equals 1 to n epoch, we shuffle the data. And then for j equals 1 to n iter -- we might not have enough time or desire to go through the entire data set; maybe we decide we want to go through 500 examples out of the data set and call that an epoch, and that's also fine -- we do the same type of update. And here we have no replacement in this inner for loop with the j index, because we don't want to look at the same example twice within an epoch. Does this make sense? Yep. Sorry, real quick, [INAUDIBLE]. Yeah, pretty much -- slightly different terminology. Cool? Yeah. [INAUDIBLE] So, by n epoch, do you mean n times one epoch, or one pass in itself? If the data set has n samples, are you going from 1 to n, or are you going through the whole thing n epoch times? The latter. The question is, what does an epoch mean? And basically, an epoch is one pass through the entire data set; n epoch is how many times you go through it. Any other questions here? Cool.
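A sketch of the epoch-style SGD loop just described: shuffle once per epoch, then update on one example at a time without replacement; the data, step size, and epoch count are made-up choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 2 * X[:, 0] + 1                          # true model: y = 2x + 1
theta, b, alpha, n_epoch = np.zeros(1), 0.0, 0.02, 200

for _ in range(n_epoch):
    order = rng.permutation(len(y))          # shuffle the data each epoch
    for j in order:                          # each example used once per epoch
        r = X[j] @ theta + b - y[j]          # residual on a single example
        theta -= alpha * 2 * r * X[j]
        b -= alpha * 2 * r
print(theta, b)                              # roughly theta ~ 2, b ~ 1
```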
I'll talk about the last version of gradient descent for today, and this is mini-batch gradient descent. With mini-batch gradient descent, the main difference is you're considering b -- a batch -- of gradients at a time. The reason we want to do this is that with things like GPUs and parallelism, this actually speeds up computation: we can compute b gradients at the same time, simultaneously, as opposed to sequentially, and that can be quite a bit faster. So let me write out some of that. We're computing b gradients, and they look like grad J 1 of theta all the way to grad J b of theta, computed simultaneously. So one question you might be thinking about: how do you choose b? Very often, you choose b empirically. You test things out, you look at your validation set, things like that -- and you'll talk about evaluation a bit later as well. One way to choose b is to choose the maximum b that your GPU memory can handle, and that's a good way to speed up what you're doing. But the trade-off -- and this might come up in your homework as well -- is that usually the lower the b, the better the performance of the algorithm. I'm not going to talk too much about why this is the case; this is also active research. But that gives you some idea of how folks go about choosing these numbers. Any questions about mini-batch gradient descent? Yeah? My guess is it's exactly the same as normal gradient descent, except that you're doing this entire thing over a batch as opposed to the whole data set, right? Yes, yes, precisely. Any other questions? Yeah. I have a question about [INAUDIBLE]. Uh-huh. In this slide. Yeah. What does that exactly [INAUDIBLE]. Yeah, so with replacement means, say we picked example two -- we can pick example two again in the future. Whereas without replacement, in the other case, it means if we picked example two within that j for loop, we will not pick example two again until we get to the next epoch. So example one is going to have [INAUDIBLE]. You're just randomly choosing one example. It can happen to be the same example, or it cannot. Does that make sense? [INAUDIBLE] the entire data set? Or do you just randomly -- In the middle one, you randomly pick examples until you reach n iter, whereas in the left one, you try to go through your entire data set and then do that again. Any other questions on anything gradient-descent related? Yeah. So in the mini-batch case, there is no variant with replacement? Within a mini-batch, no, because -- say your batch size is b -- you don't want multiple examples in that batch to be the same. You want all the examples in it to be different. But -- mm-hmm. Between batches, do we put these examples back? Between batches you can. Sometimes you don't, sometimes you do; it's sort of a design choice. But here, when it says without replacement, it means that within one batch you don't have doubles. Thank you. Yeah, no worries. Other questions? OK.
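A sketch of the mini-batch update: a batch of gradients is computed together in one vectorized call, which is what GPUs parallelize well; the data, step size, and batch size are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta
theta, alpha, b_size = np.zeros(3), 0.1, 16

for _ in range(300):
    idx = rng.choice(len(y), size=b_size, replace=False)  # no doubles in a batch
    Xb, yb = X[idx], y[idx]
    grad = (2 / b_size) * Xb.T @ (Xb @ theta - yb)        # b gradients at once
    theta -= alpha * grad
print(theta)   # close to [1, -2, 0.5]
```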
All right. So next we're actually going to define some neural networks. We talked a little bit about how we optimize these things, but we didn't really get into, for example, how we define the neural network yet, or how we compute the gradients. When we talk about multiple gradients, how do we actually compute these things? And we need to do this in a way that's computationally efficient. Progress in machine learning really took off in the last 10, 20 years because of advancements in hardware -- we're able to parallelize on GPUs and make these things so much faster -- and also, of course, algorithmic developments. Cool. So we're going to talk about neural networks today and how we define them. Back propagation, which is how we compute these gradients, will be covered next lecture. This example came up at the very start, maybe the first lecture or something like that: we're looking to predict housing prices. We looked at how we can do this with linear regression, with linear models. Say with our linear model, given the size of a house, we have some data -- sizes and prices -- and we can plot them on this plot, and our model maybe looks something like this with linear regression. What's a problem with this? Why might this not work so well? Any thoughts? Yeah. Something non-linear predicts it better. Yep, that's a good first one. So the price might have a non-linear relationship with the input, right? And we only have a linear model, so maybe the model doesn't capture that relationship so well. What's another reason this is not great? [INAUDIBLE] Sorry, repeat that. Prices can't be negative. Yes, exactly. The second issue is that prices can't be negative, and when we have a linear model like we do, they can -- nothing's preventing them from being negative. Can anyone think of the simplest thing you can do to fix this problem? Yeah. You can fix the intercept to be 0. Sorry, say that louder. You can fix the intercept to be 0. Yes, you could fix the intercept to be 0. So what does that mean for negative numbers? Fixing the intercept to be 0 would be this, right? Yeah? Then whatever house had a negative price would have negative size, which also doesn't make sense. Yeah. So one thing we can do -- let me write out these issues first, but I will show you a solution in just a second. Does anyone know a function that can fix this for us? [INAUDIBLE] Sorry, say that louder. [INAUDIBLE] OK, any other ones? ReLUs. Yeah, ReLUs. Has anyone heard of ReLUs? One person, a few. OK, so we're going to talk about ReLUs in just a second. So the issues we talked about: the prediction might have a non-linear relationship with the input, and we can have negative prices. OK, so what we want this function to do is something like this: everything on the negative side gets mapped to 0. That's what our ReLU is going to be. And what that looks like in math terms: we take our regular prediction, but we output the maximum between what linear regression would output as a prediction and 0. And our parameters here are going to be w and b. So this is ReLU; this is how it's usually written, and this is the notation we will use -- ReLU as a function of t. And then we can just write our prediction as this ReLU function applied to wx plus b. And does anyone know what ReLU is called, what category it belongs to, in deep learning terms? Yes. What's an [INAUDIBLE]? Yes. So what category of functions is it in deep learning? Yeah. Activation function. Exactly -- an activation function.
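A sketch of the ReLU just defined, max(t, 0), showing how it clips would-be negative price predictions to zero; w and b here are made-up parameters:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

w, b = 0.5, -1.0
sizes = np.array([0.0, 1.0, 4.0])
print(relu(w * sizes + b))   # [0. 0. 1.] -- never negative
```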
So our ReLU is an activation function. Can anyone tell me -- is ReLU linear? [INAUDIBLE] OK, raise hands if you think it's linear. OK. Non-linear? Yeah, yeah. It is definitely non-linear; the maximum creates our nonlinearity there. OK, so this is our nonlinearity in deep learning, or often is. It's also sometimes called one neuron. OK. So going back to our housing price prediction setup -- yeah, question? I have a question about activation functions. Are they all supposed to be non-linear? Yes, activation functions are by definition non-linear. We'll talk at the end a little bit about what happens if they are linear. Other questions? Cool. All right, so let's set up our high-dimensional input example. So far, especially in this plot, we have been looking at one input, one output, so scalar to scalar. What if we have more features, a high-dimensional input? This is the case where x is in R d and y is still a scalar -- we're still predicting housing prices, for example. So our new expression for our prediction is ReLU of w transpose x plus b. Our x is just going to be stacked features, or inputs: x 1 all the way to x d, and this is in R d. And our weight vector w is going to be in what dimension? Can anyone tell me, based off of how I've written it? [INAUDIBLE] d, that's right, because we're taking a dot product with x. And then our b is called our bias, and it is going to be scalar. What we want to do in deep learning is stack these neurons: the output of one activation function is going to be the input to the next one. And this is what creates the expressivity of deep learning models. Basically, they have a bunch of nonlinearities that are stacked one on top of the other, and this becomes a super flexible framework that can represent a lot of different domains. So now let's make the housing price prediction problem a little bit more concrete. Let's say our x is going to be in R 4. Besides size, or square footage, which is x 1, we're also going to consider things like the number of bedrooms in the house, x 2. What's another thing? Maybe we can consider the zip code that the house is in, x 3 -- maybe it's close to a subway or something, right? And the last input, x 4, is maybe something like the wealth of the neighborhood. So these are our features, or inputs. And what we might want to do is compute some intermediate variables -- some quantities that combine some of these ideas and help us make a prediction for what the price of the house might be. So one example is maybe the maximum family size that a particular house can accommodate. What would the maximum family size potentially depend on, out of the four inputs that we have? Size and number of bedrooms. Size and number of bedrooms, yeah, I would agree with that. We can also think of other variables, like maybe how walkable the neighborhood is, and that might depend on the zip code, for example. And the last one is maybe the school quality in the neighborhood that this house is in; this will be our a 3. So a 1 through a 3 are some intermediate variables that we think might be helpful for making predictions about housing prices. So how could we write these out in math notation? Well, let's use the ReLUs we just learned about, applied to a linear combination of the features that we think make sense in this context.
So maximum family size -- maybe, as someone said, we take a combination of the size, which is x 1, plus the number of bedrooms, which is x 2, and a bias term. So a 1 is ReLU of theta 1 x 1 plus theta 2 x 2 plus theta 3. And then we can do the same thing for the rest of these intermediate variables. Walkability depends on x 3, which is the zip code: a 2 is ReLU of theta 4 x 3 plus theta 5, where theta 5 is another bias term. And then finally a 3 -- sorry, a 3 is school quality -- which depends maybe on the zip code, x 3, and on the wealth of the neighborhood, x 4: a 3 is ReLU of theta 6 x 3 plus theta 7 x 4 plus theta 8, with a bias there as well. So these are the intermediate variables that we think might be helpful here, OK? And sorry, let me just make sure this notation matches the notes so it's not confusing later: these are all theta parameters -- theta 1, theta 2, theta 3, theta 4, theta 5, and so on. So this is actually one layer that we defined here. OK? And finally, once we have these intermediate variables, we're going to construct the output. Our output is our h theta of x, which, if we follow this construction, is ReLU of a linear combination of these intermediate variables a: theta 9 a 1 plus theta 10 a 2 plus theta 11 a 3 plus theta 12. OK, this is going to be our end goal, our end prediction here. One thing that usually happens in deep learning is that for the output we actually don't use a ReLU. It's sort of a convention -- nothing is necessarily stopping you from doing that; it's just that, by convention, we usually just have a linear layer at the end. Cool. So now if we look at this diagram on your right, we have all the things I talked about: the size, the number of bedrooms, the zip code, the wealth, going into these intermediate variables a 1, a 2, a 3. OK? And the weights taking us from the first set of inputs x to the a's -- for example, here we have theta 1, here we have theta 2, and so on. Does this make sense? And the structure that we -- yeah, sorry, a question? I had a question about the dimensionality of the weights. Right now they're all scalar. Everything we're talking about right now is scalar. Whenever we have the subscript, it's usually scalar. OK. Yep? Here you're only using the intermediate variables toward the end. Are you just using those intermediates [INAUDIBLE]? Yes. It's possible that some of those original variables might still be useful. Yes. So is there any way you can transport them over toward the end or something? Yeah -- if you look at a 2, for example: say x 3 is positive, and you set theta 4 to 1 and theta 5 to 0; then you transfer over all the information from x 3. But this structure -- we came up with it based off our own knowledge, right? It was not determined by some algorithm. We just came up with it because it seemed to make sense. So this is called prior knowledge, and we're infusing our model with it, basically.
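A sketch of the hand-designed layer just written out; all the theta values are made-up numbers (in practice they would be learned), and the final ReLU follows the board even though, as noted, the convention is a linear output:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

x = np.array([1500.0, 3.0, 2.0, 4.0])  # size, bedrooms, zip code, wealth
th = np.full(12, 0.01)                 # theta_1 ... theta_12, hypothetical

a1 = relu(th[0] * x[0] + th[1] * x[1] + th[2])         # max family size
a2 = relu(th[3] * x[2] + th[4])                        # walkability
a3 = relu(th[5] * x[2] + th[6] * x[3] + th[7])         # school quality
price = relu(th[8] * a1 + th[9] * a2 + th[10] * a3 + th[11])
print(price)
```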
But what if we want to be a little more general? What if we don't want, or don't have, the prior knowledge to design this in a way that results in good performance? This is getting into something called a fully-connected neural network. What this means is we no longer think about these ideas of family size, walkability, school quality -- we don't know what those intermediate variables might be. But maybe they depend on all of the inputs. So each intermediate variable will depend on every single input. This is going to get messy, but I'll try to use different colors: every single intermediate variable here will depend on all the inputs. And this is a much more general way of thinking about this, right? We don't have to infuse this prior knowledge into the system. We can let the neural network figure it out. So what this looks like mathematically is: a 1 would be equal to ReLU of some weight times x 1, plus some weight times x 2, plus some weight times x 3, plus another weight times x 4, plus a bias term. And we can do this for a 2, and so on and so forth. Does that make sense? Cool. So if we start using vector notation more, this might look like this. We have a 1 equals ReLU of w 1 -- and this superscript notation in square brackets is going to refer to layers, so this would be the first layer, and this would be the second layer in this network -- so a 1 equals ReLU of w 1, layer 1, transpose x, plus b 1. Now w is a vector, so we're doing a dot product, and we add a bias term. Can anyone tell me what dimensionality w 1 in the first layer is? Yeah. Four. Yes, that's right. And that's because our x has dimensionality 4, and our bias is still scalar. And then we can do the same thing for a 2, still first layer, and the last one the same. And then finally, we have our prediction, and the prediction is going to use weights from the second layer, operating on those intermediate variables a: we're going to have w 2 transpose a plus b 2. And now w 2 is going to be of what dimension? Yeah, I heard it somewhere -- yes, three, because a is of dimension 3 here. And the bias is still scalar. So this is a two-layer neural network, and this is the same thing as saying we have one hidden layer. So intermediate variables are referred to in deep learning as hidden units, and the associated layers are hidden layers. Any questions on that? Yeah. Yes. [INAUDIBLE] Great question -- so how many layers should you use? You're going to be limited by compute, but aside from that, it's a lot of experimentation. For example, GPT-3 has a lot of layers, but it's also dealing with a lot of data. If you have just a little bit of data and just a few examples, you probably don't want to use a very big model. And I think you'll talk a little bit about why shortly. Any other questions? Yeah. What's the difference between all the different hidden units if they're all relying on the same input? So the hope is that the network learns different representations. Technically, nothing is really stopping it from doing exactly what you said -- replicating ideas -- but it often doesn't. It's trying to learn these a's in the way that best helps the neural network make a prediction h theta of x, right? So this ends up working actually quite well. Yeah. In the prior example, the housing example, you had some [INAUDIBLE]. Yeah, so that's actually an active research area as well. It's called interpretability in deep learning.
And it's either figuring out whether there is any interpretable structure, or figuring out how we can induce there to be some level of prior knowledge, some interpretability. I think there's a question at the back. Yeah. So to double check, w is the weight you're going to be applying [INAUDIBLE]. That you're going to be applying to what? The x. Yes, so the w's with the square bracket 1 at the top are the weights you apply to the x's. And then the w's with the square bracket 2 at the top are the weights you apply to the a's. OK, thanks. Yep. All right, OK. So this is some notation. Yes, question? I have a question which is more general. I kind of get the sense that if we get rid of the ReLUs, everything is just linear. So what's the point of the neural network versus just a regular linear regression? Yeah, that's a great question. You're completely right: if you get rid of the ReLUs, things are just going to be linear. The ReLUs are what make the neural network more expressive. I'm going to move on in the interest of time. So we have a two-layer neural network, and we just talked about all these things. The only difference here is that we're changing notation a little bit. We're introducing the z, which is just the linear combination of x's with our weights and our biases, and then we still have our a's. The number of a's here is m -- this is the number of hidden units that we're considering. We had three in the last example; more generally we'll have m. Otherwise, this is exactly what we just wrote out before. Does this make sense to everyone? Any questions on this? OK, so we're going to talk about vectorization. We're going to make this even more vectorized. We had a lot of notation and a lot of indices here, and we want to get rid of as much of that as possible. One, to make things cleaner and not have to write all these indices everywhere. And two, which is the more important reason, vectorization actually makes things faster -- better able to be parallelized on, for example, GPUs. So what we can do first is take those weights in our first layer, those w's with the square bracket 1, transpose them so they are rows, and stack them in a matrix. And what we're looking at here is m by d: m is our hidden-unit dimension and d is our input dimension. So these are all the weights in our first layer. And then we can write out those equations from before in this vectorized form. For example, if we take the first row of this, z 1 will be w 1, layer 1, transpose x, plus b 1, layer 1, which is that first equation we had before. But this looks way, way nicer, and this is our vectorized notation. Any questions on this? So it's the exact same thing, just more compactly written. Now, these z's are actually called pre-activations -- this is before we apply our ReLUs. So we have z equals capital W, with the square bracket 1 at the top -- all our weights from our first layer -- times x, plus our biases, which are also stacked together. And all of this is going to be of dimension m, the same dimension as our hidden units. Now, we want to get the a's out of this, and to do so we need to apply the ReLU to every element in the z vector. So our a's are a 1 through a m, and we want to apply ReLU to each one of these z's.
We're actually going to abuse notation here a little bit, and we're just going to write this, by definition, to be ReLU of z. So it's an element-wise operation, and we're going to refer to it the same way as we would for a scalar. OK? And then we can write our second-layer weights. In the second layer, we only have a scalar output that we want, so we only have one weight vector to include here, which we transpose to be a row; this is going to be of dimension 1 by m. Any questions on that part? OK. And we're still going to have our bias in the second layer, which is still scalar. So our final output here is going to be this W 2 matrix times a -- a dot product with a -- plus the bias term. And as I said before, vectorization helps us parallelize things on GPUs. OK. So we talked about two-layer neural networks. What if we have more layers? The notation just follows, right? Before, we stopped with one hidden layer, which is a two-layer neural network; now we're going to have r minus 1 hidden layers, which is an r-layer neural network. And as I mentioned before, all the hidden layers will have ReLUs, whereas the prediction, just by convention, will not. We'll refer to these big W's as weight matrices, and these b terms will just be called biases. And these a's will be hidden units. One thing to note is the dimensionality of these a's: the dimensionality of a at layer k we're going to refer to as m k. So can anyone tell me what the dimensionality of W at layer 1 is? [INAUDIBLE] Sorry, say that louder. d cross k? What do folks think? I think it's the other one -- k cross d? Is that looking -- I think it's m 1. Yes, there we go: it's m 1 cross d. d is our input dimension. We want this matrix to do a matrix multiplication with x, and x is of dimension d, so we want that last dimension to be d. The first dimension is the dimension we want as the output: the output will be a 1, and a 1 is of dimension m 1. Does that make sense? So just for practice, what would W 2 be? I have a question. Yeah. Would this be an r-layer network or an r-minus-1-layer network? This would be an r-layer network with r minus 1 hidden layers. So what is the dimensionality of W 2? Can anyone help me? Yeah. m 2 cross m 1. Yes, m 2 cross m 1, because the input will be a 1 -- that's what's being multiplied with W 2 -- and the output will be a 2, so we want m 2 as the output dimension. OK, cool. And more generally, we can write this for W k: we're going to have m k cross m k minus 1. And b k is just going to be of dimension m k. Any questions here? No? OK, awesome. So we got this question earlier: why do we need an activation function, a ReLU? Can anyone remind me why we need it? [INAUDIBLE] Yeah, so it's our nonlinearity. It's what makes what we're doing here non-linear.
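A sketch of the vectorized two-layer forward pass described above, with the shapes from the board: W1 is m by d, b1 has length m, W2 is 1 by m; all the numeric values are random placeholders:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

d, m = 4, 3
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
W2, b2 = rng.normal(size=(1, m)), rng.normal()

x = rng.normal(size=d)
z = W1 @ x + b1      # pre-activations, dimension m
a = relu(z)          # hidden units: element-wise ReLU
h = W2 @ a + b2      # linear output layer (no ReLU, by convention)
print(z.shape, a.shape, h.shape)   # (3,) (3,) (1,)
```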
But for fun, let's see what happens if we don't have the ReLU at all, or we have it be just the identity. So we have our hidden layer a 1, and say we only have one hidden layer. So our output here is going to be w 2 transpose a 1 plus b 2. And if we actually substitute in for a 1 -- which is now just W 1 x plus b 1 -- what are we going to get? We're going to have w 2 times the quantity W 1 x plus b 1, plus b 2. And if we expand it out -- watch my math so I don't mess this up -- we get W 2 W 1 x plus W 2 b 1 plus b 2. So what does this really look like? Well, say the first factor is w tilde, and say this second part is b tilde, right? The second term doesn't depend on x at all, and the first term does. So this is just a linear function of x, as we would have in linear regression. Essentially everything would collapse, and we're just linear here. Any questions about this? So if we lose the ReLU, we lose the nonlinear expressivity of the neural network. Yeah. [INAUDIBLE] Yeah, you can definitely have other activation functions. In the notes I think we mention the sigmoid activation function and the tanh activation function. I would probably say ReLU is the most common, but it really depends on your application, the kinds of outputs you might want, things like that. There are definitely other ones. And besides the kinds of outputs you might want, these activation functions have certain properties when you try to compute gradients with them, and things like that. That's a little bit beyond the scope of this lecture, but if you have questions about this later, come by and I'll be happy to chat. Any other questions on this? Yeah. On that note -- ReLU is perhaps the closest to what we could call a linear activation function. Why is it so commonly used, despite it being quite close to linear? It works well. Yeah. [INAUDIBLE] in the parameters when it's not [INAUDIBLE]. Why is it not linear [INAUDIBLE]? Oh, yeah, good point. Yes, you're right. I meant it in the sense that the parameters enter linearly here. Yeah. Other questions? One last new-ish idea I want to talk about, and that is the connection between neural networks and kernel methods. We talked about kernel methods very briefly at the beginning of this lecture. The kernel method prediction looks like: h theta of x equals beta transpose phi of x. So we're linear in the parameters here, but not in x. Yeah? Everyone agrees? So looking at that penultimate layer, a r minus 1 -- what if we write it as phi beta of x, where our beta is going to be all our parameters up to the penultimate layer: W 1 all the way to W r minus 1, and also the corresponding biases. Then we can write the prediction from our neural network as W r -- our last layer -- matrix-multiplied with phi beta of x, plus the bias term, if we fix our parameters beta. So really this looks pretty much the same, right? The only difference is that within the neural network, phi beta is actually learned. The algorithm is looking for the best possible features for this data, whereas in kernel methods we choose the kernel -- there's more of that prior knowledge and prior structure that we're infusing into the algorithm. Whereas here, there's flexibility in that these phi beta parameters can actually be learned to best fit the data. And because of this structural similarity, the penultimate-layer output a r minus 1 is sometimes called the features, or the representation, within a neural network. So if you ever hear terms like representation learning, it's talking about those hidden layers within the neural network and what we're learning there. Any questions here? Yeah, at the back. Just to check, that line where you say h theta of x equals W r phi beta of x -- that's all supposed to be one line, right? Yes. So phi -- whatever that Greek letter is called -- beta is the only subscript? So it's W r matrix-multiplied with phi underscore beta of x, all of that plus the bias term b r.
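A sketch of the view just described: treat everything up to the penultimate layer as a learned feature map phi beta, with a linear model on top; the shapes and values are made up:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)  # beta = (W1, b1) here
Wr, br = rng.normal(size=(1, 3)), rng.normal()        # last-layer parameters

def phi_beta(x):
    # The learned "features": the penultimate-layer activations.
    return relu(W1 @ x + b1)

x = rng.normal(size=4)
h = Wr @ phi_beta(x) + br   # same form as the kernel method, with beta fixed
print(h)
```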
Any questions here? Yeah, at the back. Just to check, that line where you say h theta of x equals wr phi beta, that's all supposed to be one line, right? Yes. So phi, whatever that Greek letter is called, is the only subscript? So it's wr matrix multiplied with phi underscore beta of x, plus all of that, plus the bias term br. Is the plus-the-bias-term part [INAUDIBLE] part of the-- it's not a subscript at all? No. OK, thank you. Yeah, sorry. Other questions? No? All right, so today we talked about two different things. We talked about supervised learning with nonlinear models and what that might look like, what the cost function looks like. And the second thing we talked about is neural networks. How do we construct them? We first started with two-layer neural networks and then expanded that notation to r-layer neural networks. And next time, Tengyu is going to talk about back propagation. So how do we actually optimize? How do we perform stochastic gradient descent, or any kind of gradient descent, in this framework? So how do we compute the gradients for the cost function and for the neural network? Any last questions? OK. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Neural_Networks_2_backprop_I_2022_I_Lecture_9.txt | So I guess the last time, Masha talked about deep learning, introduced deep learning neural networks. Today, we're going to talk about back propagation, which is probably the most important thing in deep learning-- how do you compute a gradient and implement this algorithm? Of course, there are many other kinds of decisions you have to make in deep learning. And also, this back propagation, this computing of gradients, has become standardized these days. You don't necessarily have to implement your own gradient computation process-- the gradient is done by the so-called auto differentiation algorithm. But this is actually the main algorithmic part in deep learning. So that's why we still teach it. And also, in some sense, it's still important because this idea of computing gradients automatically actually has many implications in other areas. For example, suppose you want to study so-called meta-learning. I'm not sure whether you've heard of this word, which is basically built upon this idea that you can do auto differentiation for almost any computation you want. And the same ideas also show up in other cases. And also, I know that some of the previous courses also covered back propagation. I think CS221 does cover back propagation in some sense. So what I'm going to do is-- in the past, in the last four years when I taught this, I had a lot of derivations with all of these indices. You computed the gradient-- you do the chain rule in a very detailed way. And this year, I'm going to try a slightly different approach, where I'm going to make this a little bit more packaged in some sense, divide it into submodules. And in some sense, it's a little more abstract. In some other senses, it's cleaner because you don't have to deal with all the small indices. Sometimes, there are five indices you have to keep track of. And now everything is in vector form. And this is not entirely new. It's not like I'm just doing an experiment, because last time I did both of the two versions. I have the very fine-grained derivations. And I also have the vectorized form. And I think I got some feedback from you that the fine-grained derivation is not that useful, because either you feel like it's messy-- I'm doing all of this computation on the board, and you know how to do it, and it's very messy-- or, if you haven't seen it before, it's hard to follow. So if you haven't seen any of this, if you haven't seen back propagation taught even for a very, very simple case and you want to do the very low-level stuff, probably take a look at the lecture notes afterwards. It's independent. What I teach here doesn't depend on anything. It doesn't require any background. But it's on a little more abstract level to some extent. I would say it's more on the vectorized level. So if you want the more low-level details, you can take a look at the lecture notes-- I think those parts are a little easier and mostly covered by some of the other courses as well. So that's why I'm making this decision. Don't worry if these comments don't make sense here, if you haven't seen any back propagation.
So I hope, at least, that the materials I'm going to cover today are still completely fine if you haven't seen anything about back propagation. So let's get into the details. So basically, the so-called back propagation-- this is just a word, a terminology in some sense, back prop-- is a technique for us to compute the gradient of the loss function for a neural network. So recall that last time, I think we talked about this. You have a loss function. For example, you have a loss function, J of theta-- maybe let's just look at one example, J subscript little j. This is the loss function on the jth example. And this is something like yj minus h theta of xj, squared. This loss function can be some other loss function; here, I'm only using the square loss for simplicity. And this h theta is the neural network. And recall that we talked about SGD, stochastic gradient descent. And when you deal with neural networks, you always have to compute this gradient. So the algorithm is something like this: theta gets theta minus alpha times the gradient of the loss function on some particular example, Jj-- maybe a random example-- at the parameter theta. So basically, today, what we are going to do is: how to compute this gradient. So this is the main thing today. And once you know how to compute this gradient, you can implement the algorithm. And there is a generic way to compute the gradient. So what I'm going to do is, I'm going to start with a very generic theorem. And the theorem, I'm not going to prove it. You don't have to know how to prove it. But I think it's good to know the existence of the theorem. So the theorem is saying that-- I'm going to write down the statement. But the theorem is basically saying that-- by the way, I'm going to focus on only one particular example today because, if you know how to compute a gradient for one example, then you know how to compute a gradient for all the examples. They're all the same. So the theorem I'm going to write down today is that if you know how to compute the loss itself in an efficient way, then in almost all situations, you know how to compute the gradient of the loss. And in almost the same amount of time. In some sense, if you hear it for the first time, it's a little bit striking. So let me write down the theorem. So I need to first specify some of the minor details, which don't matter that much. Anyway, this theorem is going to be stated somewhat informally. So let me define this so-called notion of differentiable circuits or differentiable networks. And let's say these differentiable circuits or differentiable networks are compositions of a sequence of, let's say, arithmetic-- am I spelling it correctly? Sorry-- operations and elementary functions. By the way, my definition here is a little bit hand-wavy, as you can see. But you will see the point. The point is not exactly about the details. It's more about the form of the theorem. So by arithmetic operations and elementary functions, I mean, for example, things like addition, subtraction, product, division. And some of the elementary functions you can handle-- actually, most of the elementary functions you can handle. So maybe cosine, sine, so on and so forth, exponential, logarithm, maybe ReLU, maybe sigmoid. There are many, many of these functions.
And suppose you have-- this is my definition of differentiable circuits or differentiable networks. And you can see a neural network is often one of these because, when you have a neural network, you have a lot of matrix multiplications. And no matter how many matrix multiplications you have, it's really just some complex composition of all of these operations. So in some sense, everything you compute is some combination of these things, no matter whether it's a neural network or something else. But I'm going to insist that they are differentiable, because I'm going to differentiate the output of this network. So here is my theorem. So maybe I should do-- I think a blue color is a little better? Is that right? Yeah, and this is informally stated. It's not very far from the formal version. I guess the formal version just requires some minor details about the differentiability, so on and so forth. So the claim is: suppose you have a differentiable circuit of, say, size N. N means how many basic operations you are using. So this is my size. The size means the number of basic operations. And suppose this circuit computes a real-valued function. So I'm going to stress this. This is a real-valued function f that maps, let's say, l dimensions to one dimension. I'm going to stress that this theorem only works when you have one output. So the function can only have one output. But it can have multiple inputs. So then, suppose you have such a circuit. Then the gradient of this function that you are computing-- so what is the gradient? The gradient at any point is a vector because you have l inputs. The gradient is an l-dimensional vector. The gradient at some particular point, let's say x-- x is just an abstract point-- can be computed in time O of N. I think-- I guess, technically, this is O of N plus l. But I'm not sure why, in my notes-- this is a typo, I think. Oh, I see. So I guess in time O of N, by a circuit of size O of N. And here, I have an implicit assumption, because I'm implicitly assuming N is bigger than l. If N is less than l, it's a little bit-- because if you have a circuit, most circuits should read all the inputs. So you probably need at least l time to read all the inputs. So that's why I'm assuming N is bigger than l. If N is not bigger than l, then this has to be slightly changed. But the message doesn't change. So anyway, what's the main message? The main message is that if you can compute a function f by a circuit of size N, then its gradient can also be computed in a similar amount of time. You just pay a constant factor more. And I think this constant is literally-- in most cases, this constant is just 1. Of course, when you really talk about the absolute constant, it has to depend on some of the small details, like how do you really implement this in the computer? So that's why I'm hiding the constant. But in some sense, you can view this constant as, say, 1. So I guess-- sorry, one of the constants, I think, also depends on how you count, right? Whether you assume you also have to compute the function f. So basically, the constant would be 2 if you have to compute both f and the gradient of f. So if you know the function f already, then this constant would be 1.
If you don't know the function f, then you have to first evaluate f and then do the gradient. And then, this constant will be 2. I'll discuss this a little bit later as well. But anyway, for the moment, you can just think of this as: you almost have the same amount of time. You only need the same amount of time to compute the gradient. So the gradient is never much more difficult than computing the loss itself. And this is very general, because it doesn't have to be a neural network. It could be almost anything like this. And if you want to instantiate this theorem for neural networks, for the losses of networks, then f in this theorem corresponds to the loss function on a particular example, Jj, the jth example. And theta is the variable x here. And the gradient of f corresponds to the gradient of the loss here. And what is l? So l is the number of inputs to this function f. So here, what is the input? What's the variable I care about? The variable I care about is the parameter theta. So l corresponds to the number of parameters. And N-- so what is the time to compute this loss function? The time to compute the loss function is similar to the number of parameters. So think about it: you have a neural network with a million parameters, and how much time do you have to use to compute the final loss function? You basically have to give your input to the neural network, go through the entire neural network, and do all of these operations. And basically, the amount of time you have to spend to evaluate the loss is also similar to the number of parameters. N is the time to evaluate the loss. So that means that computing the gradient, according to the theorem, also takes O of N time. So basically, you only have to take O of the number of parameters time to compute the gradient. Any questions so far? Would you mind explaining why l is the number of parameters? Because it seems like l is the dimension of the input already. So they sound pretty different conceptually. So here, that depends on how you apply this theorem to this setting. So here, I'm going to view this as a function of the parameters. So the parameter is my input to this function. I'm going to differentiate with respect to the inputs, right? So the theorem is a generic theorem. There is some function, and you differentiate with respect to the input of the function. It depends on how you use this theorem. So if you use this theorem and you say the theta corresponds to x there, then what is l? l is always the dimension of the x right there. So l is the dimension of the theta here. Thanks. Yeah, any other questions? So I guess the plan for the rest of the lecture is that we're going to show you how this works for neural networks. I'm not going to show you how this works for general circuits. It's actually not much different if you do it for the general circuits. It's just that you need a lot of jargon to really prove this for general circuits, because you have to consider all the generalities. So basically, I'm going to show the concrete example for neural networks. How do you compute a gradient? And when you see the concrete examples for the neural networks, you see how you do it for the general circuits, which I will probably discuss if I have time at the end.
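As an aside-- not something from the lecture itself-- this theorem is exactly what reverse-mode auto differentiation libraries implement, so a quick PyTorch sketch may make it tangible. One forward evaluation of the loss plus one backward() call produces the gradient with respect to every parameter, at roughly the cost of the forward pass. The particular network and sizes here are made up:

```python
import torch

torch.manual_seed(0)
x, y = torch.randn(4), torch.randn(())
W1 = torch.randn(8, 4, requires_grad=True)   # the parameters theta
b1 = torch.randn(8, requires_grad=True)
w2 = torch.randn(8, requires_grad=True)
b2 = torch.randn((), requires_grad=True)

a = torch.relu(W1 @ x + b1)                  # evaluating the "circuit" costs O(N)
loss = 0.5 * (y - (w2 @ a + b2)) ** 2        # J(theta), a single real-valued output

loss.backward()                              # the theorem in action: also O(N)
print(W1.grad.shape, b1.grad.shape)          # each gradient has its parameter's shape
```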
And another thing, before I move on from the theorem here-- I'd like to say that this theorem is also the basis for many of the so-called second-order methods. And also, for example, for meta-learning. I guess we haven't introduced meta-learning. But I think the part that is relevant is that-- so actually, I'm not going to talk about what meta-learning is in this course, just because it requires more background. But the general idea is that you can use this theorem twice so that you can do something about the second-order derivative. There's a specific setting where you can do this. So let me just give you a sense of what I mean here. This is not comprehensive, because sometimes you're going to use this theorem in different ways. But this is one way to use this theorem in a clever way to get something more than what it offers. So for example, suppose in the same setting, you can claim that for any vector v of dimension l-- this is the same as the input dimension of x-- then the Hessian, the second derivative, which is an l by l matrix, times this vector v, can be computed in O of N plus l time. And as I said, N is bigger than l. So this is still O of N time. So you can see that this is now a little bit more magical, right? Because if you want to compute this matrix, typically it takes-- even just to write down this matrix, even if you just want to compute each entry-- suppose each entry of this matrix takes O of 1 time, then you already spend l squared time to compute this matrix. So certainly, if you want to get O of N plus l time, your algorithm cannot just first compute this matrix and then take the matrix-vector product. That would be too inefficient, because computing this matrix itself would take l squared time. However, if you compute the whole thing without doing the matrix first-- without taking an explicit matrix-vector product, computing the whole thing altogether-- then there is a way to speed it up. And you can do it in O of N time-- well, N is bigger than l, basically. So you just take O of N time. If we're going to do that often, then why can't we just use Newton's method? Is that [INAUDIBLE]? Right. So Newton's method-- depending on which Newton's method you are talking about, the vanilla Newton's method requires this matrix. Then it's l squared time. We only know how to do the matrix-vector-- the Hessian-vector-- product. So that's why, I think, there are a bunch of papers in the last few years which try to implement the original Newton's method without computing the Hessian. They try to use the Hessian-vector product as a different way to implement the original Newton's method. And that was somewhat successful. I think you can have some new algorithms that try to only use the Hessian-vector product to implement an approximate version of Newton's method. Yeah, but it's not trivial. It's pretty nontrivial. You cannot do it in the most trivial way. Right. And actually, this corollary is pretty easy to prove. And when you prove it, the proof also tells you how to implement this. So basically, the way to prove it is that you just say, I'm going to define a new function, g of x. This new function is going to be the gradient of f at x, inner product with v. So this is a real-valued function, because this is a function that maps an l-dimensional vector to a real value.
The output is a real-valued scalar, right? Even though these are vectors, once you take the inner product of them, it becomes a scalar. And then, if you use the theorem-- so by the theorem, you know that g of x can be computed in O of N plus l time, because the gradient can be computed, and after the gradient, you take the inner product. So you spend O of N-- actually, O of N plus l time, just to be precise-- because computing the gradient takes O of N time, and then computing the inner product is l time. You get O of N plus l time. And then, you use the theorem again on g. So before, you were using the theorem on f, and you said nabla f can be computed. Now you are going to use the theorem on g. So you view g as the new f-- this g will be the f in the theorem, right? Then you verify whether g satisfies the condition. That's true, because g can be computed in O of N plus l time. And that means that nabla g can also be computed in O of N plus l time, because you can apply the theorem twice, right? And what is nabla g? The gradient of g of x-- if you do the computation, what's the gradient of this? It's really just, you take another gradient here. I guess this probably requires a little bit-- you can verify this using chain rules offline. But the gradient of g is really the gradient of the inner product of nabla f of x and v. And this is actually the Hessian of f at x, times v. Any questions? And you can do this for many other things, where, for example, if you define any function of the gradient, you can still take the derivative. For example, I think in meta-learning, sometimes you have maybe some special function of the gradient. And then, you have to take another gradient of it. And that's sometimes a submodule used in meta-learning. And then, you can apply this again. You can say, OK, because the gradient is computable in O of N time, and this function is probably just a very simple function, the whole thing can be computed in O of N time. And then, you can take another derivative again. So that's when-- I'm not going to go into more details. But that's how people are using it in meta-learning as well. So anyway, this part is advanced material. We are not going to test it in any of the exams or homeworks. But I feel like this is good to know to some extent, just because this is used in many other more advanced situations. Any other questions? Cool.
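The proof really is the implementation. Here is a minimal PyTorch sketch of the double-backward trick just described-- again my own illustration, not course code, and the function f is an arbitrary made-up example:

```python
import torch

def f(x):
    # any real-valued differentiable function of an l-dimensional input
    return torch.sin(x).sum() + 0.5 * (x @ x) ** 2

x = torch.randn(5, requires_grad=True)
v = torch.randn(5)

g, = torch.autograd.grad(f(x), x, create_graph=True)  # nabla f(x), kept differentiable
gv = g @ v                                            # g(x) = <nabla f(x), v>, a scalar
hv, = torch.autograd.grad(gv, x)                      # nabla g(x) = Hessian(f)(x) @ v
print(hv)                                             # the Hessian itself is never formed
```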
So I guess now, next, what I'm going to do is discuss a concrete case with a neural network. I'm going to start with two-layer neural networks. And then, I'm going to talk about deep neural nets. And basically, I'm going to apply the auto differentiation. The proof of the theorem is, basically, that you have to give an auto differentiation algorithm, also called back propagation. So I'm going to apply the back propagation to the specific instance, neural networks. OK. So before talking about neural networks, maybe let me briefly discuss a preliminary. This is the chain rule. I'm somewhat assuming that this is covered in most calculus classes. But I'm going to review it here, partly because I'm going to have potentially different notations from what you learned in your calculus course-- not exactly the same notations-- just because different calculus books have different notations. So for the purpose of this course, what I'm going to do is: suppose J is a function of theta 1 up to theta p. I'm trying to use notations as close as possible to our final use case, even though this part is supposed to be abstract. All the notations are just symbols. I'm trying to use similar notations, just to avoid too many confusions. So suppose J is a function of a bunch of parameters, a bunch of variables. And then, suppose you have some intermediate variables. How does this function work? This function is defined in the following form. You have some intermediate variables: maybe I say there is J1, which is a function of the inputs, theta 1 up to theta p, and so on, up to Jk, which are also functions of these. I'm not sure whether these formulas make sense-- I'm using the Ji as both the function and the variable, the output of the function. I think this is pretty typical in most math books. So you can view Ji as a variable. And then, this Ji is a function of theta 1 up to theta p. And which function is it? You still use Ji to describe that function. Does it make sense? And then, just because I don't necessarily care about exactly what the function is, I'm just going to give it a name. And this name is just the same as the variable. And then, J is a function of these intermediate variables. And once you have this two-layer setup, then suppose you care about the derivative. The chain rule is saying that if you care about the partial derivative of the final output with respect to some input theta i, how do you compute this derivative? What you do is you enumerate over all the intermediate variables. So you take the sum over j from 1 to k. And for each intermediate variable, you first compute the derivative of J with respect to that intermediate variable. And then, you multiply this derivative with the derivative of the intermediate variable with respect to the variable you care about, theta i. In symbols: partial J over partial theta i equals the sum over j from 1 to k of partial J over partial Jj, times partial Jj over partial theta i. And just to clarify dimensions: this is in R. Everything is a scalar so far. So this is the chain rule, just in the language of this course. Any questions? So now I'm going to talk about a concrete case. And maybe, let me also define some notations. So maybe one important thing to note is that, every time-- at least in the context of this course-- every time you have this partial derivative thing, we insist that the quantity on top is a real-valued function. We are not considering anything with multivariate outputs-- not because you cannot consider them. It's more because, typically in machine learning, if you are taking derivatives of a multi-output function, computationally it's a problem. So in some sense, I think in machine learning, people try to avoid that, at least on the algorithm level. Of course, in analysis, probably you have to think about that notion. But on an algorithm level, like 90% of the time, you always try to take the derivative of a real-valued function.
You never try to take derivatives of multi-output, multivariate functions. And in this course, we only have to think about the case where this is a real-valued function. So that's the notation here. Suppose J is a real-valued function. Then of course, it's clear what this would mean. If you take the derivative with respect to some scalar variable theta, that just means the partial derivative-- that's easy. But what if you have a vector? So that case is easy: if theta is a real-valued scalar, this is just the partial derivative. And what if A is-- is the green one not great? Maybe I'll use the black one. So if A is some variable in dimension d, then this derivative would also be in dimension d. That's our notation. I'm just clarifying the dimensionality, right? So basically, this would just be a collection of scalars: the partial derivative of J with respect to A1, up to the partial derivative of J with respect to Ad, which is dimension d. And we are also going to get into the situation where A is a matrix. Maybe A is a matrix of dimension d1 by d2. And now, what this notation means becomes a little bit tricky because, in different literature, there are slightly different conventions. For this course, what we're going to do is just that this has the same shape as A. So this is also just all the partial derivatives. And it has exactly the same shape as A itself. So basically, in this course's notation, these kinds of derivatives, or gradients, whatever you call them, always have the same shape as the original variable. All right. OK. So now let me talk about the two-layer network. This is just a review. I think Masha has talked about this. So the loss function is evaluated by the following. If you evaluate a loss function, what do you have to do? You have to first compute the output of the model. And then, you compare it with the label. And you compute the loss. So basically, the computation to evaluate a loss function is a sequence of operations. You first compute the so-called intermediate layers by doing something like this: the first layer of weights times the input, the data, plus some bias variable. Let's say this is of some dimension m. And then, you apply some entrywise ReLU. And you still maintain the same dimension. And then, you apply another layer. Let's call the output of that final layer O. So you apply W2 to a, plus b2. And this is the output layer. You want to make this one-dimensional. So you get R1. And often, this is called h theta of x. This is the model output. And then, you compute the loss. The loss is something like one half times y minus O, squared. By the way, I'm only having one example here, x and y. x is the input. y is the output. OK? So that's just a quick review of the two-layer neural network. I think I'm quite sure that we are using the same notation as Masha. Hopefully. [CHUCKLES] So now the question is, how do I compute the gradient with respect to W1, W2, b1, b2? [INAUDIBLE] for the-- in general, how many of the samples will we be using the gradient for? For all of them? Or will it be just the value of the gradient for a few samples, like in batches in the [INAUDIBLE]?? Right. So that's a separate question. That's about how you implement this algorithm, right?
So if every time you just take one example, you just have to do it for one example. If every time you take a batch, then you have to do it for a batch. But for the purpose of this lecture-- this is a good question-- for the purpose of this lecture, we only care about one example because, eventually, even if you have 10 examples, you do all of them the same way. Yeah. Yeah, you just repeat the calculation. Of course you can parallelize the gradients over all the examples. But the method is the same. [INAUDIBLE] So how do I compute the gradient of this? The gradient of the loss with respect to the parameters. So what I'm going to do is-- go ahead. Yeah, sorry. Just a question. Why is h theta of x equal to 0? Oh, no. Yeah, this is a variable called O. [CHUCKLES] I think I probably should-- how do I-- yeah, I should change the notes. Yeah, this is fine in the LaTeX, right? But how do I write this so that it does not look like a 0? [SIDE CONVERSATION] I think it's OK to-- let's see. So is there any chance that I can change the notation on the fly? I think I should be able to. What would you let me change it to? Sigma? [INAUDIBLE] Sigma is going to be used. It's going to be used for some other things. Tau. OK. But this will introduce some inconsistency with the notes because, in the notes, this is O. [CHUCKLES] Just to let you know. OK. But I probably should change that in the notes as well. And also, you see that this one doesn't show up that often. [CHUCKLES] Hopefully. OK. So what I'm going to do is, I'm going to try to use some chain rule with this, right? So how do you use the chain rule? For example, if you just look at one entry of this matrix-- this is a matrix, you look at entry 1, 2-- you can use the chain rule to derive the derivative of that entry. That's the typical way we do it. It's going to be a very complex formula. But you can still do it. So you use the chain rule multiple times and then compute the derivative of that entry. And then eventually, you get a lot of formulas. And then, you try to regroup those formulas into a nice form. That's actually written in the lecture notes. If you're interested, you can look at them. It's pretty much a brute-force computation. Of course, if you do that brute-force computation multiple times, then you get some experience. And you can do it faster in the future. So what I'm going to do is, I'm going to try to keep everything in vectorized notation. And still, I'm going to do the chain rule. But I'm going to do a more vectorized version of the chain rule. So because of that, I'm going to have this kind of abstraction. In some sense, I call this the chain rule for matrix multiplication. I guess there's no unique name for this. This is just invented by me. But this is really a simple fact. So I'm going to suppose z is equal to W times u plus b, where, let's say, W is a matrix of dimension, I say, m by d. And u is a vector of dimension d. And b is a vector of dimension m. We don't really have to care about the dimensions, because as long as they match, it's fine. And then, I have some function J applied on top of z. So this is my abstraction. And you can see this is a little bit like our setting, because if you map this z to this z, and this W1 to this W, and x to u, and b1 to b, then it's kind of like that. It's an abstraction of a part of the problem.
And then, I'm going to claim that the derivative with respect to W-- which is what I care about, at least if you map it back to here, we care about the derivative with respect to W-- is going to be equal to what? It's going to be equal to the derivative of J with respect to z, times u transpose. And maybe, just to make sure you are convinced that the dimensions match: this is supposed to be in R to the m by d, because the derivative, as I said, should have the same dimension as the original variable W. W is m by d, so this is m by d. And dJ over dz is in Rm, because z is an m-dimensional vector. And this is in R 1 by d, because u is of dimension d, and u transpose is of dimension 1 by d. I guess all the vectors are column vectors. So if it's an m-dimensional vector, it's really m by 1. All the vectors in this course are column vectors. That's why this whole thing matches dimensions: this is a column vector, this is a row vector, you take the outer product of them, you get a matrix. Actually, this is a rank-1 matrix. That's actually an interesting observation. The gradient with respect to the weight matrix, for one example, is always a rank-1 matrix. Oh, but for one example, not for all the examples-- not for the full gradient. If you just have one example, the gradient for the weight matrix is typically a rank-1 matrix. And also, we know the gradient with respect to b is going to be equal to the gradient with respect to z. So this doesn't solve everything, because you still don't know what this quantity, dJ over dz, is. This quantity is unknown here. And the same quantity here is unknown. But at least it solves the local part. It's like the chain rule, where it says that if you want to take the derivative with respect to W, then you only have to know the derivative with respect to the intermediate variable z. And next, I'm going to talk about how to take the derivative with respect to z. But this is a decomposition, right? It says that if you want to know the derivative with respect to W, then you only have to know the derivative with respect to z. Can you explain again why [INAUDIBLE]?? This is just what the formula tells me. But here, I'm verifying that it does make sense, because this is a 1 by d matrix, and this is an m-dimensional vector, or an m-times-1-dimensional vector, because I view all the vectors as column vectors. So that's why m by 1 times 1 by d gives m by d. Now why is it transpose? That's just because-- [INAUDIBLE] you want the dimensions to match [INAUDIBLE]. Oh, I think the fundamental reason is, you do the calculation, and it's exactly this. But it just happens to match. It has to match if the calculation is correct. And also, maybe that's a reasonable way to memorize it. [CHUCKLES] You have to make the dimensions match. So that's why it's the case. So how do you prove this? To prove this, you have to go low-level. You have to do it for every entry. So I'm going to show it once. And then later, I'm going to have more abstractions like this. And then, I'm going to prove it for you. So for this one, I'm going to do a quick proof. It's really just a derivation. What you do is you just use the chain rule. You look at the derivative with respect to any entry, Wij. And how do you do this? You use the most basic version of the chain rule. You loop over all the possible intermediate variables. So what are the intermediate variables? These are the z's, right?
It seems very natural to use the z's as the intermediate variables. So I'm going to loop over all possible intermediate variables, k from 1 to m. And each of the zk is one of the intermediate variables. So dJ over dzk, and then dzk over dWij. All right? And then, I'm going to plug in the definition of zk. So what's the definition of zk? zk is defined by this matrix multiplication. What is the kth coordinate here? The kth coordinate of this is going to be, by the definition of matrix multiplication, Wk1 times u1, plus Wk2 times u2, dot, dot, up until Wkd times ud, plus this b, the bk. And you look at the derivative of this with respect to Wij. So how to do this? First of all, to make this non-zero, you have to make sure that this variable at least shows up on the top. If the variable doesn't even show up on top, there's no partial derivative-- the partial derivative will be 0. So this is non-zero only if this Wij does show up on the top. And Wij shows up on the top only if k is equal to i, because only Wk-something shows up on the top. So only if i and k are the same can Wij show up on the top. That's why you only have to care about the case where k is equal to i. So the entire sum is gone. You only have to care about zi. So this is just Wi1 times u1, plus up to Wid times ud, plus bi, over dWij. And Wij only shows up once here on the top, linearly. Wij shows up on the top in this term somewhere in the middle. So there is a middle term, which just looks like Wij times uj. That's the term where Wij shows up. And the derivative of this with respect to Wij is equal to uj. So that's why this is equal to dJ over dzi, times uj. So that's my dJ over dWij. And then, I have to group all of these into a matrix form. So basically, if you group all of these entries into a matrix form, it will be like this. You're going to get all of these dJ over dzi into this dJ over dz term. And then, this uj term will be grouped into this u transpose. So I guess I'm using a very simple fact. The simple fact is that if something, maybe let's say xij, is equal to ai times bj, then the matrix x is equal to a times b transpose. That's the simple fact I'm using. If the entries of a matrix are equal to some ai times bj, then you can write it in matrix form, where this matrix x is equal to a times b transpose. a is a vector. b is a vector. Any questions? OK. So now I'm going to apply this abstraction, this so-called vectorized form of the chain rule. So what's the mapping for my problem here? The mapping is that z maps to z. And W1 maps to W. x maps to u. And J maps to J. So that's how I use this abstraction. And after I use this abstraction, what I get is that dJ over dW1 is equal to dJ over dz times-- the u transpose will be x transpose. So of course, this is not done, because we still want to compute dJ over dz. But we have a reduction in some sense. We did some partial work. Our goal is to compute the derivative with respect to W1, and now we've reduced it to computing dJ over dz. And next, I'm going to show you how to compute dJ over dz. So it's like you are peeling it off layer by layer in some sense. Of course, you can also do the same thing for b. I guess the b is always easier. So dJ over db1 will be just dJ over dz.
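As a sanity check on this chain rule for matrix multiplication-- the claim that dJ over dW equals dJ over dz times u transpose when z equals Wu plus b-- here is a small numeric verification against finite differences. The toy loss and sizes are my own choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 3, 4
W = rng.normal(size=(m, d))
b, x, y = rng.normal(size=m), rng.normal(size=d), rng.normal(size=m)

def J(W):                            # a toy loss that depends on W through z = Wx + b
    z = W @ x + b
    return 0.5 * np.sum((y - z) ** 2)

dJ_dz = -(y - (W @ x + b))           # derivative with respect to z, done by hand
claim = np.outer(dJ_dz, x)           # dJ/dW = (dJ/dz) x^T -- note it is rank 1

fd, eps = np.zeros((m, d)), 1e-6     # entrywise finite-difference check
for i in range(m):
    for j in range(d):
        E = np.zeros((m, d)); E[i, j] = eps
        fd[i, j] = (J(W + E) - J(W - E)) / (2 * eps)
print(np.max(np.abs(fd - claim)))    # agrees to roughly 1e-9
```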
So the next question is, how do you compute dJ over dz-- let's see, maybe I should use a new board. So next, I'm going to compute this. The z here is this z. So how do I do this? I'm going to have another abstraction. So this is the abstract problem. Note that the relation between J and z is through this a and W2. So there are some complicated dependencies between J and z. I'm going to abstract one part of it, just to make our derivation clean. That abstraction would be: suppose you have a variable a, which is sigma of z. And then, J is a function of a. And sigma is this entrywise activation function. Sigma is the ReLU in our case, basically. So I'm going to claim that in this case, your target, the quantity you care about, dJ over dz, is going to be equal to dJ over da, entrywise product with sigma prime of z. So let me explain the notation here. This is an m-dimensional vector. This is also an m-dimensional vector. Let's say in this abstraction z and a are all m-dimensional vectors. So then, these are all m-dimensional vectors. And this circle symbol is the so-called entrywise product. I'm taking two m-dimensional vectors, I take the entrywise product, and that gives the derivative I care about. And then, you can see that the only thing you have to compute next is: what is dJ over da? Because dJ over da is still unknown. I'm not going to do the proof for this one. It's actually even easier than the other one. You just have to expand and do the chain rule. So now, next, I'm trying to deal with dJ over da. How do I deal with dJ over da? I'm going to, again, abstractify this part of the computation. So the abstraction is: I guess I'm going to have a is equal to W times u plus b, and J is a function of a-- wait, am I doing something wrong? Sorry. My bad. I have to use a different letter-- maybe, let's call this tau. That's a perfect name for it. So tau is equal to W times u plus b, and J is a function of tau. So why is this a useful abstraction? Because the mapping in my mind is that tau means the tau above. And the W here means the W2 above. And b here means the b2 above. And J means the J. So note that the difference here is that this W now means the second layer. And if you make this mapping, then what you care about is dJ over du-- oh, I guess I didn't say what u corresponds to. So u corresponds to a, because a is what is multiplied with the matrix. So dJ over du, that's what I care about. You can see that even though this abstraction is very similar to the previous abstraction, the difference is that here I care about dJ over du. I care about the derivative with respect to the input of the matrix multiplication. And before, I cared about the derivative with respect to the matrix in the matrix multiplication. All right. So that's why it's a bit different. And you're going to have a different formula for it, of course, because one is the derivative with respect to W, and the other is with respect to u. So what's the formula for this? The formula for this, if you write it in matrix form, is W transpose times dJ over d tau.
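For reference, here are the three vectorized chain-rule facts in one place, written out in LaTeX-- my own consolidation of the board work; the lecturer numbers them Lemmas 1 through 3 a little later:

```latex
% Lemma 1: z = Wu + b, and J is a function of z
\frac{\partial J}{\partial W} = \frac{\partial J}{\partial z}\, u^\top,
\qquad
\frac{\partial J}{\partial b} = \frac{\partial J}{\partial z}

% Lemma 2: a = \sigma(z) entrywise, and J is a function of a
\frac{\partial J}{\partial z} = \frac{\partial J}{\partial a} \odot \sigma'(z)

% Lemma 3: \tau = Wu + b, and J is a function of \tau
\frac{\partial J}{\partial u} = W^\top \frac{\partial J}{\partial \tau}
```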
So I guess if you check the dimensionality-- let me specify the dimensions. So W is maybe something like-- I don't know, let me come up with it-- maybe R by m. And then, u is of dimension m. And tau then has to be of dimension R. So then, dJ over d tau is of dimension R. And W transpose is of dimension m by R. And that's why they can be multiplied together. Of course, another thing is that if you don't want to remember all of these equations-- first of all, you don't have to remember all of them. Second, if you want to remember them and you want to cheat a little bit, you can just view them as scalars. And you can see then, for example, this one would make a lot of sense if they were scalars. That's just the trivial chain rule. And this one makes a lot of sense if they were all scalars, because if you want to take the derivative with respect to u, then W has to show up as the coefficient. So everything makes a lot of sense if they are scalars. And the only tricky thing is that if they are matrices, then you have to figure out: what's the right transpose, do you left multiply or right multiply, so on and so forth? So once I have this, I can apply it to the special case above. If you apply it with this mapping, what you get is that dJ over da is equal to W2 transpose times dJ over d tau. That's because I'm just applying this general thing to this case. And now you see that there's only one thing that is missing. What is dJ over d tau? And that's trivial, because this is really just the very last thing: J is just one half times y minus tau, squared. So dJ over d tau, everyone can compute. This is just minus the quantity y minus tau. So what do you really do eventually? You first compute dJ over d tau-- maybe this is step one. And then, step two, you compute dJ over da, where dJ over d tau is used. And then, dJ over da is used here. So then, you do step three. And you get dJ over dz. dJ over dz is used here. This is step four. Wait-- maybe 4 is here. Then you get the derivative with respect to W1. So when you really implement it, you have to do it backward. When you do the derivation, you can do it either way. But I guess I'm doing it in this direction, from the first layer to the second layer. But when you do the implementation, you have to first compute this, and then this-- one, two, three, four.
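Putting steps one through four together in code form-- a minimal numpy sketch of the whole two-layer computation, under the conventions above (column vectors, ReLU hidden layer, squared loss). This is my own rendering, not the course's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 6
x, y = rng.normal(size=d), rng.normal()
W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
W2, b2 = rng.normal(size=(1, m)), rng.normal(size=1)

# forward pass
z = W1 @ x + b1                  # first layer
a = np.maximum(z, 0.0)           # entrywise ReLU
tau = W2 @ a + b2                # output layer (the "tau" on the board)
J = 0.5 * np.sum((y - tau) ** 2)

# backward pass
dJ_dtau = -(y - tau)             # step 1
dJ_da = W2.T @ dJ_dtau           # step 2, Lemma 3
dJ_dz = dJ_da * (z > 0)          # step 3, Lemma 2: ReLU'(z) is an indicator
dJ_dW1 = np.outer(dJ_dz, x)      # step 4, Lemma 1
dJ_db1 = dJ_dz
dJ_dW2 = np.outer(dJ_dtau, a)    # Lemma 1 again, with u = a
dJ_db2 = dJ_dtau
```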
I guess I didn't compute the derivative with respect to all the parameters-- so far I computed the derivative with respect to W1 and b1. What if you want to compute the derivative with respect to W2? Maybe that's a good question to see whether you've somehow digested this. What if you want to do dJ over dW2? The chain rule says the derivative is with respect to tau, and then times the derivative of tau with respect to W2, and-- And for example, which abstraction do you want to use? I have three things: one, two, and three. Which one do I use? Yeah, I think your answer is correct. I think you use this one, because W2 is the matrix. If you care about the derivative with respect to a matrix, then you want to use this. So basically, if you care about this, then you view this W2 as the W here. So you need a different mapping. You need to use the first abstraction, the first lemma. And then, you're going to say W2 corresponds to W. And the u now corresponds to what? u corresponds to the a here. And b2 corresponds to the b. And z corresponds to-- sorry, not z. The tau here corresponds to the z there. So the right-hand side is in the abstraction, and the left-hand side is in the real case I care about. So this means that this is dJ over d tau times-- I had u transpose before. Now I have a transpose here. And I also know dJ over db2 is going to be equal to dJ over d tau. Make sense? So now if you have all of this-- I guess this is still a little bit complicated. But it's a little better than before, because everything is now in vectorized notation. And now, what if you do it for multiple layers? You'll see that everything stays just the same. You basically are just going to repeat it, using these three lemmas for the three abstractions. I guess maybe I should number them in some way. Maybe, let's call this Lemma 1. And then here, I'm using Lemma 1. And maybe, let's call this Lemma 2. And this is Lemma 3. So for the deep neural networks, I'm just going to use Lemmas 1 through 3. And I'm going to get the gradient. So that's the last thing in this lecture. So let me first recall the deep networks. Suppose you have multiple layers of neural networks. Then, using the same notation as last lecture, the first layer is this. Then a1 is the ReLU of z1-- sorry, this is still the first layer; this is the activation. And then, you do the second layer. You have W2 times a1, plus b2, dot, dot. Then you get the r-minus-1th activation, which is the ReLU of z r minus 1. And then, you have the rth layer, zr, which is equal to Wr times a r minus 1, plus br. And then finally, you have a loss. I'll just write the loss here: J will be 1/2 times y minus zr, squared. So that's my computation of the loss function-- a sequence of matrix multiplications and activation functions. So now I'm going to try to compute the derivatives, the partial gradients. First of all, I'm going to compute dJ over dWk for some kth layer. So how do you do that? Can you raise that a bit [INAUDIBLE]?? [CHUCKLES] Yeah, it's just a little bit awkward. I guess maybe it's OK to ignore all-- you all know this is a deep net? So is that better? Yeah, nothing different. This is the same thing as last time. I'm just recalling the notations. So I want to compute the derivative with respect to the kth matrix. Maybe I say the derivative with respect to W2-- suppose k is 2. Now how do you do it? If I'm going to do this with respect to Wk, then the thing is that you want to use the lemma. So the lemma-- Lemma 1, I guess. Maybe that's the most relevant, because Lemma 1 is about taking the derivative with respect to some W. So you just do some pattern matching. You want to apply Lemma 1. So what's the abstraction here? How do you do the pattern matching? You say: how is Wk involved in this computation? Wk is involved in the following way: zk is equal to Wk times a k minus 1, plus bk. This is how Wk is involved. And then you say, I don't care about what happens next. I just abstractify the rest. So then, this is pretty much the same as the setting of Lemma 1. So then, using Lemma 1-- actually, you don't have to call it a lemma. You can call it a formula or something.
So Lemma 1 tells you that if you take the derivative with respect to the matrix, it's equal to the derivative with respect to zk-- zk is basically the output of the matrix multiplication-- times the input of the matrix multiplication, which is a k minus 1 here, transposed. So that means I only have to take the derivative with respect to zk. zk is basically one of these intermediate variables, right? So how do you take the derivative with respect to zk? You need to think about: how is zk involved in this computation? zk is involved directly in the following way: ak is equal to the ReLU of zk. And then J, the loss, is a function of ak. That's the part where zk is directly involved, because it first goes through the ReLU, you get a, and then you do some computation to get J. And then, you can use the so-called Lemma 2. Lemma 2 will tell you: what is dJ over dzk? It's going to be dJ over dak, entrywise times ReLU prime of zk. That is Lemma 2. And the third thing is, how do you deal with ak? What is the derivative with respect to ak? If I don't know the derivative with respect to ak, then I have to see, again, how ak is involved in this whole computation. So how is ak involved? ak is involved because you only use a to compute z again. You use ak to compute z k plus 1. So if you look at that part, z k plus 1 is equal to W k plus 1 times ak, plus b k plus 1. This is the first time ak is used, and the only time ak is used. And then, you say the rest of the thing is abstracted as a general thing. And now I'm going to use the lemma about matrix multiplication-- Lemma 3 is also about matrix multiplication. But it's about taking the derivative with respect to the input to the matrix multiplication. So a is the input. So using Lemma 3-- I'm not sure whether I should also make an explicit map. If you care about the explicit map, it means that ak corresponds to the u there in Lemma 3. W k plus 1 corresponds to the W there. And b k plus 1 corresponds to the b there. And J corresponds to J. I guess z k plus 1 corresponds to tau. So if you just keep pattern matching, I think it's a bit difficult to see the map. Actually, probably the right way to think about it is that you really think about the roles of these things. z k plus 1 is the output of the matrix multiplication. ak is the input to the matrix multiplication. And W k plus 1 is the matrix in the multiplication. So that's how you easily map the roles of them. And then, you get that this thing is equal to W k plus 1 transpose times dJ over dz k plus 1. And then-- oh, sorry, I think I have some-- is this-- this should be z. So then, you can see that if you look at these two-- sorry, this is by Lemma 3. So if you look at this formula and this formula, basically, this is like a recursion in some sense. Here, you are saying that from dJ over dak, you can compute dJ over dzk. And here, from dJ over dz k plus 1, you can compute dJ over dak. So basically, you can just recursively use these two to get all of them. So I'll just make it explicit. Basically, with all of these formulas, you already compute everything. It's just that I'm going to reorganize this to make it a little clearer. So what you do is, you start from the last layer, the last layer here.
So I guess, maybe let me just describe the final algorithm. First, there is the so-called forward pass. You compute all the values of all the variables. In some sense, I've already assumed that they're computed implicitly before. So basically, compute all the z1, a1, z2, a2, so on and so forth. Just by computing all of this network, evaluating the loss, you get everything. So you get all the variables. And then in the so-called backward pass, you compute the gradient. And the way you do it is pretty much the same as for the two-layer network. You start with the last one. So you first compute this for the last layer. And this is trivial, because J depends on zr in a very trivial way. So dJ over dzr is just minus the quantity y minus zr. All right. So this is our starting point. And now you recursively use both of these to get all of the dJ over dz and dJ over da. So you already have dJ over dzr. And then, you can use that to compute the previous one using this. From this, you can get dJ over da r minus 1 using this formula-- if I can do it on the fly correctly, I guess this is Wr transpose times dJ over dzr. So you reduce the index by 1, and you get to a. And then, you can use the a to compute the z-- the dJ over da to compute the dJ over dz. So this is number 1, number 2, number 3. You get dJ over dz r minus 1. So basically, after doing this iteration, you get from r to r minus 1. And then, you repeat. You can repeat-- I guess maybe I should number these. So maybe I should number this one 1. These will be 2, 2.1, 2.2, 3.1. In the first round, you get a r minus 2 using this equation-- maybe, let's say this equation is called 2, and let's call this one 1. So you use 2, and then you get the z version using 1. And you do this repeatedly. So you get everything about a and z. And after you get everything about a and z-- basically, after obtaining all of this-- it's pretty easy to get the gradient with respect to W, because you can just use the formula from before-- maybe, let's call it 3. You can say dJ over dWk is equal to dJ over dzk times a k minus 1 transpose. So I guess you can see that I do need a forward pass because, in my backward pass, I do require a bunch of quantities. For example, it does require that I know the quantity z r minus 1. So I have to save all these quantities, the a's and the z's, in my memory. And then in the backward pass, I'm going to use them.
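Collecting the forward and backward passes just described into one sketch for an r-layer network-- again my own hypothetical rendering of the board algorithm, with ReLU hidden layers and the squared loss as above:

```python
import numpy as np

def backprop(Ws, bs, x, y):
    # forward pass: a0 = x, zk = Wk a_{k-1} + bk, ak = ReLU(zk); no ReLU at the end
    a, zs, As = x, [], [x]
    r = len(Ws)
    for k, (W, b) in enumerate(zip(Ws, bs)):
        z = W @ a + b
        zs.append(z)
        a = np.maximum(z, 0.0) if k < r - 1 else z
        As.append(a)
    J = 0.5 * np.sum((y - a) ** 2)

    # backward pass: start with dJ/dz_r, then alternate Lemma 1 / Lemma 3 / Lemma 2
    grads = [None] * r
    dJ_dz = -(y - a)
    for k in reversed(range(r)):
        grads[k] = (np.outer(dJ_dz, As[k]), dJ_dz)   # (dJ/dWk, dJ/dbk) by Lemma 1
        if k > 0:
            dJ_da = Ws[k].T @ dJ_dz                  # Lemma 3
            dJ_dz = dJ_da * (zs[k - 1] > 0)          # Lemma 2
    return J, grads

# hypothetical usage with a 3-layer network
rng = np.random.default_rng(3)
dims = [4, 6, 5, 1]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.normal(size=dims[i + 1]) for i in range(3)]
J, grads = backprop(Ws, bs, rng.normal(size=4), rng.normal(size=1))
```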
And why is this called a backward pass? If you think about the computational flow-- I guess this is probably the last point I can make today-- the reason why it's called back propagation is that you can see the way you change the index: you start from r, then r minus 1, and then you do r minus 2, r minus 3. So you do this in a backward way. In some sense, if you draw a computational graph, or the flow of the computation-- I'm not being very formal here-- then you can view this whole computation as: you start with x. And then, you have some matrix multiplication. You need to use the weights, W1 and b1. You do this matrix multiplication, and you get z1. Then you apply the activation, and you get a1. And then, you do another matrix multiplication, with W2 and b2. And then, you get z2. And you do this repeatedly. And you finally get zr. And you get the loss, J. So this is like the forward pass. And if you think about how to conceptually organize the information in this backward algorithm-- I'm just trying to visualize this; of course, everything, all of this, will be in the computer-- what you do is that you first compute the derivative with respect to this variable, zr. And then you say, OK, how did you get this z? You got this z by some matrix multiplication-- let me say this is my matrix multiplication, where the input to this z is a r minus 1, right? And so then, you take this. And from this, you compute the derivative with respect to a r minus 1. That's my step 2.1. And then, you just keep doing this in a backward fashion. You figure out where a r minus 1 comes from. a r minus 1 comes from z r minus 1. So that's why you do a backward pass. You say this is coming from z r minus 1. That's my step 2.-- sorry, I'm not remembering these correctly. So this is 2.2. That's my step 2.2. And I keep doing this. Eventually, I get maybe z1. So that's the so-called backward pass, the back propagation, in some sense. And in the middle, you can also compute the derivative with respect to the W's, because the derivative with respect to W is equal to something about the derivative with respect to z. So basically, once you get this quantity, you can start from this quantity to get the derivative with respect to W r minus 1. And then, every time you get one more dJ over dz, you can get one more dJ over dW. So from this one, you can get the dJ over dW1. So that's why it's called back propagation. And maybe just one more thing-- I know we are running out of time-- just one more word about this. So this is a very sequential computational graph. But actually, if you have a more complex computational graph, which is not as sequential as this, you can do almost the same thing. Basically, you just write out this graph. And then, you figure out how to do the back propagation. And the general way to do the back propagation is exactly the same: you just run the graph backward. The only thing you have to figure out is: what's this relationship, basically, this arrow? How does the derivative with respect to a depend on the derivative with respect to z? So what's the derivative with respect to the input of this module-- you view this matrix multiplication as a generic module, let's say, right? And the only thing you have to figure out is: how do you write a so-called backward function? This backward function takes in the derivative with respect to the output of this module and outputs the derivative with respect to the input of this module. That's the so-called backward function. For example, if you write PyTorch, if you need to write a new module, basically you have to implement the forward function and the backward function. The forward function is: how do you get z from a? And the backward function is: how do you get the derivative with respect to a, given a hypothetical derivative with respect to z? By hypothetical, I mean just some vector. You take in some vector, and then you get some vector result. And this is the function you have to implement for the module. And once you have this module, everything else is systematic.
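Concretely, in PyTorch this module interface is torch.autograd.Function, with a forward and a backward static method. Here is a made-up affine module whose backward is exactly the lemmas from earlier-- this is my sketch, not code shown in class:

```python
import torch

class Affine(torch.autograd.Function):
    @staticmethod
    def forward(ctx, W, u, b):
        ctx.save_for_backward(W, u)       # saved for use in the backward pass
        return W @ u + b                  # z = Wu + b

    @staticmethod
    def backward(ctx, dJ_dz):             # takes dJ/dz for the module's output...
        W, u = ctx.saved_tensors
        # ...and returns dJ/dW, dJ/du, dJ/db, one per forward input:
        return dJ_dz.outer(u), W.T @ dJ_dz, dJ_dz   # Lemma 1, Lemma 3, and dJ/db

W = torch.randn(3, 4, requires_grad=True)
u = torch.randn(4, requires_grad=True)
b = torch.randn(3, requires_grad=True)
Affine.apply(W, u, b).sum().backward()    # autograd calls our backward for us
print(W.grad.shape, u.grad.shape, b.grad.shape)
```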
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_I_Societal_impact_of_ML_Guest_lecture_by_Prof_James_Zou_I_2022_I_Lecture_18.txt | So I'll be telling you about some of the applications of machine learning, especially in health care settings. So I'm an assistant professor here at Stanford. My name is James Zou, and a lot of my group works on actually developing and deploying AI systems for biomedical and for health care applications. So feel free to stop me if you have questions at any point. So what I want to do today is to just first give you a few examples, right, a few case studies, of what kind of AI systems we are using and deploying in health care settings and also talk about what are some of the challenges in actually building AI systems for health care applications. So the first example I want to give is actually based on this paper that we published a couple of years ago. It's on a computer vision system that we developed for assessing heart conditions, right. So the idea here is that there are these ultrasound videos. So if you go to the Stanford Hospital, right, or most of the hospitals, they will take a lot of these ultrasound videos, which look at the human heart. And we developed a system to basically read these ultrasound videos and, based on these videos, to assess the different cardiac conditions of the patient. And the system is also now being deployed-- we developed the system and published it. And then we spent much of the last two years trying to actually deploy this at different parts of Stanford. For example, that's a setup that we have using this at the emergency department here at Stanford. OK, so how does this work, right? So if you think about these ultrasound videos, these cardiac ultrasounds, also called echocardiograms, an example of this is shown on the left here, right? So if you think about the heart as like a power pump, the standard way to estimate how much power the heart is generating is by looking at these ultrasound videos. So there are actually millions of these videos that are being collected every year around the US. And the current workflow is that the cardiologist or the physician will actually look at these videos, and they would try to identify the frame of the video where this chamber of the heart is actually the most expanded, where it's the largest, and also try to identify the frame where the chamber is the smallest, right? And by looking at how much the volume of the heart changes going from the largest to the smallest, they can get an estimate of how much power the heart is producing, if you think of the heart as like a pump. So as you can imagine, that process can be quite labor intensive because it's quite manual, right, because they have to go through the entire video by hand. And then once they find the frame, they have to actually trace out the boundaries of the chamber to figure out the volume of the chamber of the heart. And all of those steps currently are done basically with manual annotations. So this is where we thought these machine learning applications, especially computer vision, can be very useful, right? So we developed a system called EchoNet, right? And what EchoNet does is to basically mimic this clinical workflow, right? So it takes as input the same kind of cardiac ultrasound videos like the one we see here, right? So it produces a real-time segmentation of the relevant chamber of the heart, which is the one that's shown in blue, right?
And in addition to doing this real-time segmentation of the heart chamber, it also produces a real-time assessment of how much power the heart is producing, which is technically called the ejection fraction. So that's what's shown here. So by doing this it really simplifies and automates these different manual parts of the clinical workflow. So I won't go into too much detail of the algorithm, but here's a high-level overview of what it's doing, right? So it basically takes videos as input, and there are basically two components of the algorithm. It's like the top arm and then the bottom arm. So the top arm is basically looking through these videos using a spatiotemporal convolutional network, right? Because it's a video, we have both the spatial information-- for any given frame, how big the chamber is-- and also the temporal dynamical information, right? So this is sort of a modification of the standard kinds of ConvNets that you typically have seen on the ImageNet type applications, by adding an additional dimension that captures the temporal dynamics. OK? OK, so it's like a three-dimensional convolution in that sense, right? So it's going through that in the top to extract the features. In the bottom, it's basically doing this real-time segmentation, right? So it's basically producing a segmentation of this chamber of the heart that was colored in blue, right? And these two arms will come together at the end to make an actual real-time assessment of how much power, or the ejection fraction of the heart, for every beat because, once you have the segmentation for every beat, you can actually then assess the power. So after it does this assessment, there's a final classification layer where it's actually trying to predict all sorts of relevant and clinically interesting cardiac phenotypes. So there's the probability of heart failure-- you can predict that. You can also assess ejection fraction, which again, is basically how much power the heart is producing. So it turns out that we can also use the same layer to predict all sorts of other functions, like liver or kidney function, because it turns out that, once you know how the heart is doing, you can actually learn a lot about the rest of the body. Yeah, question. You choose the [INAUDIBLE]. So I guess like when [INAUDIBLE] he wants me to do with sequence, you do something like the [INAUDIBLE]. Here, you explicitly pass in three dimensions, at least sometimes. How do you decide [INAUDIBLE]? Yeah, so it's a great question. So here, if you look at these videos, right, the heart is actually pretty repetitive, right? Roughly once a second, the heart would expand and then contract. So there's actually a lot of repetitive spatial information, which actually makes it quite well suited for these kinds of more convolutional architectures, which are looking for these spatial patterns or temporal patterns. And here, in particular, we basically are looking at the time scale of basically once every second, right? So we get maybe around, I think-- and then, so for every second, we make like one assessment of the ejection fraction and also the assessment of the heart condition for every individual beat, which is about once a second. And then what happens in the end is that, for every beat of the heart, we get an assessment of, OK, how much power is that heart producing and how likely the heart is to have different diseases.
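To make the "add a time dimension" idea concrete, here is a minimal PyTorch sketch of a spatiotemporal (3D) convolution over a video tensor; this is only an illustration of the idea, not the actual EchoNet architecture, and the layer sizes are made up.

```python
import torch
import torch.nn as nn

# A tiny stand-in: a video tensor shaped (batch, channels, time, height, width).
video = torch.randn(1, 3, 32, 112, 112)

spatiotemporal = nn.Sequential(
    # The kernel covers 3 frames x 7x7 pixels, so it sees temporal dynamics too.
    nn.Conv3d(3, 16, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3)),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(16, 1),   # e.g., a single regression head for ejection fraction
)
print(spatiotemporal(video).shape)  # torch.Size([1, 1])
```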
The video itself actually could have multiple seconds. You can actually capture many beats. So at the end, we actually do an aggregation across all of those beats to say holistically for the patient then what is the status of the patient. Cool. Other questions? Great. So this is the system now that's actually used and developed here using data from Stanford. And we also test the system, both at Stanford and at other hospitals. So one of the places we tested is actually in a hospital in Los Angeles, the Cedars-Sinai. And we just took the algorithm without any modification and then just shipped it to Cedars, right, and then just see how did it do, right? And we're actually quite happy to see that, even without any modification on a different hospital where they had different ways of collecting these images, collecting videos and data, the algorithm actually had very similar performance as it did at Stanford. So the AUC is quite high. It's about 0.96, right. OK, so that's the first example. Any questions about that? OK, so the second example I want to briefly mention, right, is an application of AI more for telemedicine or telehealth applications. So what is telehealth, right? So in normal settings, where usually if you have some sickness or if the patient has some illness, they'll actually come in to visit the doctors in person. But recently, the last few years, there's been an explosion of the need for visiting and having patient doctor interactions, not in person, but through digital formats, right, especially without having the patients needing to leave their homes. And so you can imagine that, really, the need for these kind of telehealth or telemedicine applications have really expanded, especially during the pandemic, right? So for example, just at Stanford and Stanford hospitals, just over the last two years, there's actually something like a 50-fold increase in the number of these digital or televisits compared to about two years ago. So one of the-- so telemedicine could potentially be really transformative for health care, right? If you can imagine not having to actually leave your home and drive an hour to come to Stanford, right? It's much easier to see doctors and also make appointments. One big challenge, right, with telemedicine in general is the idea that can you actually get sufficient information without having the doctors seeing the patient in person, right? And in particular, oftentimes, a lot of the information that the doctor gets is by sort of visual interactions with the patient. If I actually see you face to face, then you can often examine them quite closely. And that's difficult to do when you're doing this on Zoom or some other video visits. And that's really one of the big challenges of telehealth is in getting high-quality images, right, from the patient to the doctors. And for example, at Stanford there are actually a large number of visits that are wasted. So patient will set up a visit with a doctor, but the doctor is not able to get sufficiently good quality images from the patient. So then you can't really make an informed recommendation or informed diagnosis, right? And then they have to reschedule and we turn that back into in-person visits. So it's actually a large number of hours both by the patient and physicians that's wasted because of this lack of quality images. 
So what we want to do is to basically see, can we actually use machine learning, especially computer vision, to improve the quality, right, of these images specifically for these telehealth type applications, right, because it turns out that people are very good at taking photos for their Instagrams and for their Facebook. But maybe they're not so good at taking photos that are clinical quality, i.e., informative for this clinical decision-making process. So the idea is, can we actually use machine learning to guide people and help people to take more clinically informative images? So that's the motivation behind this algorithm that we developed. It's called TrueImage, which has also been commercialized through Stanford. And so the motivation is quite intuitive, similar to how online check deposit works. So the idea is that we want something that's very simple that could be run on people's phones. And then it would automatically tell people: is the image that you're taking, maybe of your skin lesion, sufficiently high quality for your dermatologist or for your clinical visit? If it's not so good quality, then the algorithm actually provides real-time feedback, guidance to the patient on how to improve the quality. Maybe it's just that you want to zoom in a bit more or you want to move closer to the window to get better lighting. So it provides this real-time guidance until they get sufficiently high quality. So yeah, the TrueImage algorithm is designed to be run on the patient-facing side, right? So the patient could be taking a photo of their skin, and then they want to use that to send it to their dermatologist for their televisit, right? And the algorithm basically would decide if the photo is sufficiently good quality. If it's sufficiently good quality, then that's fine. The images go through as normal. If it's not so good quality, right, the algorithm would decide how to give a recommendation to the patient. Say, how do you actually improve the quality of your photo? In this case, maybe it assesses, OK, you need to move to brighter lighting, right? And the patient would retake that, and if it's good enough, then it's passed through by the algorithm. Is the setup and application clear to people? OK, so a little bit more about under the-- oh, gosh, maybe I'll just jump into how well does this work? So for the algorithm, we actually conducted a prospective study here at Stanford. Prospective study means there's actually a real-time study where we recruit the patients that use these tools. So almost like a clinical trial. So this is done in the dermatology clinics that Stanford operates. And we tested this on about 100 patients, right? And it was actually quite effective. So by the patients using this algorithm, they were able to basically filter out about 80% of the poor-quality photos, i.e., photos that they would have sent to the clinician but that would have been useless to the clinician because they are not sufficiently high quality to actually make a meaningful, informed diagnosis. It's also nice that this is actually very fast. So on average, it takes actually less than a minute for a patient to generate a high-quality image by using the TrueImage algorithm. And this is an example of the kinds of improvements that you see here. So maybe this is an initial image that someone actually would take and send to their doctors for these telehealth applications, or for these telehealth visits.
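A minimal sketch of the kind of automatic quality checks just described, assuming OpenCV: blur is flagged via the variance of the Laplacian (a standard sharpness proxy) and poor lighting via mean intensity. The thresholds and messages are made-up illustrations, not the actual TrueImage criteria.

```python
import cv2

def quality_feedback(path, blur_thresh=100.0, dark_thresh=60.0):
    """Return a list of quality complaints for a photo, or [] if it looks OK.

    blur_thresh and dark_thresh are illustrative cutoffs, not the values
    TrueImage actually uses.
    """
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    issues = []
    # Variance of the Laplacian: blurry images have few edges, so the
    # Laplacian response is nearly flat and the variance is small.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
        issues.append("image looks blurry -- hold the camera steady")
    # Mean pixel intensity as a crude lighting check.
    if gray.mean() < dark_thresh:
        issues.append("image looks dark -- move closer to a window")
    return issues
```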
And that TrueImage algorithm would identify that this image has the following issues with the blur and the lighting. It makes a recommendation. And then after using the algorithm, right, the patient actually have a much better image that now would facilitate their televisit. So this is actually being now-- it was tested at Stanford in dermatology settings, and it's also being integrated now into the Stanford medical records. Cool. Any questions about this before we move on? Yes. Beyond the dermatology, I know that dermatology is like, probably, a field that is pretty-- this is pretty useful because you can make a judgment just based on [INAUDIBLE].. But beyond that, are there any other readily convertible fields in medicine? I feel like oftentimes you need-- your doctor is looking at your throat or something that's less applicable? Yeah, so it's a good question. So I think dermatology is probably the most immediate application of this, right? There are also a few others of more like primary care settings, where often the doctors actually get more information just by inspecting what the patients look like and how they behave. So this is where it could also be useful. For other settings, where it actually requires taking a biopsy of the patient, for more pathology or cancer diagnosis beyond skin cancer, then the patient will still have to come in to the visit. Yes. Is it appropriate any sort of domain knowledge into identifying what you're actually trying to take an image of? Yeah, yes, it's a good question. So what the algorithm does is it actually does incorporate quite a bit of domain knowledge. So one of the developers on our team for doing this-- she's actually one of my postdoc-- was a dermatologist. So she sees patients one day a week, and we actually did our initial piloting of this in her dermatology clinic. So for example, the kinds of domain knowledge that comes in will be, first, we actually take the image. Then we first segment out just the skin part of the image because if you take an image, there could be all sorts of background, maybe our furniture and chairs. And we don't really care about the quality of those backgrounds. So if you segment out to the relevant human skin. And we identify-- after we segment out the human skin, right, then we also try to map onto what we think are the likely issues with the image, if there are issues. So oftentimes, the things that come in would be like is the image-- does it have enough lighting, right? So the kinds of lighting that's required for a dermatologist is actually quite different from the lighting that's from the standard photos. So that's actually one place where often people make mistakes, right? And another place where the expert knowledge is very useful is in terms of how much zoom is needed. So sometimes, if people zoom in too much, that's actually not so good because then they lose the context of the surrounding-- if you just zoom in only onto your lesion, then you lose the context of the neighboring parts of the skin. Or if it's too zoomed out, you also don't have enough information, right? So there's an optimal zoom, which we get basically actually by-- so the way that we train two images is actually by having dermatologists generating annotations about what is the optimal zoom from a database of images. Cool. Yeah, good questions. So another example that's very quickly mentioned is we've also been developing these algorithms for using machine learning to improve clinical trials. 
Whereas, the clinical trials are the most expensive part of medicine. Each trial could actually cost hundreds of millions of dollars to run, and really the bottleneck of this entire biomedical translation process. So one place where we found where machine learning can be very useful is in helping these clinical trials or helping the drug companies to decide what are a good set of patients to enroll in a given clinical trial because you want the patients to be that they're diverse. So they really cover diverse populations. And also that the drugs are likely to be safe and effective for that set of patients, right? So this is basically a tool that we developed called Trial Pathfinder and for helping to guide the designs of the clinical trials, specifically the designs of which cohorts of patients are eligible to participate in the clinical trial. And this is being piloted now by some of our collaborators and partners at Roche Genentech, which is the largest biopharma company. And if you're interested, the more details are described in this paper. Good. So now that we've talked about a few examples, right, of where machine learning can be used in health care settings and where I think it's having a substantial impact, I would also like to discuss some of the challenges and opportunities that arise when we actually think about deploying machine learning in practice. In 229 we talk a lot about actually how to build these algorithms, right? And there's also a lot of interesting challenges after we build them. Think about how do we actually deploy and use them in practice. So just to set the stage, I want to give you a concrete example, which is like a little detective story. So here's the story, right? I mentioned these dermatology AI applications. So dermatology is actually one area where there's been the most intense interest and investment in developing AI algorithms, precisely because the data there is relatively easy to collect. And oftentimes, these algorithms will work as follows, where you take your photo maybe of a lesion. You can take your phone and take that photo. And then behind the scene here there will be some sort of-- often a convolutional neural network that looks through this photo and try to classify is this likely to be cancer or not. So in this case, it actually predicts that it's likely to be melanoma, so it's skin cancer. And the recommendation is that the patient, the user, should go visit the dermatologist as soon as possible. OK, so the reason why this is useful is that there are actually several millions of people every year who have skin cancer but are not diagnosed until it's too late. And with skin cancer, if you diagnose it early on, then it's actually very treatable. But if it's too late, then it's deadly, right? And potentially, many of these people they actually could have made earlier diagnosis because many of them have access to be able to take these photos. So that's the reason why there's a lot of interest now both by academic groups and also by commercial companies, like Google, in pushing out this kind of AI for dermatology applications. So of course-- yes, go ahead. One question, what's the target going to do [INAUDIBLE]?? This is like the ordinary patient, or it's actually the doctor because I mean if there's something growing on your skin, [INAUDIBLE] it's dangerous to actually show and not do something [INAUDIBLE].. So what is your target base? How do you take care of the region? Yes, good question. 
So there are algorithms that people are putting out that are consumer-facing. There are also algorithms that are more clinician-facing. So most of these ones here are actually more consumer-facing. Oh. And the reason is that, to actually make an appointment to see a dermatologist could be like a three-month or six-month wait time, yeah. Whereas, maybe people, they don't want to make a visit every time they see something because they don't know if it's likely to be serious or not. So it's basically for that kind of application that a lot of these consumer-facing algorithms are being put out. OK, thank you. Yes. Just one question. What's the [INAUDIBLE]? Is that overall accuracy or just focusing-- because we don't want people that actually have the cancer signal, but because of that have like [INAUDIBLE]? Yeah. So that will be-- what if the absence of [INAUDIBLE]? Is it possible that it is a cancer [INAUDIBLE] maybe [INAUDIBLE]? Yes, yes, it's a good question. So here I'm showing you the AUC of these algorithms from their original publications and papers. And in addition to AUC, people also care quite a bit about the sensitivity and specificity, which I'll also mention in a bit. And in particular, I think the sensitivity is probably the important thing here. Sensitivity here means that, if the patient actually has skin cancer, how often would the algorithm say that they actually do have skin cancer? So that's actually really the important part. If a patient doesn't have skin cancer and the algorithm says you have skin cancer, that's not great. But it's actually not too bad because maybe they'll get it checked, and they say it's OK. But if you actually miss the diagnosis, then that can be potentially more damaging to the health. OK, so given that there's a lot of interest in these algorithms, certainly we're interested in thinking about how to potentially use them and deploy them here at Stanford, right? So we actually took three of these state-of-the-art dermatology AI models. They're all solving this task of: given a photo, predict whether it's malignant or benign, skin cancer or healthy. And we tried them out here at Stanford. So if you look at the original algorithms, they have very good performance. So AUC is very high. However, when we tested them at Stanford on real Stanford patients, the performance certainly dropped off quite a bit, right? It's much worse. The AUC dropped from 0.93 to about 0.6. So that's the setting of this little detective story. So what we want to understand is why did this happen? Why did these algorithms perform so poorly on Stanford patients? Because we really need to understand that if we really want to be able to use this in a responsible way in practice. And just to be clear here, these are actually just images from real Stanford patients, right? There's no adversarial perturbations or attacks done on top of these images. So before I tell you what we found with this, would people actually like to guess? What do you think are some potential reasons why the algorithms' performance dropped off so much when they're applied on the real patients? Any ideas? [INAUDIBLE]? So when it was in the in-patients it was 0.93 in [INAUDIBLE] and Stanford version, it was 0.6. But is that Stanford patients or the real patients [INAUDIBLE] that was taken [INAUDIBLE]. So 0.93 was the performance of these models on their original test data. So the original test data also came from those real patients that these companies or these groups have collected.
But 0.6 is the performance of these algorithms when you apply them to the Stanford patients. OK, is there a timing difference? Was it best to be [INAUDIBLE] taken [INAUDIBLE] people in [INAUDIBLE] different time period? So that could be one possible factor. I guess there are some differences, like temporal differences in data sets. It turns out-- yeah, go ahead. I was just going to guess there may be a distribution difference in age. If that happened, Stanford patients will be more [INAUDIBLE] college students in [INAUDIBLE] maybe more concentrated there [INAUDIBLE] that you have in [INAUDIBLE]. So that's also a good idea. Maybe there are some age differences between the original test patients and the Stanford test patients. Yes. I was just going to say [INAUDIBLE] patients who are in California. People get more sun, so they [INAUDIBLE] the types of skin [INAUDIBLE] have different [INAUDIBLE] people, so someone might [INAUDIBLE] in California versus [INAUDIBLE]. OK, so that's also a good idea. So maybe there are some changes in the location which drive different distributions of diseases, skin diseases that are more common here. These are good ideas. Yeah, any other suggestions? Yeah? I don't know whether that's the case, but maybe the quality of the image would matter? So that's also a good idea. So maybe there are some differences in what kind of cameras were used or in the quality of the images across the original test data and the data here. So these are all good ideas, but there's actually a couple of really big factors that haven't been talked about yet. People want to say more? Or assuming that they have their training processes and separate their train, validate, test sets, since they're not doing anything weird, like, actually seeing test data, is that a factor? So maybe there's some question about whether the models are being overfit or not to the original test data. OK, yeah, so these are all excellent suggestions. So we actually did sort of a systematic analysis, an audit, to figure out what happened here. And the goal of this audit, mathematically, is that we really want to explain this drop-off in performance. So we see this drop-off from 0.93 to 0.6. We want to understand what are the factors that statistically explain this difference in the model's behavior? So it actually turns out one of the biggest single factors is actually label mistakes, right? So what does that mean? So if you look at the original test data, the data that was used to evaluate these original algorithms in their initial publications, it turned out that the original test data had a lot of mistakes in their annotations. So what happened is that the test data were generated by having these dermatology images. And then they would have dermatologists visually look at the image and say, is this benign or is this malignant, because that's relatively easy to collect. However, even having experienced dermatologists visually looking at images can also lead to a lot of mistakes, right? So the actual ground truth comes from taking a biopsy of the patient and then doing a pathology test to say, does this patient have skin cancer or not. So the actual ground truth from the biopsy is basically the labels that we have here at Stanford. But the labels from the original test data actually had a lot of noise in them because they were just coming from these visual inspections. And this actually explains quite a bit of the drop-off in the model's performance.
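For reference, here is a small sketch of how the metrics in this discussion -- sensitivity, specificity, and AUC -- are computed, assuming scikit-learn is available; the toy labels and scores are made up.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = biopsy-confirmed cancer
y_score = np.array([0.9, 0.4, 0.8, 0.3, 0.2, 0.6, 0.7, 0.1])
y_pred = (y_score >= 0.5).astype(int)          # threshold the model's score

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # of the true cancers, how many did we flag?
specificity = tn / (tn + fp)   # of the healthy cases, how many did we clear?
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, auc)
```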
And this is maybe not something that's the first thing that comes to mind. If you think about somebody who built a test data set and evaluated it, oftentimes, in machine learning, we just assume that the test data should be pretty clean, should be good. But in practice, the quality of the label itself in the test data can often be highly variable. It depends on the real-world application, right? So the first question we should always ask is: how good is the quality of the test labels? How good is the quality of the test data? So a second big factor which people mentioned is that there is actually a distribution shift in the different types of diseases, right? So the original test data all had relatively common skin diseases, again, because those are relatively easy to collect. The data here we have at Stanford had both common and also less common diseases because all sorts of people come to Stanford to get treatment. And the algorithms perform worse on the less common diseases. And because of this distribution shift, it also explains some of the drop-off in the model's performance. So the third factor is that actually it turned out that these algorithms had significantly worse performance when applied to images from darker skinned patients. So specifically, if you look at the actual sensitivity of the model, which, as we said, is what we really care about-- if a patient has skin cancer, how likely is the model to find that skin cancer-- the sensitivity is actually much lower when the algorithms are applied to images that come from dark skinned patients. And when we dug deeper into this, it turns out that actually the original training and test data sets had very few, and in some cases, zero images that come from darker skinned individuals. And now, I think the overall takeaway here is that, oftentimes, when we look at the application of machine learning in the real world, in practice, it's very difficult to interpret the performance of the model. So if someone tells you their AUC, it's almost meaningless unless you really know the context of what is the data that's used to evaluate that model. So here I talked about the dermatology settings, but we actually did similar kinds of audits of all of the medical AI algorithms that are approved by the FDA. So as of last year, there were over 100 medical AI systems that were approved by the FDA so that they can be used on patients. So each symbol here corresponds to one of these algorithms. And here I'm just stratifying them by which body part they apply to. So some of them apply to the chest or to the heart, et cetera. So there are a bunch of interesting findings we have from auditing these algorithms. I just want to highlight maybe a couple of the salient points just for today. So the most interesting thing is to just look at the color. So I colored each of these algorithms blue if the algorithm actually had reported evaluation performance across multiple locations, maybe from multiple hospitals. Otherwise, it's colored gray. So it's already quite clear that for most of these algorithms-- over 90 out of the 130-- we couldn't find evaluation performance across multiple locations. We only see how it works at one site. OK, in addition to that, only 4 of these 130 devices were tested using a prospective study. By prospective, I mean more like a human in the loop study. So they have the algorithm, and they tested how this works in a real setting with maybe doctors or with patients. So the remaining ones were tested on retrospective data.
And that means that somebody collected a benchmark data set beforehand. And then they applied the algorithm to that benchmark data set. So the retrospective benchmark data set could actually have come from the same hospital where the algorithm was being trained or developed, right? So as we saw from the previous example in the dermatology setting, so if you only have data from one location that's collected retrospectively, that can potentially mask substantial limitations or biases in these models. OK, any questions? Yeah. Pretty surprising. What is the process that the AI [INAUDIBLE]?? That, it seems like this is pretty important [INAUDIBLE].. Yeah, it's a good question. This is actually something that we are working together with the FDA on, right? So the FDA has a quite rigorous process for evaluating drugs. For example, for the COVID vaccine to be approved by the FDA, they have to run a very large-scale, randomized clinical trial to show that the drugs are safe and effective. The evaluation standards for medical AI algorithms by the FDA is actually quite different compared to drugs. So for example, these algorithms, they do not have to go through a prospective clinical trial. It doesn't have to be randomized clinical trial. So that's why many of them were tested on these just retrospectively collected benchmark data sets. So one of the interesting challenge going forward is to figure out what's the proper way to evaluate and to monitor these algorithms in practice? OK, so now given that these challenges that we saw here, so I just want to quickly go through a few lessons that we've learned or recommendations that we have for how do we improve the reliability or trustworthiness of these AIs, especially as they're being deployed in health care or biomedical context, now based on some of our experiences from both building, deploying, and also evaluating these models. So I'll say a little bit, a couple slides about each of these points. So the first one is that I think, as we saw, that there needs to be a much greater amount of transparency about what data is actually used to benchmark or test each of these algorithms, right? For example, just to give you a concrete visualization of this, we actually did a survey analysis of what are all the different types of data that are used to benchmark dermatology AI models? So each square here corresponds to one of these dermatology AI models, where people have published a paper on this. And so the squares are the models for the papers. So the circles are the data sets. So the size of the circle corresponds to how big that data set is. And there are two colors, right? So the red circles are basically the private data sets, which means that these are data sets that someone-- could be a company or academic group-- has curated. But they're private so that nobody really has access to it. And then the blue ones, circles, correspond to the public data sets. These are the openly available benchmark data sets. So for example, one here is a relatively large public data set. It's often used by many of these algorithms for benchmarking. And what's quite interesting is that there's actually a large number of these algorithms, maybe about half of them, that were mostly tested or entirely tested on relatively small, actually, private data sets. Basically, all of this was on the top right. 
All right, and those are the ones where it's actually could be potentially problematic because then it's very hard for people to audit and to understand what's going on in the data and what's going on in these algorithms. So we've been very keen to try to release much more publicly available data sets. So I mentioned that we built this cardiology video algorithm. And actually, as a part of that paper, we created and released, I think, one of maybe the largest publicly available medical video data sets. So it has basically all of the videos that we trained and tested on. So we released all of those. There's over 10,000 videos along with the patient's information and annotations. So this is all publicly available so that people can use this for additional research. And I think this is still maybe one of the largest public data set of medical videos. So in addition to understanding what data goes into developing the models, so we're also very interested in thinking about more quantitative ways to understand how do different types of data contribute to the performance or to biases of the models? So what does that mean? So from a machine learning or statistical perspective, oftentimes, you have your training data. Here you have these different-- each sample could be one of these sources of training data, data from a particular hospital, right? And then you have your favorite learning algorithm. So we're model agnostic. Or it could be a deep learning model or could be XGBoost, random forest. And you have whatever performance metric that you care about in deployment. It could be accuracy or some sort of loss, F1 score. And let's say if your model actually gets 80% accuracy, so ideally, what I want to do is to be able to partition that 80% accuracy back to my individual data sources of my training set, right? So I want to say that, oh, how much each of the data points for each data sources contribute to the model's performance? And the reason why that's useful is that, if the model actually makes mistakes in deployment or if it exhibits biases in deployment, as we saw, then we also want to be able to say very quantitatively what specific training image or training source actually are responsible for introducing those biases or mistakes in the model's behavior. So if we can actually do this end to end from the data to the model and then going back to the data, so then this will make the whole system much more accountable and more transparent. OK, so in a bunch of works with my students, we have actually developed approaches for doing this, for exactly trying to do this data evaluation. It's based on this ideas we're calling data Shapley scores. So the idea here is that we're able to compute a score for each training point. It could be a training image. And the score would actually quantify how much does that image contribute to the model's behavior either positively or negatively in deployment? So for example, if we use our dermatology as our running example, so the training set could be quite heterogeneous. It could be quite noisy, as we saw. And the models trained on this could have relatively poor performance when it's deployed in clinical settings. So the data Shapley score that we proposed actually would just be like a number, a score for each of my training image. The score could be negative if that image is somehow not informative or contains some misannotations or introduce some sort of outliers or bias to the model. So the model, if it's trained on that image, actually does worse. 
And the positive scores indicate that these are the images, the training points that are informative, such that when the model is trained on those images, they actually learn and do better in deployment. So actually try capturing some informative signals. So the Shapley scores can then be computed relatively efficiently on individual data instances. And this is actually quite useful also for improving the model's reliability because one thing that we can do now is to weight my training data by the Shapley scores. So a simple idea, after we compute the Shapley scores, a simple experiment that we can do is to just take the original model and just retrain the model on the same data set. Except now, I'm weighting each data point by their Shapley scores. So this has the effect of encouraging the model to pay more attention to data points that have high Shapley scores, which are, again, the data points that we believe are the more informative or have better annotations. And by doing this, this out here actually can substantially improve how well the model works in deployment settings. And the benefit of this approach is that it's quite data efficient because we just still have the same data set. We didn't have to go out and collect a new data set. We'll have the same data set and actually the same model architecture. The only difference now is that the same model architecture is now being trained on a weighted version of the data rather than the vanilla kind of training. OK, so those are two, I think, quite complementary ideas. Once we want to be much more transparent about where the data is coming from. And with the data Shapley scores, this helps us to understand how do the different types of data really quantitatively contribute to the model's behavior. And by understanding that contribution, this also gives us ways to quickly improve the model's performance. So the third lesson we learned, actually quite important, it's actually really useful to try to really understand why does the model make specific mistakes because, as a general message, if we want to ensure that the AI systems that we deploy are safe or responsible, it's actually really the mistakes that are the most revealing. By looking at the mistakes, we can actually understand what are the potential blind spots or the weaknesses or the limitations of the model. So we developed a bunch of algorithms that try to basically provide more high level natural language explanations for why does the model make specific mistakes as a way to teach us about blind spots of these machine learning models. So here's one example. So as I say, if you've actually put in this image, so the true label is zebra. But if you put in this image, all of the algorithms, some of them will make a mistake and predict this to be a crocodile rather than a zebra. And in this case, we'd like to understand why did that happen. And so the explanation we provide using this tool that we call conceptual explanation of mistakes, so it actually automatically generates a reason for why the model made the mistake in this context. So in this case, it's because there's actually too much water in the image. So in other words, if the image, the same image has less water and maybe more field, then it would actually have gotten-- the model could have gotten the correct prediction of zebra rather than crocodile. So you can view this conceptual-- this mistake explainer as like a wrapper around different computer vision AI systems. 
So it takes one of these AI systems and looks at the mistakes the models make. And then it provides this high-level natural language explanation for why did the model make that mistake on that input. So this is quite useful because then we can apply our AI explainer, this mistake explainer to also try to explain why did some of these dermatology models make mistakes on these different users, on these different patient images. So here are actually four example inputs where the original dermatology AI classifier made wrong predictions. So the correct diagnosis, the correct label is written on top, and what model's predicted is written on the second line. And in each of these examples our mistake explainer automatically provides our reasoning of why the model actually made that mistake. So for example, in the first example, so we learned that the reason why this model actually made the wrong prediction here is actually directly because of the skin tone. So it's because of the darkness of the skin tone. So in other words, if the skin actually had been lighter, all else being equal, then the model would have actually gotten the correct prediction. In the second example, the explainer learned that it's really because of the blur in the image that led the model to make that mistake. And the same image had been sharper, the explainer learned that then the model would have actually gotten the correct prediction, which is on top. So in the third example it's because of the zoom. So it turns out that it's actually too zoomed out. And that's really the reason why the classifier made those mistakes. And the fourth example is because there's too much hair in the image. And just by actually understanding why the models made these mistakes in each of these specific instances, this actually gives us quite a lot of insights into potential limitations and blindspots of the model. Here, we really learned that potentially it doesn't really work well on dark skins. It really needs to have pretty crisp images. It can't be blurry, and there's a certain level of zoom that it needs in order to make these diagnosis. And also, if there's too much hair in the image, then the models doesn't really work well. And these insights are actually pretty actionable, right? For example, you can then take these insights and then as a guideline to help the users to improve their image quality. So maybe you actually tell the human users like we did with TrueImage, you want to zoom in more, or you want to take sharper images. It could also give us insights on if we want to collect additional data, what additional data should we collect in order to improve the model's behavior across these different-- and improve their weaknesses. Maybe we want to collect more diverse images across different skin tones. We also want to collect more training data with more hair in it. So the last point, what also ties all of this together, is that we really recommend that we need to have much more human loop analysis and testing of these AI models because if you think about how machine learning is often developed, I think it's often optimizing for the wrong objectives because most of the time you have a data set that's fixed, a static data set. An algorithm is optimized with SGD try to optimize for its performance on that static data set. But that's actually not what you really care about. 
It's what you really care about is actually how well the algorithm really works in the real-world applications, which oftentimes it's not really in isolation but with some team of human users. Especially in health care setting, most of these algorithms that happen here, they're not just working in isolation. But there's often some clinician who takes the recommendation from the algorithm and makes some final decisions. So in the ideal setting what you would really like to optimize over is maybe to an SGD to optimize the model's performance directly for their final usage rather than on their accuracy on the static benchmark data set. But that's actually challenging to do. So to address this challenge, we developed these platforms called gradio, which actually makes it very easy to collect real-time feedback from users at all sorts of different stages of model development. So basically, with one line of Python code, we can basically create a wrapper around any machine learning model. And this wrapper also creates an interactive user interface, which can then be shared, or I guess, a URL with any user. So if they open up that URL, then they can just interact with the model in real time on their browser without having to download any data or having to download or write any code. And by doing this that makes it very easy even for noncomputer scientists to be able to interactively engage with the model, right, to test it out, and provide feedback of the sort that we discussed before. OK. So gradio is actually now-- was recently acquired by Hugging Face, but it's still public. And it's also being used by basically all of the larger major tech companies and many thousands of machine learning teams. It's also what we use to power some of our own deployments here at Stanford. So just to summarize, I think these are the four main key lessons that we learned from our experiences in applying, in building, and deploying, and auditing these models. And I talked a lot here about applications in the health care settings. But I think many of these lessons and applications also apply more broadly in other domains where machine learning is being used. And all of the papers and the algorithms I mentioned are all available on my website. And here, again, are the different references. And I also want to thank the students that led all of these works. So maybe we'll pass here and then see if people have any more questions. [INAUDIBLE] in like data capture, how does that work? You like [INAUDIBLE]. Yeah, yeah, good. So the high-level idea is that we want to estimate the impact of individual data points. And we do this by basically adding and moving this data point across different contexts. So in each context I basically have a different subset of my data, and I say, OK, so what's the impact of adding this particular data point to that subset? If I add this point, does that improve my model's performance after adding a comparator before adding it? And we do this basically across a lot of different scenarios. Each scenario corresponds to a different random subset of my training data. And the reason why we do this across many scenarios is to really capture potential different interactions between different data points. And then we finally aggregate across both these different scenarios to get one single score, data Shapley score for each individual training point. 
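A minimal sketch of that permutation-sampling idea for estimating data Shapley values; `fit` and `score` are placeholder callables you would supply, and this illustrates the Monte Carlo scheme just described rather than the optimized estimators from the paper.

```python
import numpy as np

def mc_data_shapley(n_points, fit, score, n_rounds=100, seed=0):
    """Monte Carlo estimate of data Shapley values over n_points training points.

    fit(indices) trains a model on that subset of the training data (assumed
    to handle the empty subset, e.g. by returning a baseline predictor), and
    score(model) evaluates the trained model on a held-out test set.
    """
    rng = np.random.default_rng(seed)
    values = np.zeros(n_points)
    for _ in range(n_rounds):                # each round is one random "scenario"
        perm = rng.permutation(n_points)
        prev = score(fit(perm[:0]))          # empty-set baseline performance
        for k, i in enumerate(perm):
            cur = score(fit(perm[:k + 1]))   # add point i to the growing subset
            values[i] += cur - prev          # marginal contribution of point i
            prev = cur
    return values / n_rounds                 # aggregate across scenarios
```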
In essence, you have to retrain the model multiple times with different subsets of the data and then evaluate them on your test set to see how they impact the [INAUDIBLE]. Yeah, so in principle, if you want to do this exactly, then you need to retrain the model many times. So we actually came up with a bunch of different, more efficient algorithms that enable us to estimate the Shapley scores without having to retrain the models. So for example, in some of the applications we can actually come up with analytic mathematical formulas to either exactly or approximately compute the Shapley values without having to retrain the models. Yes. Hi. Is this related to the cooperative game theory concept of Shapley value? That's right. Yeah, so the original ideas for these kinds of Shapley values actually came from economics, from game theory, where people are interested in ideas of how to allocate credit among different users or among different participants in a game. So imagine if all of us, if we do a course project and we get some bonus. How do you split that bonus among each of the individual participants so that people don't complain? The people are happy with their bonus. So it's developed in that context of more like game theory credit allocation. And we extended that idea to the data. So now, instead of having individual workers or participants, everybody brings their own data sets. So the data sets work together to train the machine learning model. Basically, the performance now is the bonus. It's how well does the model perform? So then we can see, how do we allocate or attribute the performance of the model across individual data sources or individual data sets. Cool. Any other questions? Yeah. For the conceptual explanation of mistakes, the labels that it tells you, are those pre-provided by the team? Or are those actually generated by the model? Yeah, it's a good question. So what we have is basically a library of concepts, what we call a concept bank. So basically, we can look at all sorts of common visual concepts, like the concept of water or the concept of stripes or the concept of zoom and color, right? So those are all big concepts. And we create a pretty large library of hundreds of these concepts. Then that's basically the input into the explainer, and the explainer tries to see which of these concepts would actually be able to explain the model's mistakes. In cases where the concepts that we provide are not complete-- maybe there's some texture information that actually leads to a mistake but it's not in our concept bank-- we have some additional ways to try to automatically learn concepts in a more unsupervised way directly from the data. But most of the concepts we use are actually from these large concept banks that we can just learn ahead of time. Okay. Great. Yeah, then we can wrap up here. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_PCAICA_I_2022_I_Lecture_15.txt | So welcome to our continuing work on understanding unsupervised learning. We're going through two kind of Stalwart algorithms, which are pretty fun today. We're going to finish out PCA, which is this old standby. We use it for dimensionality reduction, sometimes visualization. It's kind of a core canonical unsupervised algorithm. And we'll talk about that today and I'll try and refresh and be a little bit complete there because I know we got cut off kind of in the middle. If we have time-- then we're going to go through ICA, we will have time for ICA. This is a fun problem. This is this cocktail problem. You'll implement it actually. And it's one of the ones that's a highlight every year. People talk about what are the favorite homework problems they did. And the reason we're going to go through it is it's a place where you make a different assumption about how the noise works. It turns out if you make the kind of traditional Gaussian assumption here, which we've been making through class, it breaks. So it's just nice to have an exposure, like there's more interesting things to do. And it's also kind of a fun problem. If we finish those two and there's some chance we do, but if we ask lots of questions, that's actually even better. I'll show you some stuff about weak supervision and I have some slides prepared for that, which are not in the main deck, but if we get to that, I'll post the slides. Weak supervision, the reason I would want to talk to you about it is it's a more modern technique. It's behind some products that you've actually used in the last while. And it's one of these things that looks like an EM-style model, but you can solve it in a very direct way. And since we're not having another exam in the course, you won't be responsible for it. It's a little bit more advanced. And so I can talk to you about how that works without hopefully freaking you out. OK. Cool. So that's what I want to do today. So let's talk about PCA. All right. So PCA, if you recall, we were looking at these unstructured things. I'm just going to run through some of where we were last time. We're looking at the structure. And our structure is a subspace. And it's a non-probabilistic method. So the two things are we wanted some structure that we were looking for underneath the covers. Remember GMM and K.Means, there's a clustering kind of structure. Here, we want to look for a linear subspace. And we wanted the non-probabilistic version of that. We looked at this kind of contrived example where we had pairs of highway and city miles per gallon. And we were plotting cars. And we kind of intuitively would guess, like there are hybrids up here, there are SUVs down here and trucks maybe, and economy cars in the middle. And we wanted to understand, what is a notion of good miles per gallon that took into account both highway and city? And this was a way of motivating effectively that what we wanted to do was to draw a line along the direction that explained most of the variance. That was our intuition most of the ways that they varied inside the data set that was descriptive. So the way we did that is our first mathematical thing is that we centered the data. Recall, we took the data, we subtracted off the mean and literally, last time I just copy-pasted this data and centered it at approximately the center of the data, which is mu. And we transformed it. 
And the reason we transformed it is we're using a linear mapping. That's what we want to do. And linear spaces go through the origin. So this just lets us use linear spaces and have one fewer degree of freedom when we map them. It's not important really to your conceptual understanding, although you should think about it. But it's really critical if you run these methods. If they're askew, there will be no line through the origin that potentially goes through your data. Once we did this, I want to remind you of some things that we're going to use from linear algebra. We had this idea around, we're going to prove formally what we mean by the direction of maximal variation. And we thought that this line, intuitively, would be that variation. That is, when we projected things on the line, either we had them maximally spread out, that means we captured most of the variation, that's actually what that means, or we saw this dual thing in a second that the residual, the amount of error in our prediction on that line was small. And we'll talk about that again in a second. If you don't remember your linear algebra, remember that you can write once you have an orthogonal basis, you can write any point in that space. In that orthogonal basis. Namely, every one of the vectors that are orthogonal, U1, U2 in this example and their distances along those lines. We're going to recall the math that does that. That gives you a new set of coordinates for your data. If your data were it was given to you in 1,000 dimensions, you could run PCA underneath it and find the direction of maximal variation and maybe its second and third components. And then you could take that which are 1,000 numbers that you were given and compress it down to 3. And that's what we mean by dimensionality reduction. What are the three best numbers, if you like in some sense, that represent your data set? Now just remember that fact that every point can be written in the basis. And really, it's like the coordinates of how far do I go along U1, then I'm going to go along the U2 direction and get to x1. This is alpha 1 and this is alpha 2, as we'll see below from last time. Now the convention is that you U1 and U2 are unit vectors because we only care about their direction. We don't care about their length or their magnitude. And this is just what you do there. And here, u1 was how good is the miles per gallon? And u2 was the difference between their highway and city miles per gallon, roughly. Those aren't formal statements, that's just to give you an intuition of what this basis looks like. So you can have an intuitive feel of what you're doing. You want one direction, which is the principal component of variation. And if you had 1,000 different points, you would then look at the other data sets and say, "what other direction is the second principal component, third principal component?" And if you remember your linear algebra, that is just how you write down a basis. That's all I'm describing. But we have to order those bases in some way. We have to figure out which components do we pick first and among all of them? And that's what we're going to solve in PCA, right? OK. So this is all just saying, x can be written in this form. And we may here, compress from dimension 2 to dimension 1. What that would mean was we would probably just keep the number alpha 1. That would just be, we would treat x as its projection onto this line and just treat it as alpha u1, OK? And that's what we mean by "explains more variation". OK. 
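A small NumPy sketch of this "new coordinates" idea, with an illustrative orthonormal basis for R^2 (these particular vectors and the data point are our own example, not anything from the slides):

```python
import numpy as np

# An orthonormal basis for R^2: u1 along a "good mpg" direction, u2
# perpendicular to it.
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([3.0, 1.0])            # a centered data point
alpha1, alpha2 = x @ u1, x @ u2     # coordinates in the new basis
assert np.allclose(alpha1 * u1 + alpha2 * u2, x)   # x is exactly recovered

x_compressed = alpha1 * u1          # keep only the top component: 2 numbers -> 1
print(alpha1, alpha2, x_compressed)
```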
So we're going to find these directions today with some caveats, and we're going to think about going from thousands of dimensions to tens of dimensions. And this will be a dimensionality reduction because we're going to only keep those components. So before I move on, there's a lot of linear algebra in there. Ask me questions and I'm super happy to unpack this because we're going to assume this information-- we worked through it last time-- going forward, but I'm super happy to answer questions and derive whatever's here that's unclear. And if you have the question, almost certainly somebody else does. So please, go ahead and ask. Please. What if these alphas, which are basically the coefficients, are negative? So they can be negative, right? So for example, in this direction, u1 is going this way. If you were to have a negative, it would mean that it was in the negative direction of that. These are signed. So they can certainly be negative. They're not positive scalars here. That's why we use their squares when we measure the residual: because they can be positive or negative coefficients, just like you can have something whose first component is positive or negative. And positive tells you to go this way and then negative tells you to go that way. So are all of these coordinates based on the squares of the [INAUDIBLE]? We don't know yet, right? So what we're going to do is we're going to try and find-- so all we know at this point, which we're trying to make sure we get to, is that we can take an x, and we can write it down in a basis. So x has its original form that it's given to us, in Rn, some large space. We can write it in new numbers, alpha i's for these ui's. And this is a different basis for the same underlying space. And then what we're going to try and do is say, "how do we pick a good set of ui's to represent this?" And what I'm trying to hint at is that we don't have to pick a complete space. We don't have to pick n orthogonal basis vectors; we're going to pick the top K, in some sense. And that top K, as we hope, captures the most variation. And we're going to make that precise in the next couple of write-ups. Awesome questions. Please, any others? Just this last question, top K like you will tell us-- Yeah, we haven't gotten there. We haven't gotten there yet. Wonderful point. So that's exactly what I mean. We're going to say, how do we find those directions, OK? And last time we talked about this preprocessing; we're going to assume this for the rest of time. I would just say at some point, reflect on this. We have to center the data; that's because we're going to look through lines that go through the origin. We want a linear subspace. So it makes sense that we would want our data to have a chance for these lines to explain the maximal variation. That's why we center. That's important. We will also re-scale the components so that they're on about the same scale. So imagine if one of your components was miles per gallon and another one was feet per gallon. The feet per gallon numbers would be about 5,000 times larger-- they would be at vastly different scales, right? And if you go through the calculations below, because you're taking squares and doing things, because they're at different scales, that makes them unreasonably important, right? So one of the things that you'll typically do is then scale by the spread in each direction, so that the components are centered and have comparable variance.
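As a concrete companion, here is a minimal NumPy sketch of that preprocessing (the data and the function name are invented for illustration): center each column, then divide by its standard deviation so the components end up on comparable scales.

```python
import numpy as np

def standardize(X):
    """Center each column and scale it to unit variance.

    X is an (n, d) array: n data points, d features.
    Returns the preprocessed data along with mu and sigma,
    so new points can be mapped the same way.
    """
    mu = X.mean(axis=0)        # per-feature mean
    sigma = X.std(axis=0)      # per-feature standard deviation
    return (X - mu) / sigma, mu, sigma

# Example: highway / city mpg style data.
X = np.array([[30.0, 22.0], [45.0, 40.0], [18.0, 14.0]])
X_std, mu, sigma = standardize(X)
print(X_std.mean(axis=0))  # ~0 in each column
print(X_std.std(axis=0))   # ~1 in each column
```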
That's just the other piece of preprocessing that goes on, OK? All right. So now we've done that. We've done those pieces. We need a couple of bits of mathematics just to remind yourself of what it means to find these kinds of components. So here, we have two components, u1 and u2. And they're unit vectors, as we talked about before. And they're orthogonal. That means their dot product is equal to zero; they're perpendicular to one another. Now what we want to find-- one of the things that we have to find, kind of an intellectual subroutine of what we have to do-- is the coordinate alpha 1 on this line. It makes sense that it's the closest point actually on this line to x. That would be the point that we would project it onto. That's what projection means, actually. It's the closest point on the line in this sense in Euclidean geometry. So what is this line? This is the set of all t times u1. So that's any scalar, positive or negative, that was pointed out, that traces out this entire line. So basically, our optimization problem to find this component, alpha 1, is to find the t that gets me to this closest point. Now geometrically, it will hopefully be intuitive that the residual should be perpendicular to this line. This line here will be perpendicular. And the reason is the rest will be explained by u2. So let's find out how we find the closest point on the line. And I'm going to write it out now. So how do we do that? Just to remind ourselves. So how do we do that? So alpha 1 is going to be equal to the ArgMin-- this is just rewriting what I said-- over all alpha, of x minus alpha u1, squared. The square just makes our life a little bit easier mathematically. Squares of norms are just easier to work with; it doesn't really make any difference in the underlying piece. But this is just saying, among all the alphas-- oops-- among all the alphas, which are running up and down this line, I want the one that has the closest distance to x. That's what I'm calling the projection. So what does that equal? Well, this is the same as doing the ArgMin-- let me write it on the next line actually. The ArgMin-- and I'm just going to expand this norm out. So it's the norm of x squared, plus alpha squared times the norm of u1 squared, minus 2 alpha times x dot u1, OK? And this is-- I'm just writing a dot product in a notation. Hopefully not too confusing; this is just the dot product. Just to make it clear without having little tiny dots to look at. So a couple of things right away. This term is one because it's a unit vector. It's a unit vector. And this term is irrelevant for alpha. Alpha's value here doesn't change the value of x; x is given. So this is equivalent for us when we take the derivative-- we take the derivative with respect to alpha of this expression. This expression looks to us like alpha squared minus 2 alpha times x dot u1. And when we take the derivative with respect to alpha of this thing, that equals 2 alpha, which is from the first term, minus 2 times x dot u1. Now we set this equal to 0. This implies alpha equals x dotted into u1. OK. That's all that's saying. So this is just saying something you may have forgotten from linear algebra or you're now remembering, which is that the dot product with a unit vector is actually a projection. So far, so good? All right. Now one piece here is that we can generalize to higher dimensions. Do more components. And it's worth actually thinking about what this looks like, right?
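Before generalizing, here is the one-dimensional version as a quick numerical sketch (the vectors are made up): compute alpha as x dotted into the unit vector, and check against a brute-force search over candidate alphas.

```python
import numpy as np

x = np.array([3.0, 1.0])
u1 = np.array([1.0, 1.0]) / np.sqrt(2.0)    # unit vector

alpha = x @ u1                               # the projection coefficient
# Compare against a brute-force scan over candidate alphas.
candidates = np.linspace(-10, 10, 10001)
dists = [np.sum((x - t * u1) ** 2) for t in candidates]
best = candidates[int(np.argmin(dists))]
print(alpha, best)   # both ~2.828, i.e. (3 + 1) / sqrt(2)
```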
So the point is, when we write this down, we're going to have here u1 to uK, for some value of K-- this is just an exercise for an arbitrary value of K-- elements of Rd. Some x that's also living in Rd. And then here, we're going to calculate coordinates alpha so that the norm of x minus the sum, k equals 1 to K, of alpha k uk is smallest. Clear enough? This is finding the closest point in the subspace. Instead of a line, we're finding the closest point in the subspace. Hopefully, clear and you remember this. If not, please ask a question. Super easy to explain. So by the same basic reasoning, you compute the derivative. You unpack it. You have to use the fact that the uk's are orthogonal. Why? When you expand the squares, right, you're going to get products of uk dotted into uj and those will drop out. So you'll basically have just a bunch of expressions that are present in which alpha k is going to be x dotted into uk. And this is only because they're orthogonal. Only because orthogonal. The uk are orthogonal. And you can do that derivative very, very, very quickly. Now this quantity here is important. That's a terrible highlight. Let's use this one. This thing here is called the residual. All right. This is the residual. And what we care about when we're going to do PCA is, we want to find a set of directions, such that when we do this projection onto them, the sum of all the residuals is minimized. OK? So this tells us the residual for a given point in a given basis. And now I have an optimization problem, right? And I'll write it out formally in a second, but intuitively, you can think about-- what's going to happen is I'm going to pick K of these u's that are going to be orthogonal. There are many of them that I could pick, many orthogonal bases that I could pick. I pick one of them. Then I project all of my data onto that set. I measure how well I did by either how much I captured in the data set or by how much was missing, which is the residual. And this is the residual. I then have that per point. So I sum up over all of those. And this gives me a score of how good that basis was. And among all the bases, I want to pick the one that minimizes the residual or maximizes the projected variance. So let me write some of that down. And then you ask questions if you like. So we can find PCA by two things. By the way, this is not-- this seems trivial, potentially-- that maximizing the projected variance and minimizing the residual distance are the same. It's actually not true in general for other geometries. So it's actually quite a nice thing about Euclidean space. That doesn't matter to you, but just a comment. Minimize residual. So let's do this one. This is the one we're going to do in class. Maximize the projected variance. So let's do it for one vector. So now we want to pick among all the possible u's: what's the particular setting of u that explains the most about our data? So what that's going to be, from our previous discussion, is this. Max over u in Rd, subject to the norm of u equals 1, of the average over all the points-- although of course, such a constant doesn't really matter because we're maximizing, but it's nice to have the right scale-- of u dot xi, squared. So what we're saying here, we pick a direction and along this direction we want to get the largest dot products. This is xi projected onto u. These are the alphas. The sum of those alpha i squareds, we want to be as big as possible. So we want to pick the direction among all the directions.
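As an aside on the residual just defined, here is a hedged sketch for a general orthonormal set (the basis and point below are invented): the coordinates are the dot products, and the residual is whatever the subspace misses.

```python
import numpy as np

def project(x, U):
    """Project x onto the span of the orthonormal columns of U.

    U is (d, k) with orthonormal columns u_1..u_k.
    Returns the coordinates alpha and the residual vector.
    """
    alpha = U.T @ x          # alpha_k = x . u_k (valid because U is orthonormal)
    x_hat = U @ alpha        # reconstruction inside the subspace
    return alpha, x - x_hat  # residual = the part of x the subspace misses

# Orthonormal basis for a 2-D subspace of R^3.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
x = np.array([2.0, -1.0, 5.0])
alpha, r = project(x, U)
print(alpha)            # [ 2. -1.]
print(np.sum(r ** 2))   # squared residual: 25.0
```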
So imagine, in 2D, you're just kind of spinning around and you're judging how great the subspace is by maximizing how much is present. We need a couple of facts to solve this. Hopefully, the point is clear. We need some facts to solve this. OK. So first fact we need to recall is, let A be a symmetric and square matrix, which kind of makes sense. Then it's a normal matrix. And in particular, it can be written like this: U lambda U transpose, where U U transpose equals I-- the basis is orthogonal-- and lambda is diagonal. Not all matrices are diagonalizable. Not all matrices are orthogonally diagonalizable. But if it's symmetric and square-- it's called normal-- then it has this. And lambda has a nice interpretation. Lambda ii, since it's a diagonal matrix, equals lambda i. And we call these the eigenvalues. They're real, and by convention we order them this way just because it's nice to talk about lambda 1 being the big one, lambda n being the small one. OK? Now if you don't remember your linear algebra, maybe this doesn't seem mysterious to you. But if you think about the underlying model, there's no reason in general that these things should even be real valued. They could be complex valued in general. But if the matrix is symmetric and nice, then this happens. They're real and you can order them, which is really nice. So we're going to use that fact. And if it's confusing to you to remember what happens when you diagonalize over the complex plane, don't worry about it at all. Just take this as a fact. OK? So these characters here, as I mentioned, these are the eigenvalues. All right. So recall if x equals the sum, k equals 1 to n, of alpha k uk, where u1 through un are the columns of U, we can write the following thing. We can write A x is equal to U lambda U transpose x. This is equal to U lambda times the sum, k equals 1 to n, of alpha k ek-- sorry, let me write it like this. Yeah, I do want to write it that way. OK. Perfect. Alpha k, ek. What's gone on here? So all that's going on here is, when I dot product uk into one of these, which one survives? Exactly uk, right? If it's different, if it's uj that's different, then they're going to contribute 0. So this becomes alpha k in the standard basis. This is a standard basis. If that's confusing, ask a question. What's going on here, again: x is written in this form. What happens when I multiply U transpose by x? It multiplies it by each of the ui's; only one of them survives. For the k-th term, only the k-th one survives because otherwise, they would be 0. So I get this. Then when I multiply it by the diagonal matrix, I get U times the sum, k equals 1 to n, of lambda k alpha k ek. And then I get back to the sum, k equals 1 to n-- I multiply by U again-- of lambda k alpha k uk. Because again, when I multiply ek, the basis vector-- this is the vector that only has a 1 in the k-th position-- by U, it selects out uk and I get it back. So what this means is-- this is a long-winded way of saying-- if I multiply by A in this basis, all it does is scale each coefficient by the corresponding eigenvalue. This is an eigendecomposition, if you remember it. That's all that's going on. Please? How do we [INAUDIBLE] U-- like, how do we generate a [INAUDIBLE] so that we get all of these lines properly where we can get all of the U's? Because once we have the U's, then you can use-- that is the residual-minimizing piece here. Yeah, so I think-- we haven't gotten back to PCA yet. We need one more fact. Hold on just one minute and we'll come back to that.
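A small sketch of this fact in NumPy (the matrix is an arbitrary symmetric example): np.linalg.eigh handles symmetric matrices, returns real eigenvalues, and we can check A = U Lambda U transpose directly.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, so it diagonalizes
lam, U = np.linalg.eigh(A)                # eigh: for symmetric matrices
# eigh returns eigenvalues in ascending order; flip both to match
# the lecture convention lambda_1 >= lambda_2 >= ...
lam, U = lam[::-1], U[:, ::-1]
print(lam)                                       # [3. 1.] -- real, ordered
print(np.allclose(U @ np.diag(lam) @ U.T, A))    # True: A = U Lambda U^T
print(np.allclose(U.T @ U, np.eye(2)))           # True: U is orthogonal
```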
We're just recalling facts from your linear algebra class. But it's a great point. You should be thinking exactly that, right? So let me just point out one fact here, which is that if I take the max over all unit vectors x of x transpose A x, it can also be written-- and this is the formula we've been kind of hinting at-- as the sum, k equals 1 to n, of alpha k squared lambda k. OK? So now if you think about this, how do you find the principal eigenvalue? How do you find the largest way to express this? Well, since we only get to spend our alpha squareds along the components and we only have one unit to spend, what maximizes this expression? Well, we want to put as much value-- go ahead. Where do we get the alpha squared? Because when we multiply here by x, A times x is going to be equal to this expression. And then when we dot x in again, we get the alpha squares, because they pair up with one another. I apologize for going too fast. Great question. So we have the alpha squares times the lambda k's. Now the lambda k's-- let's imagine they're 1 to 10. Where would you put all your mass if you wanted to maximize this? Well, on the largest one, right? So how do you maximize the amount of mass you put on the largest one? You put, in our notation, alpha 1 equals 1. Because the rest are all then equal to 0, this is going to be the largest value. And that's the principal eigenvector. Imagine they're strictly different. Does that make sense? Please. [INAUDIBLE] highlighted, when you multiplied, what did you do with that summation [INAUDIBLE] X? Yeah. Where, here? [INAUDIBLE] you have to do [INAUDIBLE]. This thing? Yeah. So how does it suddenly become [INAUDIBLE]? This one is just the fact that it's diagonal. So ek times a diagonal is just the k-th unit vector times that value. So that pulls out lambda k, which is on the diagonal. And then it will give you just [INAUDIBLE] the summation form of uk? Yeah. And then when you have a unit vector multiplied by a matrix, it just selects out that column. Yeah. Awesome. But is this clear? If I want to maximize this, I set alpha 1 equal to 1, because we know lambda 1 is the biggest. Now what if lambda 1 equals lambda 2? That is, there are two equal eigenvalues present. Then it turns out there's an entire subspace of solutions. I could pick any alpha 1 and alpha 2 such that their squares sum to 1, anywhere on that circle. And if lambda 1 equals lambda 2 equals lambda 3, now I can pick anywhere in a subspace of size 3. That's going to be important when we think about how well defined PCA is, because it's only well defined what the principal component of variation is if lambda 1 is strictly bigger than lambda 2. If there's no gap, then the coordinates aren't well defined anymore. I can pick anything I just described. Does that make sense? Please. Why is alpha equal to one? So alpha 1 is equal to 1. And the reason is-- so this constraint, this norm constraint, means that I have to pick among all the alphas such that their squares sum to 1. So among all the ones whose squares sum to 1, which one's going to give me the biggest value? Intuitively, I want to put all my mass on lambda 1 because lambda 1 is the biggest value, right? So in that setting, I would set alpha 1 equal to 1 and all the alpha k equal to 0 for k greater than 1. That would be the value that I would pick there because that one's guaranteed. I can't really do better than that.
If I slide off even an epsilon amount of mass, well then it's going to a smaller eigenvalue, and because I lost that mass, I would only get epsilon squared times that. You can also just compute the derivative using a Lagrangian if the intuitive thing doesn't make sense, which we did two lectures ago. Cool. So let's go back to PCA and say, exactly where do we get this u? The thing I wanted to point out is here-- I'm going to come back to this point about what happens with lambda 1 and lambda 2. If you missed it, don't worry. I just care that you're aware what we're doing in the maximization. Now back to PCA. So recall, I'll just go up here. Sorry, I'll copy because I'm extremely lazy. Where is our PCA? Where did we do this? Oh, here. I just want to make sure you realize I'm not doing something strange and changing the expression. This is what we wanted to deal with. Well, this expression here, we can rewrite. And we can rewrite it as the sum, i goes from 1 to n, of u transpose xi times xi transpose u. Let me drag that and give myself a little bit more space here so it's not crowding you too much. So this is equal. These two expressions are equal. I'm just expanding out the square. But this is pretty nice because now I can pull out the u's-- oh, sorry. I wrote it backwards. I didn't want to do this. Let me write it the other way. That'll just make my life easier in the next move. I'm so sorry about that. Sorry, that was foolish. It's true, but foolish. OK. So now I can pull out the u's because they're on the outside. This is u transpose, times the sum, i equals 1 to n, of xi xi transpose, times u. Why is that? Because this is linear. So I can pull this out of the sum. And all of these things are paired up. This thing here is a quantity that you may remember from before. This is the covariance of your data. Why is it the covariance? Because you subtracted off the mean. This is the sample covariance. And so here, I can push the 1 over n inside, and then it's really the sample covariance. This thing here. So then what will u be if I want to maximize this? Well, it would be the principal eigenvector. This is an eigenvalue problem; just as we went through up there, it's now of this form. We can verify, is it symmetric? Yeah. It's symmetric. It's a sum of symmetric matrices-- each xi xi transpose is symmetric. So the covariance is symmetric. It is actually really a covariance matrix. It also happens to be positive semi-definite. That means all the lambdas are non-negative. So that's good news. We didn't need that, but it's nice to have. And now when we look at the u's as we go through there, we have to pick a direction. Which direction do we pick? Well, we're going to pick the direction that corresponds to lambda 1, which is u1, right-- when we do the decomposition here, it's the principal eigenvector of this thing. So the best u to pick is the principal eigenvector of the covariance. Some of you are nodding. Some of you look like I said something horrible. So please, both ask questions. Is this the A matrix that I was asking about? Yeah. This is a covariance. We're going to sub this covariance matrix in for the A. Yeah. Awesome. What happens if we want two components? What do you think we pick? Anyone? The first two. The first two that correspond to the largest two eigenvalues. Three, the largest three. If the first two are equal, lambda 1 equals lambda 2, and you want one direction, you have multiple potential representations for it.
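Collecting the recipe so far-- center, form the sample covariance, take the top eigenvectors, then project-- here is a hedged NumPy sketch on synthetic data whose true direction we know. The function name and data are my own illustration, not code from the course.

```python
import numpy as np

def pca(X, k):
    """Top-k principal directions of data X, shape (n, d)."""
    Xc = X - X.mean(axis=0)            # center, as discussed
    C = (Xc.T @ Xc) / len(Xc)          # sample covariance, (d, d)
    lam, U = np.linalg.eigh(C)         # symmetric -> eigh
    order = np.argsort(lam)[::-1]      # sort eigenvalues descending
    return lam[order][:k], U[:, order][:, :k]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1)) @ np.array([[2.0, 1.0]])  # data near a line
X += 0.1 * rng.normal(size=(500, 2))                    # small noise
lam, U = pca(X, k=1)
print(U[:, 0])                 # ~ +/- [2, 1] / sqrt(5): the line's direction
Z = (X - X.mean(axis=0)) @ U   # each point reduced to its k coordinates
print(Z.shape)                 # (500, 1)
```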
But you can still pick a component of principal variation. And that's the way in which PCA is potentially undefined. Cool. That's all it is. So how do we represent data with this, just to make sure it's clear? Well, we map xi to-- let's say we picked K dimensions-- the coordinates xi dotted into uj, for j from 1 to K. Namely, these are its coordinates in the top K eigenvectors that we picked. On average, this captures the most variance in our data. And we just keep-- each of these is a scalar. This is what we were calling alpha j earlier. We just keep these K scalars. So this map, what it does, it takes in 1,000 dimensions. Let's say I started with 1,000 dimensions. Then it's going to pick five. Which five? Well, they're going to be a blend of those. They're not going to be any individual dimension. They're going to be these eigenvectors of the underlying covariance. The reason we like them is they capture the most of our data. They throw away the least amount of our data, which is the other, residual interpretation. And this gives us a map that goes from R big D down to R little d, OK? And that's the sense in which this is a dimensionality reduction. And we can use that, for example, to take our data from 100 dimensions and project it onto two and visualize it, draw it on a map. Yeah, awesome. Please. Before we get [INAUDIBLE] on, [INAUDIBLE] can you give an estimate of what the small D will be here? Yeah, great question. So you want to know how do you pick K, or D here? Little d. So here, in contrast-- for the last three lectures, I've been telling you, "I have no idea how to pick K." Here, I at least have an idea. Oh, go ahead. Oh. I'm just confused, it's like, we just mapped everything into a U [INAUDIBLE]. I think I confused you with something. So let me make sure. Let me just call this K, just for now, to make this clear. K is less than what you're thinking of as D. How about that? Yeah. Yeah. So basically, we map everything into a subspace, but this subspace is still D-dimensional because of the lengths of the vectors, D is to-- No. That's a great point. So it's decomposed into D-dimensional vectors, but it now takes only K coordinates to represent it. So again, going back to the two dimensional picture. We have a two dimensional picture, but we projected everything onto the line. So now we can represent things by just their distance along the line, the alpha 1. So we've taken 2 scalars, which are, like, its x and y-coordinates, and reduced it to just one, which is the alpha 1. Does that make sense? Makes sense. Awesome. So let's see how we pick K here. How do we pick K? And this won't be super satisfying, but you know, whatever. It'll be fine. OK. So this actually does have some kind of trick. So what does this actually mean? So this is basically looking at the trace of A, which is equal to this sum on the bottom, the sum of the lambda i's. So if you remember your linear algebra. But basically what it's saying is, what I want to know is how much of the space am I explaining? So imagine I have 100 eigenvalues. Intuitively, if I get the top 10 of them, the worst case is that they explain 0.1 of my space. Meaning that the sum of those eigenvalues is about 0.1 of the total and I'm throwing away roughly 90% of my data. That's not exactly what it means, but just think of it intuitively. What this is saying is that traditionally, people will pick K here so that they explain a lot of their data. As a rough guide: if I pick 10 components of your data and do PCA on it, do I capture 90%, 95% of the variance? Then that means that was a good selection.
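A tiny sketch of that rule of thumb (the eigenvalues below are invented): compute the fraction of total variance explained by the top components and take the smallest K that clears a threshold.

```python
import numpy as np

lam = np.array([5.0, 3.0, 1.5, 0.4, 0.1])    # eigenvalues, already sorted
frac = np.cumsum(lam) / lam.sum()            # fraction of variance explained
print(frac)                                  # [0.5, 0.8, 0.95, 0.99, 1.0]
k = int(np.searchsorted(frac, 0.95)) + 1     # smallest k reaching 95%
print(k)                                     # 3
```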
And you can now compare K's based on how their eigenvalues are ordered. If by your fifth component the eigenvalue is down to 10 to the minus 10, you don't need it. Stop at the fourth instead. Just pick a basis of four. So now it gives you a way to actually start to compare them. And you can get error guarantees. You may worry computing these eigenvalues is super expensive because you have to compute like an SVD on some mega matrix. But you don't have to. And the reason is this is a trace. And if you remember your trace identities, you can just sum the values on the diagonal to get the trace, which is the sum of the eigenvalues. Please. So if you get a K that was like too large, and say that this quotient came out to be very, very high, like 0.999-- Right. Would you say that you're overfitting? Yeah, exactly right. It's like overfitting. You don't need to do it. So it's kind of like if you're keeping those extra pieces of information-- intuitively, you want the smallest K that gives you most of your data. So you can tack on more, but it's like, what's the additional value of doing it? Traditionally, as I said, you use PCA kind of as a visualization technique or to get some rough sense of the data. And so you try not to have K higher than 2 or 3. And it just provides a sanity check. Like, if you run PCA and the first two components of your eigenvalues account for almost nothing in your data, then it's not really clear what conclusions you can draw, right? The other place where it can cause you pain is the thing that I keep illustrating, which is that if lambda 1 and lambda 2 are equal to each other-- which on real data is actually very unlikely, but numerical issues could make them come together-- then the coordinates could be pretty fragile, right? So I run PCA once. And I get alpha 1, alpha 2, alpha 3 as my coordinates. But then I run it the next time, and because I chopped at 3 and the first four were equal, some new coefficient comes in, right? So really, those are the instabilities that you worry about the most. If your lambdas, or a bunch of lambdas, are really, really close to each other, then your coordinates aren't really preserved and anything inside the subspace goes. So that's the other problem with PCA. It really only makes sense when the spectrum is separated. And people don't usually check that. And as a result, they reach erroneous conclusions. So one problem is tacking on too many components, and the other is these fidelity issues in the representation. Those are the two main ones. You got it perfectly. OK. Awesome. Any other questions about PCA? Now you know it, you love it so much. [CHUCKLE] All right. Great. All right. Let's talk about ICA. So ICA sounds very similar to PCA. We only change one letter, but it has nothing to do with it. One has nothing to do with the other. So that's refreshing. But they have something about them that I really like. All right. So first, I'm going to tell you the high level story of ICA, which is this cocktail party story. High level story. Then some key facts. These key facts are useful because you will run into them in your homework. And I just want to highlight them. And then we'll talk about the model. And the model will be the least interesting part. So here's how it works. Here's the high level story. We have people-- and by the way, I think the homework is the world's most boring cocktail party. I think they count numbers to 10, or something. So it's not like you're going to hear some salacious gossip.
You're just going to hear people counting to 10. Some TA from, like, five years ago. All right? So anyway, you have people. Here are two people. They're happy. Those are people 1 and 2. We have microphones over here, mic 1, mic 2, OK? Now here's our problem. When the people speak, they don't speak directly into one microphone. They're just ambient microphones. And so what happens when this person compresses the sound waves is that, boom, it hits microphone one. But also it hits microphone two. Similarly, when person two speaks, they speak in some way. This is supposed to be the same wave, by the way. So the wave didn't change based on them, but it just took longer to get to the microphone. We're not going to worry about delay too much. But the point is, what microphone one and microphone two see is a mixture of the sounds. They don't just hear one person; they hear two speakers, simultaneously, superimposed on one another. And the goal is to take what we record at these microphones and recover the time series of what they actually said. And if we had the wave-- that's just the air pressure-- we could play it through a speaker and we would actually hear what person one and person two said. And you'll be able to do this. And it works, which is kind of wild. Now I do want to emphasize, having built things that look sort of like this: this is a naive way to look at the problem in the sense of what you would build if you were doing this in industry, but so what? It gets you the core principles. And you can look-- there are whole literatures about speaker identification. And in fact, there are certain companies that when they ship their products, they brag about how they can identify different speakers in the home. So as weird as this is, people ship products based on their ability to do this. OK? All right. Please ask questions if the setup doesn't make sense. We have data X. And we'll observe it over time. We're not going to assume things. We'll make this simple. The people aren't moving around in the room. They're not changing. Just to make our lives a little easier. So we need to look at what the actual data looks like. So as I mentioned, speaker one is going to be this time series that looks like this. OK? Now we don't actually see the whole continuous thing. What we see is we see speaker one. We don't even see this, by the way. This is-- we get samples at various regular intervals. That's the way we conceptualize the problem. And this is like how audio processing works, right? Let me just draw this and then I'll talk. These should be evenly spaced, if I could draw properly and had enough patience. The point is, what you see is that you just get these measures of intensity. So s sub t, j is speaker j's intensity at time t. OK. Now we want to recover this. If we had this, we're in good shape. If you think about how you actually record audio, kind of high end audio is usually recorded around 44 kilohertz, give or take. You can probably understand people at much lower rates. I don't know exactly where it breaks down. But my point is, this is actually how digital recording works. You sample it at a bunch of points, you get the intensities. And then you play it back through a speaker. All right. And then there's speaker two, which we also draw. And then just so it's clear, we sample them at a bunch of points too. Maybe speaker 2 is more loquacious, I don't know. S, 1, 2. Oops, those look terrible. I don't know why I'm going to fix this, but I am. And these are sampled at the same time points.
So my drawing is imperfect in many, many ways. S, t2, so on. Blah, blah, blah. And these are going to be sampled at the same time points, just to make our lives easier. So now, we don't get to see this, as I mentioned. We don't see S1 and S2's time series. We get to see the microphones. Only Xt1 and Xt2, which are sampled at the microphones. So far, so good? So we need a model of how we're going to do this. And of course, our model from what we described above is going to look something like this. What we observe at microphone j, at time t, is a mixture of something from speaker 1 and something from speaker 2. OK? So microphone j-- just to make sure it's clear here, mic j sees a mixture. And we're going to assume this mixture is fixed, just for the moment, right? If they're moving around the room, that's not true any longer. But we're going to assume it's fixed for the moment. OK? So I can write this compactly as, x of t equals A times s of t, right? Now what do we know here? This, we observe. This is the data. Both of these are latent. We don't know the mixture. We don't know the speaker intensities. OK? For the moment, I'm going to assume that the number of speakers and the number of microphones are the same. You can imagine, because there's a matrix here, that if I have only one microphone, this is going to be substantially harder-- actually impossible-- to do the reconstruction. But I'm going to take advantage of the fact that I have different mixtures at these microphones. OK? So assume I have a number of microphones the same as the number of speakers. OK? So is the setup, the high level story and setup, clear-- intuitively, what should happen? All right. Let me write some math. [INAUDIBLE] Sure. What would A [INAUDIBLE] do? A is actually the mixture. So A, right here-- this model says that what we see at time t at microphone j is some fixed mixture of what speaker 1 and speaker 2 said at time t. We're not modeling any delay or anything like that. So they make the sound, they go, "ah", and then it hits the microphone. And then the mixture of person one and person two's pressure hits the microphone at the same time. And what do we [INAUDIBLE]? They don't actually matter for now. The physical units don't matter in any way. But you can think about them as any unit of pressure that you want. A is unit-less. A is just a pure mixture. You can multiply and add things because the S's are the same type. Cool. All right. So let's make this a little bit more mathematically precise. So we're given x1 through xn, elements of Rd. And d is the number of mics and speakers. What we have to do is find s1 to sn, also elements of Rd. So I'm preferring to index the notation over time. We are also going to find-- although we really don't need it for the model-- this A that is d by d, such that x of t equals A s of t. Now if we estimated A, right, there's a pretty easy way to find what we wanted. If we knew A-- someone gave it to us-- this problem is trivial. Just take the x's, multiply by A inverse, and you have yourself the s's, right? Now the terminology is, we call A the mixing matrix, for the reasons I just outlined. A, the mixing matrix. And W equals A inverse, which we'll call the un-mixing matrix. So why would I introduce and bother you with this? It turns out that W is actually the right way to write a lot of the guarantees. And you'll see why in a second. OK? So we're going to write this as W is equal to w1 transpose down to wd transpose. And I'm just doing this so that I can write the following.
So that-- this is just one way of writing the inverse-- so that sj of t equals wj times x of t. Nothing happened here. I'm just giving you notation for how I think about the mixing, and then this inverse is going to be important. The inverse is kind of obviously important because, if we knew the A, we would multiply by its inverse on both sides and that would tell us the speakers that we were after. So W is what we need to multiply by. Now, the things that I actually find interesting about this model. So some caveats-- we talked about some of these. A does not vary with time. We're assuming that. If it did, we'd have to use something more complicated. So we're not going to do that. Two, this is more interesting to me. There is inherent ambiguity in this model. I like when there's inherent ambiguity-- when you can't tell two things apart-- because it forces you to understand what the model is doing. So one thing: we can't determine speaker one versus speaker two. We have no idea who speaker one and speaker two are in real life. So we can only determine their time series up to a permutation. Speaker ID is opaque to us. Run the algorithm twice, and the two orderings are indistinguishable. We don't know the labels. Of course, we can tell that there's one person who is saying numbers in English and one who's saying numbers in French. We just can't tell who's doing what in this model. The other thing, which is maybe a little bit more subtle, is we can't determine absolute intensity. And I'll just write the equation. And it's because we're multiplying two things together. So notice that if I take any constant and multiply it times A, and then divide S by that same constant, this is still equal to A s of t. So we can't tell how loud S1 and S2 are. We can tell relatively how loud they are. But the mixing matrix, we can multiply by a constant and it wouldn't change anything. The scaling would pass through. And because we're multiplying them in our framework, we can't determine this either. Now this has a surprising, surprising consequence. And this is kind of why I like to teach it, intellectually. Surprising. The speakers cannot be drawn from a Gaussian distribution. We're going to have to make some statistical assumption, but they cannot be Gaussian. Why? Suppose they were. xi would then be drawn from a normal with some mean and covariance A A transpose. But then as we saw before, if U transpose U equals I, then A U generates the same data. What does that mean? It means that any rotation of A-- A multiplied by U-- generates the same data. And the reason it happens is because the data here are rotationally invariant: their covariance matrix is A times A transpose, and that is not sensitive enough to tell all these rotations apart. The same reason we loved Gaussians-- because they were rotationally invariant-- means that, with that symmetry, we can't recover anything in this problem. And so at that point, you may think, "gosh, there's no way we're going to be able to do this. We have all these symmetries around the speakers and we can rotate the intensities in any way we want. What are we going to be able to do?" And it turns out, you can recover something here. And weirdly enough, as long as the distribution, roughly speaking, is not Gaussian and is not rotationally invariant, you can recover it. And that's kind of remarkable and it's worth thinking about. OK? Now the algorithm is going to be so trivial. The algorithm is just going to be gradient descent and MLE.
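Before setting up that likelihood, here is a tiny numerical sanity check of the mixing model (all numbers are invented for illustration): mix two non-Gaussian sources with a fixed A, and confirm that if an oracle handed us A, unmixing would be a one-line solve.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 2                      # n time samples, d speakers = d mics

# Two non-Gaussian "speakers": a uniform source and a spiky one.
S = np.vstack([rng.uniform(-1, 1, n),
               rng.laplace(0, 1, n)]).T     # (n, d), rows are s(t)

A = np.array([[1.0, 0.5],            # the (unknown) mixing matrix
              [0.4, 1.0]])
X = S @ A.T                          # each row: x(t) = A s(t)

W = np.linalg.inv(A)                 # the un-mixing matrix
S_rec = X @ W.T
print(np.allclose(S_rec, S))         # True: knowing A makes this trivial
```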
We're just going to set up a likelihood function and run everything that we've been running so far. That's it. Now we need one trick-- one quick, two-minute detour-- because this causes problems every time people look at this. It's just one little detour about how random variables behave under linear transformations. All right. And this is just-- the reason I'm doing this is it's a key confusion. If you remember-- I don't know, I'm not going to say what you should remember-- some calculus thing, just basic change of variable formulas for integrals, this will make sense. But we can draw it in pictures and you don't need to know that at all. So here we go. So just imagine I have something that's uniform on 0, 1. And now I have a new variable, U, which is equal to 2S. What is the PDF of U in terms of S? Now we're tempted to write p U of x equals p S of x over 2. Now, let's take a look at the PDFs. So here's the PDF of S. It goes from 0 to 1. And what's its height? Well, we integrate over the entire thing, it's 1, right? This is our PDF of U-- sorry, of S. Sorry, PDF of S. So this is the uniform. Now for U, what happens? We go from 0, here's 1, to 2. We know we have support from 0 to 2 because it's a uniform distribution, right? You grab a point here, you multiply it by 2 and it's going to sit somewhere in here. But what is its height? It's got to be 1/2. So this relationship is clear. So p S of x is going to be equal to 1 if x is an element of 0, 1; it's 0 otherwise. p U of x is going to be equal to p S of x over 2, times 1/2. And that's just the normalizing constant. So the key issue here is this normalizing constant. Normalizing constant. That's it. So when we do this in higher dimensions, we have p U of x and we want to map by some linear A. So that is like A U, for example. What happens? I'm sorry-- A S, that's the way we're writing this because I want to keep the notation. A S equals U. What happens in higher dimensions? Well, we still have p U of x is equal to p S of A inverse x. But we need a normalizing constant here. And how does it-- if you imagine I'm taking a box and I multiply it by A, a matrix A, how does the volume change? That's the determinant. That's all the determinant does. It takes a box, and the volume of it is going to be exactly proportional to the absolute value of the determinant. The determinant is signed because it's an oriented measure, but that's what you get. You can convince yourself in two dimensions pretty easily. In higher dimensions, it actually requires a little bit of work potentially to do it. And this you probably learned as your change of volume integrals formula at some point. Times the determinant of W. Now the thing that I used here was the fact that 1 over the determinant of A is equal to the determinant of A inverse when I did that. OK? So the point is, when you do change of variables, you have to take this determinant into account. You probably did this with a Jacobian at some point in your life. And if you didn't, don't worry about it. It's not that big a deal. It just says, if I take a box and I map it by a linear transformation, what's the volume of the box? Going back to this uniform case. And then if you care about how you would probably prove this, you just think about integrals. You break it into tiny little boxes. That's it. Whatever space you're integrating. We did it for one box, but you can do it for many. OK.
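A quick numerical check of the uniform example (sample size and bin count are arbitrary): the histogram of U = 2S should sit at height 1/2 across [0, 2], which is exactly the p_S(x/2) times 1/2 formula.

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.uniform(0.0, 1.0, 100_000)   # S ~ Uniform(0, 1): density 1 on [0, 1]
u = 2.0 * s                          # U = 2S: support stretches to [0, 2]

# Empirical density of U via a normalized histogram.
hist, edges = np.histogram(u, bins=20, range=(0.0, 2.0), density=True)
print(hist.round(2))                 # each bin ~0.5 = p_S(x/2) * (1/2)
```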
So we're going to use this formula for the rest of our time. Please. [INAUDIBLE] Q is equal to PSX-- PSX. And how does this actually relate to U equals 2S, because are we using U and S as a proctor, and-- Yeah. So the idea here is that we have a probability distribution in one space. Now I want a probability distribution in my new space. So I have from 0 to 2. So I'm going to take the x and I'm going to divide it by 2. And whatever the height is over there, I'm going to get it. So if the height is 1, I get it. The height is 0, then that should be right as well. But when I do that, the problem is the value I'd carry over from the old distribution would be 1 when I do that. And I need to just multiply by some constant. So I'm thinking here about the PDF for that random variable. So basically, U and S here is a [INAUDIBLE] of random variable [INAUDIBLE]. It's a random variable. So a random variable is nothing more than, like, some function on the-- it's just an integral. OK. Awesome question. Cool. So once we have this fact, this problem is super easy. ICA is MLE. Why is that? P of s equals the product, j equals 1 to d, of p S of sj. This is where we use that the sources are independent. We have to assume this in the model. And they have some distribution that's not Gaussian-- anything, but not Gaussian. This equals-- so then p of x is now equal to the product, j goes from 1 to d, of p S of wj times x, times the determinant of W. But now this is something that we can compute. This is written in terms of x and our matrix W. And we can just do gradient descent on it. So how does that work? Here's the key technical bit. We're going to set p sub S proportional to g prime of x, for g of x equal to 1 over 1 plus e to the minus x. There's nothing really magical about this function except that it's not rotationally invariant. And then we can solve: the likelihood of W is the sum, time goes from 1 to n, of the sum, j goes from 1 to d, of log g prime of wj dot x of t, plus log determinant of W. So maybe this looks pretty intimidating, but what happened here? This g is just some likelihood function. I don't actually care what it is. That's the thing that's weird. That's the thing I want you to confront. It's not that it's some specially chosen function. When we would pick the Gaussians, we were picking them because of computational and other reasons. Oh, it only had two moments and we could compute everything we wanted. This is basically saying, honestly, I can pick almost any g I want. Any likelihood function I want on the speakers. As long as the measure isn't rotationally invariant, there will be a unique solution if I look at enough data. That's kind of fun. That's kind of an interesting thing to think about. And you say, "what g do I pick?" Well, I'm going to pick this one because I know it's not rotationally invariant. People pick other ones. How do you pick a measure? Well, you pick the PDF proportional to the derivative of something that looks like a CDF. What does this function look like? It's just the sigmoid function. So I picked the probability distribution. So it's kind of low to high. It's not rotationally invariant on each component. And I'm done. And that's kind of wild, OK? I'm not super worried that you grok absolutely everything here. Again, these lectures are not supposed to walk you through line by line and read the book to you. What they're supposed to do is give you a high level structure for how this algorithm is going to go and what are the key twists.
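Here is a hedged sketch of what ascending that likelihood can look like when p_S is proportional to the sigmoid's derivative (the learning rate, epoch count, and function names are my own choices, not the official homework solution):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica(X, lr=0.01, epochs=20, seed=0):
    """Stochastic gradient ascent on the ICA log-likelihood.

    X is (n, d), rows are the microphone readings x(t). Differentiating
    sum_j log g'(w_j . x) + log|det W| gives the per-sample update below;
    step size and epoch count are illustrative, not tuned.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            grad = np.outer(1.0 - 2.0 * sigmoid(W @ x), x) \
                   + np.linalg.inv(W.T)      # gradient of log|det W|
            W += lr * grad
    return W                                 # estimated un-mixing matrix

# S_rec = X @ ica(X).T then recovers the sources,
# up to permutation and scaling of the speakers.
```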
Now the one thing that's probably scary to you is you're like, "well, I don't know how to compute the gradient of the log of a determinant." And weirdly enough, that actually has a closed form. So that is actually an object that you can compute and run gradient descent on. So you just run gradient descent on W. You had to do some derivatives in the old days. I guess now you don't have to compute derivatives anymore; you have autodiff software, which will kind of do it automatically. Like, PyTorch will do this for you, or JAX or something. But maybe we make you do it by hand. I don't know if we actually do that. But if we do, you can do it. It's not bad. You look it up or you derive it. It's not super hard. But it is weird that it works, OK? And miraculously-- I'm not going to have to convince you of this in class-- you run the thing and you will hear someone saying, "one, two, three, four, five," some TA. And it will work, even though what's at the microphones is a mix. It's pretty wild. All right. Well, good. Oh, by the way, so what is the log of the absolute value of the determinant? The determinant is the product of all the eigenvalues. So if you take their absolute values and take the log, it's going to be the sum of the logs of the absolute values of the eigenvalues. Now that looks a little trace-like. So you can imagine why this is actually relatively easy to compute. But your W is small. You can brute force it for these kinds of things. OK. Awesome. Any questions about this? Please. [INAUDIBLE] Can you just repeat what is the piece of-- Oh, yeah, let's-- [INAUDIBLE] This piece right here? Yeah, so what we're doing is, we have P of the speakers-- each one of them, some probability distribution. I was being vague at this point when I was writing it because I wanted to basically get to the point. We can use any distribution. That's the weird thing. Basically, any distribution so long as it's not rotationally invariant. So I don't know what p S is yet, but just imagine it. Now I'm going to move it from a distribution on the sj's to a distribution on the x's. So the speakers, I can't observe. I don't get to measure them, but I do get to measure the x's. Now I only have one variable that controls everything, that's W. Now I can do gradient descent on W. I didn't know how to do it before, with S and W together as a product, but because there's only one variable now, I can run gradient descent on it. And it happens to be nicely behaved and all the rest of the stuff I want. So you went from one distribution to another using this data, we'll call [INAUDIBLE] using data [INAUDIBLE]. Exactly right. And the thing that's weird here that I want you to confront-- and when you do the assignment-- is, like, if you use the standard distributions, you don't get a unique answer. And weirdly enough, almost any other distribution you use will work. We happened to pick this one, but you could have picked something else. And that's kind of wild. So that's a weird thing. You just have to be able to distinguish it. And the way to think about it is, the prior on the speaker doesn't matter too much. They just can't mix in some awkward way. You can't rotate the speaker intensities and have it make sense. You have to be able to distinguish up versus down. Fantastic. Other questions? So I'm going to spend-- oh, please. [INAUDIBLE] About this G, yeah, like, how exactly do you get-- like, how do you get the G of X?
This one is directly proportional-- Yeah, you don't-- so I can take any function I want and normalize it, as long as it has some variation, is smooth and other things-- as long as it's integrable and positive, I can normalize it over, like, -1 to 1, or minus infinity to infinity, and make it into a probability distribution. This function is positive. I want it to go from low to high so it has that nice feature to it, and I don't care about anything else. But now that's the CDF, because now I have a function the integral of which goes from 0 to 1. And then what I'm going to do is I'm going to take a derivative, and that's what gives me the PDF. There's this probability theory stuff. But yeah. And you're like-- the thing that should bother you and is bothering you is, why did you pick this function? And then I'm telling you, that's disturbing. Go pick other functions, it still works. But there's one function that if we use-- which we've been using the whole quarter-- all of a sudden everything breaks. That's weird. It just means that not every prior matters. Sometimes priors get these undefinability things. And it's coming from this rotational invariance. And that's weird. It's going to bother you, but then you run the code and you're like, "oh, my God. It works." And you can play with it. See how close you can make it to rotationally invariant, if you're motivated. Or you just do the homework and turn it in. Either way. But that's what's interesting. Cool. I'm going to use our last little bit of time together to tell you something kind of different, which is about something called weak supervision. This will be less mathematical and, potentially, have more bizarre pictures. We didn't get any activity on the thread. I want to tell you about one trend, since this is our last lecture together. Tengyu will take over. And we'll see if we get through this. All right. OK. So I want to tell you about weak supervision. And I want to just motivate it for you in the last 20 minutes and why we care about it. And mathematically, the reason we care about it is, it's an example-- the underlying model that we'll show you is an example of the EM-style models, but we can solve it exactly in a lot of situations. These are my slides and I have to give these keynotes-- I had to give a keynote this morning, I have two more this week, at random conferences. They have lots of weird pictures on them; just ignore them. OK? So let me motivate for you why I care about weak supervision, and then I'm going to skip over some stuff, and then we'll try and get to the mathematics of the model. So the reason we started thinking about weak supervision-- and others did too-- and this thing we call data-centric AI, was that machine learning, at least in the supervised setting, has three pieces. It has a model, which you're learning a lot about in this course. It has training data, which we've talked about a whole bunch. And it has hardware. And the thing is, which is weird outside this class, and for a variety of good reasons, models have become commodities. So all of you in this room can go download Google's, Microsoft's, Stanford's latest models; the culture of AI is just that we put everything online. And there's been a ton of investment in that. So if you want a state of the art natural language model, you can download it. Go to Hugging Face. Download it. You want the state of the art vision model, there's a tutorial somewhere that will allow you to do it. And you can get it instantaneously.
And because of the cloud, hardware is not a big problem. People can get access to it. Still an interesting problem, but it's there. But training data is hard. This was the original idea. And the reason training data is hard is it's the way that you encode your problem about the world. It's how you label it. Now when I tell you about supervised machine learning, the first day, we have X and Y pairs. And I never tell you where they come from. You have X and Y pairs. They came from god, herself. They landed there. And you're, like, well, this is where machine learning starts. But that's not at all how anything works in machine learning. It doesn't start there, and it's not like hanging out with Beyonce-- it's like living in a sewer. So if you want to do this stuff in industry, it is much more like hanging out in a dirty, filthy sewer. You have all these data streams, you don't know where the heck they came from. They all have weird errors. They all have weird correlations with one another. And it's a huge mess. So we started to look at this problem a couple of years ago, with many other folks in the field. And we just wanted to put mathematical and systems structure around how to make this less awful. OK? That was the motivation, right? Now I'm not going to bore you with the fact that people actually do this and all the rest. It turns out that this is an interesting area, but I'm just going to show you a toy example. So here's a toy example and a little bit of light math, OK? So here's an example of something we may want to do to generate training data. Let's say you want to do Named Entity Recognition. You want to label persons and hospitals in text. You just want to know: you have some mentions, are these mentions people or hospitals? Saint Francis could be a person, could be the hospital. Bob Jones, probably a person, but I guess could be a hospital too. OK? So here's how these weak supervision frameworks work. They basically allow you to have these little voters that you write, to reuse training signal that you had before. An existing classifier that's already voting person? You just throw it in. Here's another kind of classification rule-- it's uppercase and an existing classifier says person-- you put that in. And then hospital. Well, I have a dictionary of hospital names-- it's called distant supervision-- so this one votes hospital. And the point is, each one of those noisy sources in the sewer is telling you something different. And all you want to do is determine, how likely is answer A versus answer B? So this is how it works in a particular engine. This is the first system that did it. There have been many others since then. Don't worry too much about that. But what it did is it estimated this graphical model. And I'll show you how it works in just one second because it will look very familiar to you. Take in all those sources and you try to estimate: how accurate is every source, and how correlated is every source with the others? And then you make an estimate over all the labels: for every point, how likely is it a person, a hospital, or whatever. OK? That's what you want to do. So we model this as a generative process. There's the graphical model there. You've probably seen something like this. And the instance here is, we want to learn the Y value that's there, the label that's there, without any hand labeled data. So we want to look at the votes and somehow deduce the Y.
There are reasons you want to do this that go beyond this, but we're not going to have any labeled data when we do it. And we want to learn the correlations and the accuracies. OK. Awesome. So it's not at all obvious that you could do this. The reason it works is because you can see these voters on many data points. Millions of data points; every point that comes in, every voter registers "yes", "no", or "indifferent". And you can look at their overlapping judgments and estimate their accuracy and their correlation. If you knew, for example, that source one was always right, you would just count how often every source agreed with source one. And that would tell you the accuracy of source two and source three. OK? But they could be correlated, as we saw. That first function and the second function both called the same classifier. They could be correlated in some way and you'll have to deduce that. So here's the way to solve these underlying problems. And I'm just going to go through this very quickly because we don't have time for all the math. But it basically looks like those covariance matrices that we saw before. The problem here is that we want what's in the red. We want to know how often Y-- which we don't see; let's make it 0 or 1, true or false-- is correlated with each source. We only see how often the sources are correlated with one another. So said another way, we can only observe what's in sigma sub O. We don't get to see how often the sources correlate with the ground truth. OK. Now here's the thing that is pretty interesting. It turns out that the inverse of this matrix has a structure. This is one of the most beautiful mathematical facts that comes from the graphical models course. And I'll just state it here without proof. It turns out that every time this graphical structure is missing an edge, there's a 0 in the inverse covariance matrix. This allows us to write this in terms of the observed values and some rank-one parameters. And these Zi's basically say: if they're one, they have perfect accuracy, and if they're 0, they're total noise. Now why that's interesting is it lets us set up a bunch of these equations that basically say, we know the left hand side is 0-- because let's imagine I know the structure, I know how they're correlated. You can solve for that if you wanted. And it turns out-- this will be kind of weird stuff-- it turns out that actually here, you can complete this as long as you have enough of these linear constraints. Now a couple of things here. First, Z and minus Z are both solutions. So this goes back to those symmetry questions I wanted to highlight. You can't tell if everybody's correlated with the ground truth or the exact opposite. You have to make an assumption. If you had a bunch of malicious labelers who were all telling you the exact wrong thing and correlated with each other, you would get fooled. So you have to make some assumption there. The second thing is, when Zi is equal to 0, that means it's a coin flip. If I added a labeler who flipped a coin, that should give you no information. And indeed, it won't. It will zero out every constraint that it's in, which is a sanity check. You can't make your job easier by adding these labelers. It turns out-- I won't bore you-- there's a great paper from Vershynin from like 2014-2015 that measures the notion of rank in a continuous way; it's called effective rank. And this tells you the right statistical scaling for this problem: the closer those Z's get to 0, when can you recover the matrix? Super fun stuff.
It's a nice geometric piece. So you heard all this stuff. I just wanted to share it with you, it's stuff that I end up working on. You probably think at this point, "there's no way anyone would ever use these crazy covariance matrices." But I just want to share one thing with you, which is wild to me because my students did this. It's actually used in all these applications. It's still in production all these years later. And this is a way that people cope with the fact that they have lots of noisy training data and put them together. So it's in search ads, it's in YouTube, Gmail, and products from Apple. And so this weird framework of how you program training data is just something I wanted to share with you. And the underlying model is like classical from what you've seen, except for instead of solving these EM problems, you solve these weird covariance matrix inverse problems and you can solve them provably. And that turns out to be important for a variety of weird issues of estimating source quality. You don't want to rely on EM, which we talked about doesn't have a unique answer. For this, we can tell you exactly where the unique answer is. Thank you for your time and attention and Tengyu will see you on Wednesday. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Modelbased_RL_Value_function_approximator_I_2022_I_Lecture_20.txt | So I guess today we're going to do the last lecture on reinforcement learning. And I will probably spend like five minutes to briefly wrap up the whole course. But mostly, we're going to talk about reinforcement learning. So this is supposed to be a more kind of introductory lecture on a little more advanced topics in reinforcement learning. So I won't talk about a lot about details, but mostly, I'm going to define some terms so that it's easier for you to kind of either take another course on reinforcement learning or read some of the literatures yourself. So I guess, last time we have introduced the basic concept of MDP, and Markov decision process. That's the main language that people use to think about reinforcement questions. I'm going to start by just reviewing some of the key ideas. So recall that you have an MDP, Markov decision process. It's described by a few important concepts. So one thing is the state space. You have to specify the state space to specify the MDP. You have to specify the action space. And the MDP has this transition probability, which is called PSa for every s and every a is in the-- s is a state. a is the action. For every s and a you have this so-called transition probability matrix, transition probabilities, which is to describe what is the probability that if you start with State s and take action a what is the probability to arrive at a new state? And there is this so-called discount factor, gamma, and a reward function, R. So after specifying these five quantities, you get MDP. And we also talk about a concept of a policy. So a policy is something you are trying to learn from interacting with the system, the environment. You are trying to learn this so-called policy, which is a function that maps from the state and action. So this policy tells you what you do, which action do you take when you see a state S. So pi of S is the action you take when you are at the state s. And we also introduce these two concept, two type of value function. So the first type of value function is the value function of the state S. So this is the value of the state s under the policy pi. So this is the expected reward, expected future payoff of executing the policy pi from state s. So you keep taking action from the policy pi. You start with state s and then you compute what is the total future payoff in expectation. And that is V pi of s. And we also discuss this so-called V star of s. This is oblivious to the policy pi. This is asking what's the maximum possible reward you can get from starting from state s, right? So you maximize over all possible policies, and you look at-- you maximize the V pi of s. And the maximizer of this process, the arg max of this is the optimal policy you care about. You want to find out what is the optimal policy. So I think we probably didn't have time to define this formula last time. So the optimal policy pi star is the so-called the arg max pi V5 S. And this is actually-- there's a unique pi star that maximized the V pi s for every s just because the policy itself is already a function of s, right? So you're finding a policy that maps the function s to the action. You already can take different actions for different state s. And this is one way to define this, and another way to define this is the following. So this is another way to define optimal policies. 
You say that this is the greedy policy with respect to V star. So this is an alternative definition of the optimal policy, which is defined to be-- pi star of s is defined to be the best action you take such that you maximize your future payoff. What's your future payoff? The future payoff is equal to R of s plus the payoff you get from the future steps, which is gamma times this P s, a s prime. This is sum over s prime. This is the probability of s prime after taking action a, which is the variable here. And then you multiply by V star of s prime, right? So this part is the expected reward, the best reward you can get after you take action a. You take action a, you have some chance to arrive at s prime, and you multiply the best payoff you can get after starting from s prime. And so that's why this is the best possible expected payoff after you take action a. And this is the payoff you get at state s. So the total thing is the best payoff, including the current return, the best future payoff if you take action a. And you maximize over a, and that's the best policy. That's the definition of the optimal policy, right, because this is already the optimal choice, you're already thinking about the optimal choice for all the future steps. And then you take the action a that is the optimal for this step, taking into account the future steps. And that's the optimal policy for the state s. Any questions? Sometimes this is the way to find out the optimal policy because if you find out what's the V star, then you can find out the optimal policy because you just take the greedy policy with respect to V star. And we also introduced this important concept of Bellman equation, which is the main tool that we use to find out, to compute V pi and V star. So for V pi, if you are given pi, then the Bellman equation for V pi is equal-- is this. So V pi of s is equal to Rs plus gamma times this, right? So the Bellman equations you can pretty much verify intuitively yourself, right, because what is the reward, the payoff when you are at state s, executing policy pi? You first look at what's the current reward for this step, and then, OK, what's the future reward? The future reward can be computed by considering all different possible outcomes of executing pi of s, right? So if you apply pi of s, then you have some probability to arrive at s prime, and you multiply that probability with the payoff you get after arriving at state s prime. OK. And one of the important things here was that this is actually linear in the variable V pi of s, so linear in the variables V pi 1 up to V pi-- I think I used m as the number of total states. So this is a linear equation of these variables. So you can solve the linear equation by any linear system solver. And we also introduced this Bellman equation for V star, which is of a familiar form. But the difference is that now you don't have the pi. You have to maximize over the action. So you have Rs plus the max. You take the best possible action that maximizes the future reward. Right, so now this is not a linear system of equations in terms of V star, but you can use the so-called-- the algorithm we introduced last time was this iterative algorithm that you do the Bellman update iteratively. You can do this iterative algorithm to find out V star. Any questions so far? This is basically a review of the last lecture. So OK, so so far, in the last lecture we have dealt with known dynamics. So in all of this, so we have described the algorithm, so and so forth.
Everything was under the assumption that the algorithm-- so basically the algorithm to solve the V pi or the algorithm to solve V star, the iterative algorithm. I guess I'm not sure whether you still remember the algorithm. The algorithm here was just something very simple. So you take a loop, and you just say I'm going to update V-- I have a working memory for V. I update V like V of s, update it to be something like R of s plus max. You just compute the right-hand side with the V plugged in here, and then you update the left-hand side with itself. So this is called value iteration. The algorithm is called value iteration. So both the value iteration algorithm and the algorithm that solves the linear system of equations, in this case, both of these two algorithms are assuming you have known dynamics. So P s,a is known. The whole family of P s,a is known, in both of these two algorithms, because you have to compute P s,a, right? So in our language you are saying that this means that you have the known transition dynamics or the known environment. That's how people refer to these kinds of settings. But in reality, what happens is that this P s,a is not known anymore. So for example, sometimes you do know it. For example, suppose you consider this is a game like playing Go, right? You are playing Go or chess. So you do know the rules of the game. You know, if you play an action a, what will happen next, right? So you're going to move each of the pieces in some way. So you know the rules, then in those cases P s,a is known. But in many other cases, the transition dynamics are not known, right, so for example, when you control a robot, right? So in some cases you probably know a little bit-- if you control this, the robot would move forward. But sometimes, if you're doing the low level control, you are changing the joints of the robotic arm or the robotic hand. You don't know exactly how everything moves. You probably have some rough sense, but you are never able to model them exactly. Actually, this is a challenge. So this is actually the reason why now people are using more and more learning techniques. So I think in the early days, I think, for example, there was a company called Boston Dynamics. So what they do is that they basically just use rule-based-- so basically they build this P s, a. They try to figure out from physics what exactly this dynamical system should look like. And then they build their policies based on that dynamical system. But these days, I think people are at least trying to apply more learning techniques because there is no way you can figure out exactly what P s,a is just from the physical rules. You have to use some kind of learning-based technique. Sometimes also it involves interaction with environments. For example, suppose I have a robot that is moving on this carpet. Then the speed would be different from the robot moving on the hardwood floor, right? So the rest of the environment is also part of it-- the floor, those other things are also part of the environment. So you can't ever model everything perfectly-- you would never know the texture of the floor exactly. So that's why we have to learn the dynamics to some extent. So I think, in some sense, that's the real problem in reinforcement learning, where the dynamics are unknown. So when you don't have the dynamics, right, so what can you do? So you have to know something, right? You have to somehow have some information about dynamics.
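Before moving to the unknown-dynamics setting, here is what those two known-dynamics computations look like in code -- value iteration for V star, and the linear-system solve for V pi. A rough numpy sketch; all shapes and numbers are made up for illustration:

import numpy as np

# Tiny known MDP: m states, k actions.
m, k, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(m), size=(m, k))   # P[s, a] is a distribution over s'
R = rng.random(m)                            # reward R(s)

# Value iteration: V(s) <- R(s) + gamma * max_a sum_{s'} P(s,a,s') V(s')
V = np.zeros(m)
for _ in range(500):                         # enough sweeps to converge here
    V = R + gamma * (P @ V).max(axis=1)

# For a fixed policy pi, the Bellman equation is linear:
# V_pi = R + gamma * P_pi V_pi  =>  (I - gamma * P_pi) V_pi = R
pi = (P @ V).argmax(axis=1)                  # greedy policy w.r.t. V
P_pi = P[np.arange(m), pi]                   # m x m transition matrix under pi
V_pi = np.linalg.solve(np.eye(m) - gamma * P_pi, R)
# Since pi is greedy w.r.t. the converged V, V_pi matches V (up to tolerance).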
So the typical assumption, which is that-- so the P s, a this is unknown. But you have the-- but given a state s and the action a, we can sample s prime from this transition dynamics. Basically, we can just try the robot in the real world. We can just say I'm going to try my action a in the real world and see what the next s prime is, right? The s prime will involve a level of stochasticity. There's some randomness, but you're going to observe one random sample from this transition dynamics. So that's why when people say sample complexity, it means how many times you have to try this-- for how many states and actions you have to try to see s prime by just trying it in the real world. So that's how people generally learn. So we learn by interacting with the environment, by trying all these actions. And you somehow learn the dynamics in some way. So that's the basic assumption. And then there are two types of algorithms in reinforcement learning that are the most popular. I think most of the algorithms can be categorized into one of these groups. So one type of algorithm is called model-based RL. So here the model means the dynamical model or the transition dynamics. So as the name somewhat suggests, basically, model-based RL means that you explicitly learn the transition dynamics. How do you learn it? There are multiple variants. It depends on the situation. It depends on how complex the dynamics are. But the transition dynamics are the transition probabilities. I guess they mean exactly the same thing for-- they probably just always mean exactly the same thing. But sometimes people have different terms. So basically, you learn this P s,a explicitly. You build a model to describe this P s,a, and you learn this model from the samples. The samples are the data you learn from, and then you build some approximate P s,a from the samples. And that's the typical shape of a model-based algorithm-- you build it from samples. And there is another type of algorithm, which is called model-free RL. I don't think there is really a precise definition of any of this, but in some sense, the model-free RL, I would just say, is a negation of this, right? So you don't explicitly learn a transition probability. It just doesn't explicitly learn the transition. So it sounds a little bit-- if you don't have any context, this might sound a little bit tricky because how come you don't learn the dynamics but still learn the policy? So it's possible. For example, sometimes you can just directly optimize this without learning the dynamics, right? So you can probably just use some like a-- so there are ways to not learn explicitly the dynamics. Of course, eventually, any algorithm needs to somehow have some understanding about dynamics internally in some sense. But you don't have to explicitly build one, right? So for example, one possible option is just that you optimize this function V pi of s over the policy pi. Suppose you can somehow do gradient descent over the policy pi, then you can avoid dealing with the dynamics where you don't use the Bellman equation. You just somehow compute the derivative of this with respect to pi. So some of the buzzwords are-- some of the algorithms, Q-learning is one type of algorithm. And another type of algorithm is called policy gradients. So I don't think we have time to discuss any of these model-free algorithms. I just want to write them down here, just the buzzwords, so if you happen to come across them, then you know they are model-free algorithms.
In some of the quarters we do cover this. Some of the quarters have one or two more lectures, depending on how many holidays there are. And another definition of terms people use here: there is something called the tabular case, or tabular RL. This means that you have discrete action and state spaces. So in other words, the size of the state space is finite. And the size of the action space is also finite. And actually, in some sense, whether it is finite or not is probably not the most important thing. The real important thing is that the state and the action space are not too big. They are not like a billion or something. They are something that is reasonable. In the last lecture, we were basically assuming this. We assumed that the state space has m entries, right? And we didn't talk that much about the action space, but we implicitly assumed that. Actually, all the algorithms required the action space to be somewhat finite. And you can see, for example, this linear system solver algorithm-- if you want to solve the linear system of equations, what are the variables? The variables are the V pi 1 up to V pi m, where m is the number of states. So if m is infinite, you cannot solve this linear system of equations, where you have infinitely many variables. Even if m is not infinite, even if m is something super big, say, exponentially big or something like a billion, then you cannot really afford the time to solve this set of equations. So in some sense, the most important thing is that-- if you are in the tabular case, you are implicitly assuming that the state space is not huge. But as you can see, sometimes the state space just has to be infinite or very big because the state space is continuous, right? So that's another case where you have continuous state space. Sometimes you have continuous action space as well. So for example, the state space is something like Rd. So you have a d dimensional vector to describe the state. And typically, the action space is smaller than the state space. Maybe the action space is something like Rk, where k is smaller than d. Sometimes d can be like even more than 100. The action space, typically, if it's continuous, then k would probably be small. And sometimes you have the combination, right? So you have continuous state space but finite action space. That's also possible. For example, one typical case where you have continuous state space but finite action space is an Atari game. You play this Atari game, and the state space is this pixel space where you see the pixels that are shown to you. And the actions are actually finite. You just have a few buttons and maybe some kind of handle you can choose to play. So the action space is somewhat finite. All right, so these are just some terms just in case they are useful when you read some of the other books or the literature. OK, so then in the next 20 minutes, I'm going to discuss how do you do the model-- maybe in the next 30 minutes-- how do you do model-based RL for the tabular case? So basically, you can see that there are model-based and model-free, and tabular and continuous, so you have four combinations. And we're going to do the model-based plus tabular. And this is actually not very hard. So what you really want to do here is that basically model-based, tabular-- you want to learn an explicit model that's kind of somewhat similar to the true model P s,a. So what you have is that you have a collection.
So suppose we have a collection of trajectories. I forgot whether I defined trajectories. By trajectories, I just mean a sequence of states and actions. So suppose you say you have some state s0. Maybe let's say we start from state s0, and you take some action, maybe a0, and you arrive at s1. And you take some action a1, and you arrive at s2, so and so forth. So this is one trajectory, a sequence of states and actions, right? So so far I didn't really tell you how I got the trajectory, but I'm going to talk about how do you learn the transition probabilities from some given trajectories. So suppose I give you one trajectory, and often you need more than one trajectory. So then you say, after I take a bunch of steps, maybe I take two steps. I reset. So I reset, and then I get another trajectory. Maybe let's call this trajectory s0 1, a0 1. I add a superscript just to indicate that this is the first trajectory. And then I restart. I get a new initial state, and I call this s0 2, and I apply a0 2, s1 2, a1 2, so on and so forth. And maybe I can get more. I get a bunch of trajectories. Here, the subscript is indexing the time, and the superscript is indexing which trajectory you are in. And then how do you estimate the transition dynamics? Recall that all of these states and actions are discrete variables because I'm in a tabular case. So basically, I just have to estimate P s,a of s prime. What's the chance to start with s and take action a and arrive at s prime? So, in some sense, the problem is the same as some of this-- I think we discussed this generative learning algorithm, where we have this event model, where everything is counting-based. So basically, you can compute a maximum likelihood of this transition. You view this as a parameter, right? So this is something you want to learn. But this is a parameter. And then you try to find out the maximum likelihood estimate for this parameter. And it turns out that it's just-- as usual, it's the most natural choice. Basically, you just count the frequency to see this, right? You say, in the denominator, you say, I look at how many times-- let's say we took action a at state s. So you basically look at all the cases where you take action a at state s. And then in the numerator you count how many times we took action a at-- I guess maybe I'm a little too wordy here. So basically, the number of times you take action a at state s and arrive at s prime. So the denominator is the total number of times you take the action a at state s. And the numerator is how many-- among all of these occurrences of s and a, how many times you indeed arrive at s prime. And then that's your transition probability. That's your estimated transition probability. This is an estimate for P s,a. Right, so any questions so far? And once you have this, it's just a counting-based algorithm. You just count the fraction of times you really arrive at s prime. Among all the occurrences of s,a, what fraction of those really arrive at s prime? And the empirical frequency is your estimate for the transition probability. Am I using the right thing? I should use the black color. OK, and once you have this tool to estimate the transition probability, then you can have a model-based RL algorithm. I'll still use the red if it's OK. The black one doesn't seem to be very good. So the model-based RL algorithm is doing the following. So it's pretty intuitive. So first of all, you initialize some policy pi, maybe randomly, let's say. And then you have some data set.
Initially, you have no data. So the data set d is empty. I'm just defining notation basically. I'm going to use this data set d in some way. And then what you really do is that you say, I'm going to-- I have two steps. The first step is that I'm going to estimate. I'm going to collect some data. So collect data by executing policy pi. So if you execute policy pi in a real environment, you're going to get some samples, right? That's our assumption. Our assumption is that you are able to get samples from the real environment. You don't know the P s,a, but you can get samples from the P s,a. So you actually run policy pi in your environment. So basically, what you do is-- sorry, you execute your policy pi to get some samples. And the samples you got, let's say, are denoted by s0 1; you apply a0 1, and then you get s1 1, and so on. And then you have the next trajectory starting at s0 2, right? This is the same kind of samples as here. So you get some samples, and then you add all of these trajectories to D. So D is kind of like a set of data. And then, I guess I'm using the board space in an awkward way. How do I-- I think I'll just have to erase. So let's see, I'm going to-- so you've got this data, this kind of data, right? So this is a set of data that is like this, OK? So and then you add them to D. So-- this is the first step, you collect some data. And then in the second step you estimate the P s, a using data in D, OK? And let's still use P s, a as the estimator, right? Suppose we get some P s, a that is supposed to be an estimate for the real transition dynamics. And then in step c, you can use, for example, value iteration. It could be also policy iteration. I guess we didn't have time to discuss policy iteration in the last lecture. But if you are interested, you can read the lecture notes to see what's the policy iteration. But let's suppose you use value iteration to get V star, the value function, for the estimated P s, a, the estimated dynamics. So just pretend that the estimated dynamics is the real dynamics, and then you solve for the best policy, the optimal value function, for that dynamics. And then you take pi star to be-- you take pi to be the optimal policy for the estimated dynamics. OK. So are we done? So it sounds like we are done, right, because we estimate some-- everything's simple. You collect some data. You estimate the dynamics, and then you get the best policy for the estimated dynamics. But actually, what you really have to do is you have to have another loop, an outer loop that repeats this process. So what you really need to do is you have to take a loop. And so after you have some policy pi, you want to collect some more data using the current policy pi. Initially, the policy pi was random, and you collect data from the random policy. And after you get some policy pi, then you should take another loop to collect more data. And then re-estimate your transition dynamics, and then recompute your policy. And then you keep doing this. You probably don't have to do a lot of loops, but you have to do some iterations. So the question is, why do you need this outer loop? Why can't we just go with the first dynamics we estimated? If that one dynamics is accurate enough, why can't we go with that? The reason is that in RL there is this problem with this so-called exploration exploitation tradeoff, which I'm going to elaborate. So the immediate problem is the following.
So it's possible that in the first round, when you collect data, your data is not very good. It's very bad, low-quality data, right? So for example, suppose you want to control a robot. And you initialize a policy randomly. You just do some random-- you just push random buttons or control the robot in a random way. Then the data you collect are basically just some kind of-- the robot is just wiggling around a little bit. It doesn't really move much. So the data you collect is actually very, very bad. And then even if you have a lot of data, your data quality is not good enough, just because the robot doesn't really do much. And then your estimated dynamics is also not going to be good enough. And then your policy is also not going to be good enough. So what you really want is you want to do this iteratively so that next round, when your policy is reasonably OK, you collect some higher quality data. And then you do this again, and then your policy becomes even better. And then you get even better, higher quality data. So that's why you have this loop. Another example is the following. So suppose you have-- this is another example. Suppose, for example, let's say, I guess you probably all-- some of you have used this kind of automatic robot to do the cleaning for the house, where you have this small vacuum, a robotic vacuum that can move in your house. So if you have used that, I think how it works is that it first explores your whole room and tries to figure out what your room looks like. And then in the next round it's going to take some trajectories to clean your room so that it covers every part, right, so something like that. However, if you think about this-- suppose you have, for example, something like a big room. And then you have a small room adjacent to it. So you have some robot that tries to clean the room and navigate through the room, right? So what if at the beginning your robot only goes to this part, right? So in the first run, your policy only looks at this room. Then your dynamic model will only be able to-- it's only accurate for this room, right? You only know what's happening in this room. Where are the chairs? Where are the stairs? And so on and so forth, right? But you don't know anything about this one. So that's why you cannot-- so there's a typical situation where the quality of the data is not good enough because your data doesn't even cover some part of the room. So then if you don't do anything special, then your robot wouldn't have incentive to go to this room because the robot doesn't even know the existence of this room in some sense. So this is actually an even more challenging case because here, even if you just do this loop, you wouldn't necessarily be able to figure out the small room, because at the beginning you only see the large room, and you never look at the small room. And then you figure out the optimal policy to clean a large room, and you still don't know the existence of the small room. And you just keep doing this. Eventually, you just clean only the large room, and you just never know the existence of the small room. So in these kinds of cases you need even something more than this kind of algorithm to be able to work. And typically, this is a phenomenon called exploitation versus exploration. So exploitation basically means that you believe your current transition dynamics.
You just strongly believe your current transition dynamics or your current understanding about the world, the environment, right? And you just try to find an optimal policy for the current understanding of the world. And exploration means that you try to explore different strategies to see whether you miss anything in this world. So in this case, maybe you missed the small room. So you want to do exploration to figure out the existence of the small room. And exploitation really just means that you just basically do the best thing for the current map. So as you can see, you need some exploration to at least cover the entire world so you know the existence of any other options. So typically, if you really want this kind of reinforcement learning algorithm to work, you have to add some randomness to the policy pi so that you can have some exploration, right? So you don't want to just always, in every round, collect data from the policy pi that is optimal for the current environment. You also want to have some exploration by adding some randomness. So basically, what you really do is that when you collect data, you actually execute pi with some randomness added. So in this case, you have some small chance to go into the small room so that you can see the small room. And then you figure out that you should actually clean that small room with some kind of trajectory, with some actions. So maybe another example is that, for example, suppose you are figuring out which restaurant you want to go to in downtown Palo Alto, right? So suppose so far you know two restaurants, and you know one of them is better than the other for you. And the exploitation would mean that you just always go to the better restaurant for your taste, right? And then you just keep going to that better restaurant. However, you may also consider some exploration because there are many other restaurants in Palo Alto, and you don't even know they exist. Or you don't know whether they are good or bad. Or you don't know their taste, whether their taste fits you. So exploration means that you should try some of the other restaurants even at the risk that those restaurants are not as good as the one you have known. But you want to try to explore those restaurants. And exploitation means that you just believe that OK, there is nothing I should explore anymore. I believe in my current evaluation of all the restaurants. I just take the best one and keep going to that one. And there is a tradeoff because if you keep doing exploration, then you keep trying all different restaurants, then inevitably you are going to find some restaurants that are not very good, right, and you will suffer. You're going to say, OK, this isn't worth the money. You can have some bad dinners in some sense. But on the other hand, if you only do exploitation, you're going to miss other opportunities. So it really depends on-- and if you really go into the RL literature, there are different ways to trade off these two things depending on how confident you are with each of the other choices. You may decide to exploit more, or you might decide to explore more. OK. But I think we are not going to have enough time to go into the details. There's a huge literature here. Even just when you talk about this restaurant example, which doesn't have a sequential aspect. You just go to the restaurant and have lunch or dinner. There's no sequential decision. You still have to think about the exploitation and the exploration tradeoff.
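As a sketch of the two mechanisms just described -- the counting-based estimate of P s,a from before, and injecting randomness into the data-collecting policy -- here is one common version in Python. Epsilon-greedy is my choice of randomization for illustration; the lecture doesn't commit to a specific scheme, and Q here stands in for whatever current estimate of action values you have:

import numpy as np

# Counting-based MLE of the transition probabilities from (s, a, s') tuples,
# plus epsilon-greedy action selection for exploration. All toy / illustrative.
m, k, eps = 4, 2, 0.1
counts = np.zeros((m, k, m))

def record(s, a, s_next):
    counts[s, a, s_next] += 1

def estimate_P():
    totals = counts.sum(axis=2, keepdims=True)       # times each (s, a) was tried
    # Unvisited (s, a) pairs get a uniform guess to avoid dividing by zero.
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / m)

def epsilon_greedy(s, Q, rng):
    # With prob eps explore a random action, otherwise exploit the current best.
    if rng.random() < eps:
        return int(rng.integers(k))
    return int(np.argmax(Q[s]))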
So to do it optimally, you have to be somewhat careful. OK, any questions so far? So [INAUDIBLE] when we are doing the exploration, so we know that we are getting better understanding about the world we are in or we are just trying to find random. Yeah, so I guess, maybe you're asking about how do we do exploration, right? So what's the guiding principle for doing exploration? And it seems to suggest that one principle could be that you only want to explore when you can collect more information. Yes, I think that basically it's pretty much like what you said. But sometimes random actually can serve that need. So if you just do random perturbation of your current option, typically, you get a good amount of information. But of course, in some other cases, you have to directly go for those in certain places. Actually, you are exactly right. So for the tabular RL case, typically, what people do is not just random exploration. What people do is you say, you take those actions that are most uncertain for you. So suppose you have some action that you don't know what the outcome would be. You have no idea what outcome would be, or you have very little idea. We have huge uncertainty about what the outcome would be. You try those actions more as exploration strategy. But for the continuous state space, so I think it turns out that most of the algorithm works are a local randomized exploration. You don't try some crazy option. You try in your neighborhood. I think that's actually probably make a lot of sense in those cases where you have so many different actions, right? So for example, if you think about your career planning, if you really have-- suppose you-- in theory, you have really, really a lot of actions you can try. Instead of being at Stanford, you can try to be a professional soccer player, right? So you can be a musician. You can be many different-- there could be many things. And you are very uncertain about some of those, probably. So I wouldn't know if I try to be a musician what would happen. But I think we have so many actions, typically-- I wouldn't say-- this is not 100% true, like when you talk about technical details. But typically, we have so many actions, I think somehow the algorithms-- most of the working algorithms tries to explore locally. So for example, I'm a student here at Stanford, and then I try something somewhat similar, maybe going to intern at Google or maybe try in graduate school or something like that. But I wouldn't try something completely different. No, we don't have a really strong theory to say exactly what you should do. So it's a mixture of some theoretical explanation and some empirical observations. OK. So I guess, so I'm going to use the rest of the 40 minutes, or 30, 35 minutes to talk about continuous state space. So I have talked about the model-based RL, right? So now I'm going to talk about model-based plus continuous state space. And the idea is pretty much similar. It's just that you have to-- somehow you have to deal with the continuous state space. You will see there is some challenge. By the way, this reinforcement is interesting area, at least when I-- after I learn it, I feel like it has a lot of things to do with your life decision as well. Of course, you can't only It's not like you should implement the algorithm in your life decisions. But there is some insights theory that is useful for the general-- You can model your life as a reinforcement algorithm. It's just that, one difference is that in real life you have much more information. 
So in a theoretical formulation, you are only collecting information from the samples. So you know nothing about the environment at the beginning, and anything you want to know, you have to try out. If you want to know anything about P s,a, you have to try it out. And sometimes you can try more, and sometimes you can try a bit less. But you still have to try. In life decisions, in many cases, you don't have to try. You can already predict, to some extent, what the outcome is. Other than that, I feel it's pretty similar. Just my two cents. Anyway, so what do you do with continuous state space? So one easy case, just to start with, is when d is 2. Suppose the state space is only dimension 2. I think in this case, basically, your state space is a two-dimensional plane. You have maybe two axes. And one way to do it is you just discretize your state space into discrete variables. So before, each of the states is two real numbers, but then you say I'm going to discretize the state space. There's some boundary, of course, because the state cannot be too big. And then you discretize something like this. And then for every cell, you say, all the states-- and there are an infinite number of states in each of the cells. But you say all the states in that cell are going to be treated as one state just because they are all pretty much similar. So as long as you have a fine enough granularity here, then you can basically treat every cell as a single state. And suppose you have a granularity of epsilon, then you are going to have 1 over epsilon choices for the first coordinate and 1 over epsilon choices for the second one. So you have 1 over epsilon squared choices of states. And you can probably take epsilon to be something like 0.01. I don't know exactly. But you only get a reasonable number of states. Maybe quite a lot, but it wouldn't be too bad. So that's the easy choice. However, this doesn't really work for dimensions much higher than two. I think when the dimension is 3-- because if you think about discretizing a three-dimensional space, suppose d is 3, and you do this with 1 over epsilon per axis. So the size of each of these cells is epsilon, and then you need 1 over epsilon cubed cells. I guess it depends on what epsilon you choose, but if you choose epsilon to be something like 0.01, then you have 100 to the power 3. That's like a million, which is already a lot, right? So maybe sometimes it's still OK, but generally, when d is 3, it's already tricky. And then you can see this doesn't scale very well because if you have d-- for any d, this would be 1 over epsilon to the power d. And when d is 5, basically, it's kind of impossible, completely impossible. So we need another approach. But actually, just to clarify. So when d is 2, actually, this is typically a pretty good idea because it's simple and clean. And we don't have to deal with any other complications. The only thing is that you have 1 over epsilon squared states, but there are no other complications. It's actually a pretty good solution. I think it probably should work in most cases. But when d is more than 3, it's going to be a problem. So basically what we do is we are going to redesign everything that we have discussed with continuous state space in mind. So I guess there are two questions. One question is how to learn P s, a for continuous state space. And then question two is how do you do the value iteration for continuous state space? So let's discuss each of them. And the basic idea is that you try to kind of extend what you have done to the continuous state space in some way.
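In code, the d equals 2 discretization above is just a rounding operation (a sketch; the bounds and epsilon are assumptions for illustration):

import numpy as np

# Discretize a bounded 2-D state space into square cells of side eps.
# Any state inside a cell maps to that cell's single discrete state id.
lo, hi, eps = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 0.01

def cell_index(s):
    n_cells = int(round((hi[0] - lo[0]) / eps))      # 1/eps cells per dimension
    ij = np.clip(((s - lo) / eps).astype(int), 0, n_cells - 1)
    return ij[0] * n_cells + ij[1]                   # flatten to one state id

# (1/eps)^2 = 10,000 discrete states for d = 2 -- manageable.
# The same idea gives (1/eps)^d states in d dimensions, which blows up fast.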
So regarding question one, so how do you do it? The first question probably is that how do you even represent the P s,a, right? So now you have an infinite number of s here. So before, for every s and a you have a vector to represent it, right? So for every s, a, this P s, a is a distribution, and basically it's a vector over m possible choices, if m is the number of states. But now you have an infinite number of s here, or maybe an exponential number of s there. And for every P s, a, this vector is actually a very high-dimensional vector, maybe an exponentially high-dimensional vector or even an infinite-dimensional vector. So how do you even represent this? So the idea is that you can change the way to represent it. You don't represent this-- you can do the following. There are many ways, but this is one way that is probably common for learning the dynamics. So what you do is you first say I'm going to model this process of sampling s prime from P s, a by assuming s prime is equal to f of s, a plus some noise. This is one option, not the only option. But one option is you assume that s prime is computed by applying some function of s and a, and then adding some noise. That's my way to sample s prime from this distribution. So this is a way to define a distribution. The distribution basically has mean f of s, a, plus some Gaussian noise ksi with some variance. So this will give you a random variable s prime, given s and a. And this f of s, a, let's say, is deterministic. This is just some function you want to learn. And this part is the noise that gives you the stochasticity. Maybe let's say ksi is Gaussian with mean 0 and some covariance sigma. I guess you can probably see that this is almost the same as what we do with supervised learning. You just treat this as a supervised learning problem. So s prime is my label. s and a are my inputs. So I'm just trying to predict my label from the inputs. And the way I model this is I model this by some function plus noise, right? And then you can do maximum likelihood. And the maximum likelihood is just the square loss. And you learn this model f by some square loss. So for example, you have to introduce some parameterization for f. So for example, f could be a linear model. Suppose you say you have some parameters A and B, and your f could be just A times s plus B times a, little a. So this could be the f, which is parameterized by A and B. So A and B are the parameters; s and a are the inputs. So you just have this linear model. I think this is called a linear dynamical model. So this is a linear model. And another option is that you say, I'm going to say f s, a is equal to maybe-- you can use the same idea as features, like when you do the kernel method. You can say this is something like A times phi of s plus B times some other feature-- I forgot what I called it-- something like this. Both of those phi's are some features. So this is like what we did with the kernel method, right? So you introduce some features, and then you are linear in the feature space. Of course, we can also say that f s, a-- so I guess here the parameters are A and B as well. You can also say f of s, a, parameterized by some theta, is a neural network applied on s and a, something like this. And this neural network is, say, parameterized by theta. So you just say s and a are concatenated as inputs of a network. And you apply this network with parameter theta. And the output will be f of s, a.
And in each of these case, you can model your s prime like this, and then you can try to find out the-- and this becomes a supervised learning problem. So what I mean is that once we have the parameterization right, so then you can-- the learning loss function is that-- say suppose you have some data. Suppose data are something like-- I guess, I've written this several times-- S01, a01, S1 1, so and so forth. So I have a bunch of trajectories. Sorry, I messed up the index. So you have these trajectories, and then you break these trajectories into three tuples. So you mean that you view this, view them as a collection of three tuples. So you say you have s01, a01 comma s1 1. So this is the first three things here, and you view this as the input and this as the so-called label or output. And then you say, I'm going to have these three things, which is s1 1, a1 1, and s1 2, right? So this, again, is the input, and this is a label. And you do this for every three tuples in every trajectory. So you basically get a sequence of like three tuples. So eventually you get s-- t minus 1 n-- n is the number of trajectories-- on a t minus 1 n I guess the-- a lot of indexing here. But basically, just view every three consecutive numbers as a tuple, and you view the first s an a as to the input. And the outcome as the output, something like this. OK. So you have a data set of this, right? So this is a data set of size n times t. n is the number of trajectories. t is the length of the trajectory. And then you do a supervised learning, so you just use supervised learning. So you do some kind of regression, let's say. You say I'm going to minimize over my parameter. Maybe let's call it theta. And I minimize this over-- I minimize the loss over all data points. So what is the loss? The loss-- so there is i. There is also a t. i is indexing the trajectories, and t is in the indexing the timestamp. And every time you want to predict what you want to predict S t plus 1 in the ith tracjectory using this function f theta s t i a ti. So this is the input of this supervised learning problem. You apply the model, and then you try to match it with the output, a label of this problem. And here I'm using the L2 norm because the label is a vector. So in many of the cases you see this parentheses square because, in those cases, when you do the typical supervised learning problem, your label is a real number, right, house price prediction, right? So the label is the house price, which is a real number. But here the label is a vector. It's the same. You just do the L2 norm squared. Sometimes you can change the loss function. You don't have to always use L2 norm squared. Maybe you don't even need to square sometimes. You can use other loss, L1, something like that. And once you do this, you get a theta. And this f theta will be our model. Any questions? So I think the benefit here is that before you have to specify P s, a for every s and a, right, as a vector, right? So now, you learn this function f theta. So the number of parameters is theta, and then for every s and a you can compute f s, a if you know the theta. So that's how you do the model-based RL part. Sorry, that's how you do the model estimation part, how you estimate the dynamics. And then you also need to deal with the value iteration. How do you do the value iteration for continuous state space? I don't think I have the math there. So when you do the value iteration, before you are trying to update the value function for every state for every time, right? 
So now that's not possible anymore because you have so many states. You cannot update the value function for every state. Actually, it's not even clear how do you even describe a value function because before the way you describe a value function is you say, for every V star s, I have a number, right? But before you describe it by listing all the s. For every s you have a number. That's how you describe V star. But now you cannot really describe it like that. So what you do is you say I'm going to parameterize the value function by, again, kind of like before, by some kind of network or linear model, right? So you can say I parameterized-- I write my V s as on something like maybe one choice is you say this is theta transposed have some feature of s. So this is a feature, and this is the parameter. That's my option. And another option is you say my V of s is a neural network with parameterized theta applied on a state s. So then theta is a description of the value function. And you need to learn theta. Of course, there are other ways to do this. For example, one way to do it is that you can, for example, design the right features using physical intuitions. Sometimes you know that some coordinates it's only meaningful when it combines with other coordinates in some meaningful way. So I think some mixture of this could also work because you could design some features and then use these features as inputs to a neural network. But generally, you just want to have parameterized form. So we have done-- so after we have the representation of the value function, the next question is, how do you do the update, right? So before what we did was that-- so recall the update was something like V s is able to be something like Rs plus gamma max a s prime. That's what we did. So we have some working value for V, and we compute the right-hand side of the Bellman equation. And we update V with the right-hand side, and then we repeat. But you cannot do this because you don't have-- before we do it for every s. But that's not possible anymore. So what we do is we try to make this is true for the s we have seen. So basically, for continuous state space, so for the continuous case we just try to ensure a Bellman equation for states that we have seen. It only ensure this for every state s anymore. You just ensure it for the states you have seen. And if you just want to do that, then you can do some kind of loss function to ensure that. So let me elaborate on what do I mean here. So the first step is the following. So you estimate the right-hand side of the Bellman equation for states say s1 up to sn. So I haven't told you exactly how I got these states, right? So suppose I got some states that I have seen in the algorithm. And I want to only ensure this Bellman equation-- ensure this to be equal to this, or somewhat encourage they are the same only for this choice of states, s is equal to 1 of this. That's my compromise because I cannot do it for every state. So what I do is that I try to first compute the right-hand side. So how do I get the right-hand side for every state S? So what I do is I just-- so basically I want to compute R s i plus this max thing. But the problem is that here I have a sum over s prime again. That's, again, a lot of different choices for s. So what I do is I'm going to first turn this into an expectation. We write this as expectation of V s prime. And s prime sampled from P si a. So after you have the expectation, you can use an empirical sample to estimate this quantity. 
So basically, what you do is you say, so for every-- by the way, I think I forgot to mention one thing, which is that now I'm talking about continuous state space but finite action space. This is a slightly simpler setting than when both of them are continuous, just because I don't want to overcomplicate things too much. So it's continuous state space and finite action space. So what I do is that I first estimate this by sampling some s prime. And then I take the max, because you have a finite number of actions-- you can just take the max. So what you do is you say for every a you get k samples. Let's call them s1 prime up to sk prime. These are sampled from the transition dynamics P of s i, a, right? So at the state s i, you try the action a, and you see what is the outcome. You have a bunch of random outcomes. And then you define Q of a to be R of s i plus 1 over k times this average. You use the empirical average, basically-- the empirical average of V of s j prime. So this is supposed to estimate-- I think I'm missing a gamma here, sorry. It's supposed to estimate R of s i plus gamma times the expectation of V of s prime. So I'm supposed to use this to estimate this without the max. And then I take the max. I call y i the max of this Q of a. So then this is supposed to estimate this one. So y i is supposed to estimate this quantity. And for every i I have an estimate, right? For every i I have an estimate for the right-hand side of this Bellman equation. And then I ensure that this Bellman equation is somewhat correct for all the i's that I have considered. So step b: I need to enforce V of s i to be close to y i, because-- recall we have spent so much effort to compute this y i-- y i is supposed to be the right-hand side of the Bellman equation evaluated at s i. And I want this to be close to the left-hand side. And how do I do that? I do that by, say, computing theta to be the arg min of this loss function that encourages this. And the loss function is simply just some L2 loss. Say I take the average over all the possible s i's, and I say my V of s i is parameterized by, say, some network, like this. So let's only talk about neural networks, but it works for anything. So say this is a neural network with theta applied on s i, minus y i, squared. So basically you want this neural network to output the right value function that matches the target. The target is the right-hand side of the Bellman equation. Actually, in the RL papers, they do call this the target. So this is the target you want to match. And you want to choose the theta such that your left-hand side, which is this neural network applied on s i, matches the target. And that's your theta that you got in this step. And again, as with value iteration, you have to iterate, because if you get theta this way, you have to recompute your right-hand side and then update the left-hand side-- that's what we call doing the value iteration. You compute the right-hand side and you update the left-hand side. You do this iteratively. So eventually, you also have to have an iteration-- you iterate between these two steps. So you have a loop. Did I specify everything? Yes, I think I specified it-- OK, so this is the loop. I should have saved some space before. So let's say this is step a, and I have a loop here. But this is still not enough because this loop is doing what? This loop is only doing the value iteration for this s1 up to sn.
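Pulling steps a and b together, here is one sweep of that inner fitted value-iteration loop as a rough sketch. My simplifications: a linear value function theta transpose phi of s instead of a neural network, and hypothetical helpers sample_next, phi, and R standing in for the learned dynamics model, the features, and the reward:

import numpy as np

gamma, K = 0.95, 10   # discount and number of sampled next states per (s, a)

def fitted_vi_sweep(states, actions, theta, phi, R, sample_next, rng):
    targets = []
    for s in states:
        q = []
        for a in actions:
            # Step a: empirical estimate of E[V(s')] with K sampled next states
            samples = [sample_next(s, a, rng) for _ in range(K)]
            q.append(R(s) + gamma * np.mean([phi(sp) @ theta for sp in samples]))
        targets.append(max(q))                 # y_i = RHS of the Bellman equation
    # Step b: least squares so that V(s_i) = phi(s_i) . theta matches each y_i
    Phi = np.stack([phi(s) for s in states])
    theta_new, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)
    return theta_new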
I haven't even told you how you get s1 up to sn. It's just a bunch of states that you have seen. And also, you can't update-- you can't do this for every s. And value iteration alone is not enough because our model-based algorithm has something even outside of the value iteration. So you're estimating the model, and you do value iteration. And then you estimate the model again, and you do value iteration. So you have to do another outer loop to update your samples, right? Because when you do the model-based algorithm, you already have a loop inside, the value iteration. So this is the loop for the value iteration, and then we have another loop outside it, which is supposed to iteratively update the model. So basically, you have another loop, and in this loop you collect some data, say s1 up to sn. We collect this data from the current policy. You collect the data, and then you do the value iteration on this data. And then you collect it again, and you do value iteration. So this is how it mirrors the model-based RL algorithm for the tabular case, because for the tabular case we also have this outer loop where you collect data. You collect data and do value iteration, then collect data and do value iteration again. So OK, sorry, I think I'm missing one more step. In this outer loop you also have to estimate the dynamics-- collect data, my bad. So: collect data, that's 1; and 2, you estimate the dynamics; and then 3, you do the value iteration. And the value iteration consists of another loop, which is this. Any questions? I see some confused faces. So basically, I think I erased that. Oh, actually, it was partly here. So this was the part where we dealt with the tabular case, right? So we have the two steps. I think maybe I should rename them steps 1, 2, and 3. And there was one step that was erased, which is the first step, which is to collect data, right? So before, when you do the tabular case, you alternate between the three steps: collect data, estimate the dynamics, value iteration. And now, for the continuous state space, number one is still the same. Number two was done by this part of the board, where we learn this with supervised learning. And number three is done by here, this part of the board, where we do the a and b steps to do the value iteration. So number three is this part, where we do the value iteration. And eventually, we have to do a loop over all of this. I guess that's all for today. And for this whole course I guess we have covered supervised learning, unsupervised learning, and reinforcement learning. Yeah, and we try to cover more and more deep learning these days. But also, some of this math is included mostly because I think these are the foundations of machine learning. So I hope you had some fun with the course. Thanks. [APPLAUSE] |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Factor_AnalysisPCA_I_2022_I_Lecture_14.txt | Thanks. Yeah. I want to go through factor analysis and continue our tour of EM, because it puts us in this position where we are going to have to make some pretty serious modeling assumptions to make progress. And it's kind of going to force us to walk through what that looks like in an unsupervised way. So that's what the factor analysis piece is. We will then cover a little bit of PCA, which is an old standby. And it's good to contrast these two kind of methods together. It's kind of the old, more, less Bayesian, less probabilistic standby that people use a lot in unsupervised learning. Just to give you a sense of where we are in the course, up next for us will be this problem called ICA, which will be great. That's going to be in your homework. It's always one of people's favorite homework problems. This is the Cocktail Party Problem, which we'll go through. And if we have time-- I'm not sure we will. But if we have time on our current trajectory, we'll go through something called weak supervision, which is a setup that looks just like EM, this Expectation Maximization setting, except for we can solve the underlying problem exactly. So some folks were asking questions last time, hey, why don't I run method x, y, or z on this? And there are these latent problems where, actually, you can cut to the chase and exactly solve to the right answer. You don't have to run this back-and-forth style algorithm. And so we'll see that if we have time and give you a sense of why that's interesting. This is more modern stuff that we'll talk about. All right? Now, on to factor analysis, this is what we're going to talk about. And this is the setting where we have more dimensions than we have data points. Now, just by way of history, when we used to give these lectures about factor analysis, it seemed like kind of an odd situation. Why would you have so many more dimensions that you cared about in your model than you have data points? But actually, weirdly, if anything in the last couple of years, modern machine learning has switched to that being almost the default. We tend to train much, much larger models, as you saw, than we have available data. And there's a couple of reasons for that. They're not the reasons that are in factor analysis, just to be clear. But this setting is actually a pretty interesting one. And so we'll see how people addressed it initially. Awesome. So let's talk about factor analysis, factor analysis. All right, so we have many fewer points, data points, than dimensions. That's the setting that we're in, OK? And terse notation, this is n is much less than d. OK? So it's worth comparing this, by the way, with GMMs. In GMMs, if you remember, we had only a couple of parameters, right? We had the source centers. We had the source variances. And we had the fraction that were in each source. But we assumed implicitly-- we had a ton of photons, if you remember, from all of the different things that were going on, from all the different sources. And so n was much, much greater than d. OK? And that was implicit in what we were doing, but it's worth calling out. And the reason is we would average over some sources, and we didn't have to worry about a couple of problems, which we'll highlight in this lecture. OK? All right, so let me give you an example of how this happens because at first, you may look at that and say, well, is that really realistic? 
And even though I tell you it's going to become the default setting, it's good to have something concretely in mind. So what's one way that this happens? So one example, which is actually Stanford-based, is that we could place sensors all over campus, OK? And let's say those sensors read tons of information-- temperatures, wind speed, all kinds of things. And so they record at thousands of locations: locations and values, OK? So just a huge amount of information-- so d is in the thousands or tens of thousands. OK? So these are like little smart-dust sensors everywhere. OK? But they only record for 30 days. So n here is 30 samples. Now, the sample we get, if you like, is a measurement across the entirety of campus. So it's like we're getting a matrix that's in the opposite direction from what we're used to. It's skinny, but it's not tall and skinny. It's n rows, and then the rows are really, really big. OK? All right. So I would just point out, maybe it's not obvious why. It will be obvious, hopefully, in a second that we would want to fit a density to this. But it kind of seems hopeless. And the reason it seems hopeless is every model that we've had had a parameter for every dimension. And so now there's, intuitively, a huge number of models that are out there that will all satisfy this data. If we were to run something like least squares, if you only gave me 5 examples, and you had 1,000 dimensions, there are tons of models that line up on those 5 points, right, that touch that rank-5 subspace. But all the rest of the dimensions are free. So how do we pick from them? And if we try to do a probabilistic thing, it gets even worse. OK? So let's see the key idea. And if it's not obvious to you that it's a problem, we'll show you something very concrete in a moment that it's a problem. So this is mainly intuition right now. This is not mathematically grounded. We'll show you the math in a second, and what will happen is you'll see our equations will just break in some fundamental way. And so our key idea to get out of this is that we're going to assume there is some latent structure-- because that's the tool we're using right now-- a latent random variable that is not too complex. And not too complex means that I can estimate it. That's really what it means operationally; and it captures most of the behavior, the interesting behavior, right? And this is the tired maxim in this course: all models are wrong; some models are useful. Right? It's not going to be a perfect model of what's going on, but it captures most of the interesting behavior in this relatively compact way. And so we'll see, at the end of this little section, a concrete generative model that builds with these building blocks and allows us to recover those parameters. That's the key idea. OK, so let's see the first problem with why GMMs would have challenges here and see the first example. And actually, we're going to look at something even simpler than GMMs. We're going to look at fitting a single Gaussian. OK? So let's try and fit a single Gaussian in this situation, so I can make more concrete my assertion from before that it's hard to fit models here. So let's fit one Gaussian. OK? So the Gaussian has two parameters. Recall there's mu and sigma squared. So how would we be tempted to compute them? So first, we want to compute the mean of our data. Well, that actually seems to make sense.
We compute the sum here-- sum over all of our data, 1 to n. So just to make sure, let me write the data, because I should have written it out. We have data points x1 through xn, each an element of Rd-- just to make sure those types are in your head. OK? This is OK. This works fine. We compute the mean. That seems fine. It doesn't matter how high the dimensions are, right? You can compute the mean, and it's a sensible thing to do. Right? It may not be a great estimate of the center, because you may not have all combinations of the directions. But it's still an OK enough estimate. The trouble comes when we look at the covariance. And I'm going to write here the covariance in the full general form, because we need to talk about the dimension. Because the dimension is high, I can't just use the one-dimensional sigma. And if you remember, you have some expression which roughly looks like this: 1 over n times the sum over i of xi minus mu, times xi minus mu transpose. OK, so that's a fair enough quantity. You can form that quantity. But one thing should cause us a little bit of pause. What's the rank of sigma? Well, generally, we've been assuming, when we did linear regression and everything else, that the rank was at least d, the number of dimensions. It was a full-rank object, right? But the problem here is that d is much bigger than n. And I've just shown you that sigma can be written as a sum of n rank-1 matrices. And if you remember your linear algebra, that means that the rank here is less than or equal to-- this is always true-- the minimum of n and d, which, in this case, is going to be strictly less than d, because n is smaller than d. That's what we're assuming. OK? So this isn't full rank. So where does that cause us a problem? Well, we have to go back and write our favorite function, the Gaussian likelihood, and see where this causes a problem. Likelihood-- OK. So remember, our favorite thing looked like this: p of x is equal to 1 over (2 pi) to the d over 2, times the determinant of sigma to the 1/2, times exp of minus 1/2, x minus mu transpose, sigma inverse, x minus mu. Let's get through this, get to the bottom of it, and so on. OK, awesome. Why is this formula problematic if sigma doesn't have full rank? Well, there are two parts that kind of don't make sense, right? One of them is this: if it's not full rank, the inverse doesn't make sense. It's not even defined. And second, there's the determinant. What's the determinant? Zero. Now you're toast, because that's 1 over 0. This is now infinity, or something like it, or undefined. No matter what, it's going to cause you a problem if you start to normalize by something like this. OK? So how are we going to fix this? Right? So this, by the way, just to make sure that's clear, this determinant is equal to 0. Now, we're going to fix these issues-- just one second-- we're going to fix these issues by changing the model. And that model is going to come out. And the thing that I want you to think about is we're going to try and make that covariance full rank. OK? And we're going to see various assumptions that we can place on the underlying noise model-- this will make sense in a minute-- that allow us to insist that sigma, in fact, is full rank. Yeah, you had a question. Yeah, I just [INAUDIBLE]? Oh, because the determinant is the product of the eigenvalues, and at least some of them are 0. Cool. Awesome. OK, so the way we're going to fix this is we're going to examine these simple models. Now, I'm going to put these models pedagogically. And why I'm doing it in this order is the final model is going to take these building blocks and put them together. OK?
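As a quick illustration of why the likelihood breaks, here is a small NumPy sketch, under an assumed setup of n = 5 points in d = 100 dimensions, showing that the sample covariance cannot be full rank:

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 100                       # far fewer points than dimensions
X = rng.normal(size=(n, d))

mu = X.mean(axis=0)                 # the mean is still fine to compute
Xc = X - mu
Sigma = (Xc.T @ Xc) / n             # d x d sample covariance: a sum of n rank-1 terms

print(np.linalg.matrix_rank(Sigma))     # at most n - 1, far below d
sign, logdet = np.linalg.slogdet(Sigma)
print(sign, logdet)                     # log-determinant is -inf (or a hugely negative number)
# np.linalg.inv(Sigma) is meaningless here: Sigma is singular.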
So if you were really into the suspense of what's going on with our sigmas here, I've ruined it for you. That's what's going to happen in about 20 minutes. But what I'm going to show you are different ways that we can get around this challenge, so that we can estimate the covariance and get a Gaussian likelihood for that setting. And they're all going to boil down to ways that we make sure that our covariance is parameterized by something fewer than all of the roughly d-squared parameters. The covariance has to be positive semi-definite. But it still has a lot of parameters. So it's not all full. Not every matrix is a covariance matrix, but a lot of them are. All right, so let's go through it. Now, let me just recall the MLE for Gaussians, because it will make our life a little bit easier here. The reason I'm recalling this is I want to store some facts, or remind you of some facts, that will be useful. These are just helpful things to know about computing these EM models again and again. So hopefully, this is useful practice even if you're not computing these. So remember, the way we fit these things to data looks like this. We sum, over all of the data, the log of the likelihood-- and there's a negative 1/2 inside the exp, but that doesn't change the story. Maximizing this is equivalent to minimizing the negative log, which we do everywhere and expand out, which is equivalent to minimizing this-- and you've seen this before; I just want to remind you where it comes from: the sum from i equals 1 to n of xi minus mu transpose, sigma inverse, xi minus mu, plus the log-determinant term. This is why I consistently drop the 1/2, because it doesn't matter. But it is distracting. OK? All right. All right, now, one key fact, which we will use again and again, and I'm just going to dispense with once, is this: if sigma is full rank-- the covariance is full rank-- then the gradient of this function here, which I'll call f of mu, is equal to the sum from i equals 1 to n of sigma inverse times xi minus mu. Setting it to zero implies mu equals 1 over n times the sum from i equals 1 to n of xi. So I apologize if this is too pedantic. If not, what I'm saying here is, this is a plug-in. If we know, no matter what, that sigma is full rank, then I can use this estimate for mu. OK? So I was saying before that the average was no problem. So long as sigma is of the right form, I can always do this. And remember why. If this is full rank, I can pull it out. This is the same argument we went through last time. So ask me a question if it's not clear, because you have seen this before. And so then we've solved for mu. We can just use that as a plug-in for everything we do later. Sure. Why can we only pull it out if it's full rank? So if it's not full rank, let's imagine that it had some null space. Then this x minus mu could be satisfied by either being identically 0, which would be in the null space no matter what, or anything that's co-linear with the null space. So if the null space is a particular direction, then it doesn't define mu: anything along that direction will be set to 0 by this, right? So it means the mean wouldn't be the unique minimizer of it. Cool. Yeah, it'd be 0 plus nothing. Go ahead. Oh, so it affects the [INAUDIBLE]. That's exactly right, and the summation. It was just so I didn't have to copy it again. Yeah, this is just exactly this. And yeah, this is the exact calculation we've seen a couple of times.
[INAUDIBLE] complication on the first line that we have done out here, right? [INAUDIBLE] Yeah, so this is basically saying these two are equivalent, right? We saw this before: if we wanted to maximize likelihood, it was equivalent to minimizing this loss functional. I want to use that because this is a lot simpler, and it doesn't have all kinds of extraneous terms and things that I need to deal with. And so I want to get to the point where I can just operate on this, so that I can give you some mathematical facts when I draw the pictures. That's why I'm doing it. Yeah, you could do everything in terms of the original equation. There's no harm or foul for doing that. And yeah, you may prefer to do it that way. Cool. And we're using the fact that the constants don't matter, and all the usual stuff, when we go to minimizing. OK, so let's see our first building block. There's a picture coming. I don't know why I think this will not pique your interest, but I'm still going to tell you. There's a picture coming that I think is very fun to draw, that uses all the building blocks together. And I'll highlight for you when we get there. It's a real privilege to draw this ridiculous picture for you. And it's going to be really disappointing after this setup. So I'm very excited for you. That, like, maximizes my enjoyment, OK? So here, I want to think about the first building block that we're going to draw. This is not a fun picture to draw, but it's OK. So suppose we assume that every direction has independent and identical correlation, or covariance. So if I think about the noise ball, what does it look like? I'll draw 2d here. Oops, come on. Snap to. So what this means is that our model of Gaussian noise, if you think about it, will be sigma is equal to sigma squared times I, OK? And the only free parameter is this sigma squared, which is a scalar. OK? So how many free parameters are there? Precisely one, right? The structure is, the covariance is sigma squared times I. So what does that actually look like? It looks like I have a point which can be centered anywhere. The Gaussians can be centered anywhere. They can have arbitrary means. But their covariance structure looks like circles. It's one circle, second circle. They should be concentric-- the limitations of my Notability skills. OK? There could be another one down here. OK, they all have the same. OK? So you may look at this and say-- so these are the pictures. These covariances are circles. But I was just trying to emphasize what this restriction means. And so that circle is just parameterized by two things: the center, the mu, and then its radius. OK? And that's what sigma squared is. OK? So in this situation, what is the MLE? What is the estimate? Well, we know mu, because this is full rank. Why is this full rank? Because it's a scalar times the identity. As long as sigma squared is bigger than 0, this is a full-rank matrix, right? The rank is exactly d. So what is the MLE? Well, we're minimizing now over one free parameter. And it looks something like this: the sum from 1 to n of xi minus mu transpose, xi minus mu-- except, when we take the sigma inverse here, it's all times a scalar. So I'll pull the scalar out in front, and it will be 1 over sigma squared, OK, plus d log sigma squared. Right? The determinant of this thing is sigma to the 2d, and the log of that is d times log sigma squared. OK?
The determinant is the product of the eigenvalues. All the eigenvalues are sigma squared. How many eigenvalues are there? d. So I'm just saying, in an unnecessarily complicated way, that this is true. Then I'm just taking the log and writing it in that form. OK? So just for notational purposes, let z equal sigma squared. What does this actually look like? Well, this thing here is a constant-- it does not depend on sigma squared in any way-- so I'm just going to call it c. We're minimizing, over z, c over z plus d log z. So just take the derivative. What do I get? Minus c over z squared plus d over z equals 0. And if I've done my job correctly, this is going to be z is equal to c over nd, i.e., sigma squared equals c over nd. OK. Just to unpack it so you understand, so you don't get lost in the weird notational pieces, this is basically what I'm saying. This is the estimate: sigma squared equals 1 over nd times the sum over i of the norm of xi minus mu, squared. OK? Hopefully, this makes sense. What is it saying? It's saying, average over all of the different variances that are there. Treat them as if there are, basically, nd of these things that are estimates-- you're seeing sigma nd times. Average over them, and that's what you should get. So another way to describe the rule here is: subtract the mean, square all the entries, and average, roughly. OK? OK, so what do you know at this point? Oh, please. Yeah, it's just a quick question. On the second line, where we write the min, aren't we missing a sum over i to [INAUDIBLE]? Oh, sorry. So yes, these are part of the two things. So when this comes out, this should have an nd here. Sorry. Yeah, my simplification on the fly to simplify my notes did not make sense. That's a great catch. This thing is going to be c, and this sum is inside each of the n's. And so this should be nd. Sorry about that-- wonderful catch. Awesome. Please. What is the [INAUDIBLE] of the sigma squared and the previous sigma? Yeah, so the idea here is we have some direction. And so one intuitive way to think about this is, there's some radius that makes sense. And you know how to fit a radius in any dimension, right? So you know exactly how to do this if it were one-dimensional. And what this is basically saying is, I collapse all the variance in all directions. And it's almost like I just lay them out in a line and average across all of them. And what I'm saying is, that's not too surprising. This math is the correct way to understand what should happen. That intuition is not going to help you too much, but it at least will let you type-check that it makes sense and it has the right dimension. And what's great there, as was pointed out, is this nd is just averaging over all the entries, right? And the n has to come from averaging over all the entries. Is this still a matrix? Right-- so sigma squared is a scalar. So it's just going to be a single number. But then we're going to multiply it times the identity. So if you picture it, it's just going to be that number repeated on the diagonal. Right, correct. Yeah, please. This is a hypothetical situation where the covariance [INAUDIBLE] actually a scalar times an identity matrix. Right. So yeah, any model is always going to be a hypothetical situation. We're always talking about, what are we modeling here? And so what we're modeling here is, let's imagine a situation where this would be appropriate.
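Here is a minimal sketch of that isotropic estimator, assuming synthetic data with a known variance so we can check the answer:

import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 1000
X = rng.normal(loc=3.0, scale=2.0, size=(n, d))   # true sigma^2 = 4 in every direction

mu = X.mean(axis=0)
sigma2 = ((X - mu) ** 2).sum() / (n * d)          # the c / (nd) estimate from the board
print(sigma2)                                      # close to 4
Sigma = sigma2 * np.eye(d)                         # full rank whenever sigma2 > 0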
It'd clearly be perfect in the case where we had noise that we knew was actually the same magnitude in absolutely every direction and didn't depend on the underlying data. We'll talk about procedures that try and get us close to that in a later part of the lecture. But ignore that for one minute. That will probably never be true empirically, for a variety of reasons. But this is a pretty good approximation. If they don't fluctuate too much, do you want to pay the expressive power of having to model all the different fluctuations in all the different possible ways, all the different ellipses that you could possibly have? And this is saying, no-- if they have an even distribution, then this is an OK model. Cool? Let's see our second building block. There are not lots of building blocks. There are two. All right, building block 2. So sigma is going to look like this. It's going to be diagonal, with sigma 1 squared down to sigma d squared on the diagonal. OK? Now, what does this look like in my head? These are axis-aligned ellipses. What do I mean? Well, they may differ like this-- oops. That's pretty good. All right, so this noise parameter is still parameterized by a mean, which we know how to set. This is clearly full rank. Why is this clearly full rank? Well, because all the numbers are greater than 0. Its eigenvalues are all positive, so it's full rank. It's also, by the way, clearly positive definite, because all those numbers are positive. OK, so it is a covariance matrix. And this is what it looks like. It's basically saying that the ellipses, the errors, are not correlated across multiple dimensions. They're basically independent in each of the dimensions. And so the error profile is going to look like a flat ellipse. All right? All right, so this is basically saying, I have dimensions that differ wildly, but I don't care about how they interact, right? That's roughly the model that you have in your head, right? There's a lot more variance in component 1 than component 2. That's worth modeling to me, but not at all how they interact. All right, so we're going to set zj equal to sigma j squared, as above, the same thing we did above. We're going to minimize now over z1 to zd. These are all scalar values, all positive, too. Doesn't matter too much. The objective is the sum over i from 1 to n of the sum over j from 1 to d of xij minus mu j, squared, over zj-- this is now a scalar-- plus log zj. OK? And the parentheses are inside this thing. I just write it the way it should associate. OK? See if I made any other bugs. Probably. We'll get there. All right, so far, so good. What does this look like? Well, the reason I wrote it out like this-- so first, be clear about what we did here. This broke out per dimension. So really, this is just d independent problems. Right? So what should we expect? Well, we can go ahead and compute this thing. But it's going to look like a problem we've solved many times before. For each j, we minimize the sum over i of xij minus mu j, squared, over zj, plus log zj, which implies sigma j squared equals 1 over n times the sum from i equals 1 to n of xij minus mu j, squared. There were a bunch of typos on the board there, so let me say it again so that it's clear: sigma j squared is the average, over the data, of xij minus mu j, squared. OK, so what's going on here? The thing that's going on here is we have d independent problems. That means we have d one-dimensional problems.
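And a matching sketch for the diagonal building block: the MLE is just the one-dimensional variance estimate applied to each coordinate separately (the per-dimension scales below are made up for illustration):

import numpy as np

rng = np.random.default_rng(2)
n, d = 30, 1000
true_sd = rng.uniform(0.5, 3.0, size=d)     # a different spread in every dimension
X = rng.normal(size=(n, d)) * true_sd

mu = X.mean(axis=0)
sigma2 = ((X - mu) ** 2).mean(axis=0)       # length-d vector: one variance per dimension
Sigma = np.diag(sigma2)                      # diagonal and full rank: every entry positive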
And this is exactly the estimate that we had for one-dimensional covariance. There's really nothing else going on. There are a couple of bits here where we're using the jth mean and the jth component. It's like every component was handled independently. OK? All right, we are ready to talk about our factor model. OK? So what is the purpose of me going through all of this? One is, I want you to be pretty comfortable with going back and forth between, hey, what is my MLE, what is the structure I put in, and how does that change my estimate? OK? That actually is pretty interesting. You put in different structure. It reduces the number of free parameters. And that allows you to estimate things with far less data. We're going to combine that now in a pretty interesting way, which is what the factor model actually is-- our factor model. OK? And the other reason to give this is, it's an example of a more sophisticated generative model than we've typically been used to. Yeah? Yes, can you go back to the previous page? Sure. You got it. Could you explain the graphical interpretation of this character here? Sure. So in general, a covariance matrix is an ellipse. You can picture that ellipse, where the different directions correspond to the eigenvalues, right? And so what's going on here is that, because we are putting everything on the diagonal, the stretch is only along the individual axes. So you have something that's axis-aligned. The major axes have to be aligned. So when would that be a reasonable model? It'd be a reasonable model if you thought there are different amounts of spread-- different amounts of variance-- for every dimension, right? The previous model said all the variance in any direction was, effectively, the same. There was one parameter. So that's why it was a circle. Now it says, there are different amounts in different directions, but I'm not going to model their interaction. In contrast, a general covariance matrix can tilt and say, no, the principal direction-- which we'll come to when we talk about PCA-- is in one direction. And I'm a little bit setting up for those calculations later; that's why I draw these. So this says, I don't care about interactions between the features, very roughly speaking. OK, but is it basically saying that there's [INAUDIBLE] in each direction, or they're all the same? No, no, they're different. There's a different parameter for every dimension. So there are d potentially different numbers. That's why it's an ellipse rather than a circle. If they were all the same, it would be the circle. Cool. Awesome. Fantastic. So let's look at our parameters. There are going to be a lot of them. But it gives us a more interesting generative model to look at. And if you kind of wade your way through this, you understand EM. You understand how these models work. And hopefully, it gives you confidence that you're not looking at just one setting. Phi, in Rd by d, is going to be a diagonal matrix. OK? All right, so let's see the model and then the worlds. I don't know. It's not really that remarkable a picture, but I enjoy it. All right, P of x and z-- so we do the usual thing here: it factors as a latent model. This is exactly the same as before, since z is our latent variable. OK? By the way, mu is in Rd. Lambda is a linear transformation that we're going to learn. Phi is a diagonal transformation, hearkening back to what we just talked about. And here is the way x is distributed. Start with z: z is going to be distributed N(0, I).
So it's just some random noise vector in Rs, for some s smaller than d. OK? I'm going to call this s because I want you to think about it as the small dimension. So the latent structure is small. We want to pick the latent structure to be much smaller. d is 1,000, 10,000, a million, a billion, something like this. We're going to pick s as saying there's a small subspace that characterizes and classifies all of its behavior. That's like the compression. That's the bottleneck. Then we're going to say x-- and I'll walk through this slowly in a second-- is mu plus lambda z plus epsilon. OK? All right, so the model that generates x-- and I'll write this more compactly-- x is not just centered at the origin. It has some mean. z is going to be mapped from the small dimension to the large dimension by a transformation that we're going to learn. So imagine the sampling procedure: z gets selected. If you knew lambda, you would map it up into Rd. You'd shift it by mu, the mean that you're going to learn. And then you're going to add some epsilon noise. OK. All right. Now, let's get the model for these things. So epsilon is going to be normal. It's noise, so it'll have 0 mean. And it will have phi as its covariance. This implies-- by the way, now we have all the information-- we could also write: x, given z, is distributed as N of mu plus lambda z, with covariance phi. Please. Oh, what is the physical meaning of s [INAUDIBLE]? Yeah, great question. So let me just say it. So I just want to annotate this to the mean. So I'm going to draw what goes on with this, for an example, in one second. That will answer this a little bit better than I'm able to right now, once you see what's going on. But the intuition is, you're going to sample in the small space. You don't know where it is. That's your latent variable. That's your link to clusters. That's the thing that you say, like, I don't know what it is, but I know there's a small structure lurking out there. Then lambda says, from that small space, I'm going to go into the big space, right, the much higher d-dimensional space. And lambda, I'm going to learn that transformation that says, how do I go from the small space to the big space? And then the big space is going to be the actual dimensions. So imagine I actually had these sensors, and they were getting temperature readings all over campus, and they were getting wind readings all over campus. Well, clearly, they're correlated, right? If I put a bunch of sensors all in a line, they're not independent readings, right? There's some sensor, and then there's some map that tells me, from the reading that I got, what should I expect at all the sensors with pretty high probability? You could imagine such a map existing. And so the lambda is that map that goes from the low dimension to the high dimension. We're just doing it in a very abstract way, and we're not specifying what those dimensions even mean, to start with. Please. Could this be [INAUDIBLE]? We probably don't want to set s greater than that, for a variety of reasons. But yeah, you'll see in a second what the conditions are when you go to solve it. Great question. There are some conditions lurking there. You need to be able to estimate. There's no free lunch, right? But yeah, it's going to be smaller than d. d is a billion, and n is 20. There exists some s where we could potentially solve this. That's the way to think about it, not that we can solve it for every s. Awesome.
All right, so let's see an example of this whole thing, where d is equal to 2-- so not that high a dimension, but good for drawing-- s is equal to 1, and n is equal to 5, OK? And our model is x equals mu plus lambda z plus epsilon. And let's see how the forward sampling works. So how does it work? One, we generate z1 to zn from N(0, 1). So what does this look like? All right, so here's 0. Let me mark down 0 and draw the density, actually. So the density looks something like this. This is a loosely interpreted artist's rendition of a Gaussian. Then what I'll do is I'll sample. So maybe I'll sample this point first. And I sample over here, sample here, sample here, sample here. Those are my five points. How did I pick them? I don't know. I sampled. So this is z1. This is z2. I'm not remembering the order that I did it, and it doesn't really matter-- z3, z5, z4. OK? So I just generated those in one dimension. All right, now two, what happens? So far, I've done this piece. This character is handling this. OK, now I map by lambda. So let's suppose we've already learned lambda. So lambda is equal to (1, 2). So what happens next? Now I get fancy. Take this, copy, paste. So now we need the x-axis back. So these things get mapped. Here's the line through (1, 2), which goes through the origin. OK? What happens? So then these are all mapped onto the line. Oops-- so small, but good enough. This one's mapped all the way up here, down here, and down here, OK? So this point, for example, is lambda z3. This point is lambda z2. So far so good? So with this whole thing, now we're in the high dimension. So we've gone from our small dimension up to the whole dimension of all our sensors. Think, in your head, dimension d. OK? Now three, we add mu. What does that do? I'll put that in green. Oop. That's this piece. Copy, paste. Mu, let's say, is this vector here. This is mu. We add mu to everything, and that translates all our points now by mu. And there's one off the screen over here somewhere, OK? This point is mu plus lambda z2. All right? It's this point translated by mu. Then four, we add epsilon noise. And this epsilon noise is full dimensional, so I'll make this in purple. There's a character here. This is the equation that we're dealing with. What happens next? We get some purple stuff, and it goes like this-- whoops, make that a little bit thicker. It goes like this. It goes down here. It goes over there. But it's full-dimensional noise. It lives in a d-dimensional space. And that is our final expression. So for example, this character here is mu plus lambda z2 plus epsilon 2. And epsilon 2 is the noise. And notice the noise is in a different direction for each one. All right? By the way, this is in the high-dimensional space, in dimension d. OK? So what's going on here? We were talking about that sensor and temperature example. So just walk through it. The z's here were generated in one dimension, let's say. That was the underlying true temperature; you were going to see several samples of it, maybe at different times of the day, right? Because z was collected at various different times for different data points, right? We were collecting it many times in the day.
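Here is a short sketch of that four-step forward-sampling story, with d = 2, s = 1, n = 5 as on the board; the particular values of lambda, mu, and the noise covariance are made up for illustration:

import numpy as np

rng = np.random.default_rng(3)
n, d, s = 5, 2, 1
lam = np.array([[1.0], [2.0]])       # d x s map: the line through (1, 2)
mu = np.array([4.0, 1.0])            # assumed mean, just for the picture
Phi = np.diag([0.3, 0.3])            # diagonal noise covariance

z = rng.normal(size=(n, s))                          # step 1: z ~ N(0, I_s)
eps = rng.multivariate_normal(np.zeros(d), Phi, n)   # step 4: full-dimensional noise
X = mu + z @ lam.T + eps                             # steps 2-4: x = mu + lambda z + epsilon
# X holds the purple dots: the only thing we actually observe.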
Then we mapped, using lambda, to say, like, oh, if the temperature, or some hidden temperature, was 50 degrees, then all of the other temperatures that are nearby are going to jiggle in a predictable amount-- predictable amount meaning lambda maps them from wherever their value was to the high-dimensional space, right? But the problem is, there's still some residual noise. That fit may not be perfect, right? We can't predict things. Noise just means stuff we don't want to model. And that's where this purple comes in. That's the epsilon jitter that's in the high dimension. Does that make sense? And that's our underlying model. And the data we see are the purple dots. So the data are the purple dots. That's what we actually see. OK? Just to make sure: it's x equals mu plus lambda z plus epsilon. All right, please? Oh, is it assuming that the-- actually, the-- so even though the sensors give us a big dimension d, which is really huge, are we assuming that the actual thing that we want to model of, say, the actual temperature, is of a dimension that is much smaller than that? That's exactly right-- so much smaller than d. So what we're assuming is we have-- again, I think the illustrative example, which will get you most of the way there, is I put 1,000 temperature sensors in a row. So I get 1,000 numbers every time I measure them. But if they're all close enough, it feels like there should be one temperature, and there's a mapping from them. That mapping we're going to learn. That's what lambda is. Lambda takes us from that one true temperature to our guess at all the different temperatures. But we're aware that that's imperfect. And since we're aware of that, we also allow ourselves some error, epsilon, in each one. Maybe one temperature sensor is damaged. It's dirty. Someone walked by it. Someone blew on it. Who knows? That's what epsilon models. Does that make sense? And so it's the small to big. And that's what allows us to learn this, because we're saying, although there's this huge number of free parameters, we only have a little amount of data. We have to make some compression. And so that's what the latent structure is doing. Please. I have two questions. So first is that [INAUDIBLE] is actually really [INAUDIBLE] the major assumption here, that this is something close to a linear form of the data, just having a bit of noise in it [INAUDIBLE]. Yeah, so here we're making an assumption-- that's right. We're making an assumption that there is a low-dimensional distribution and that there's a linear map up into the larger space. You could imagine more sophisticated models that had a learnable, nonlinear map into the larger space. And in various limited-data regimes that may be hard; you may not have enough data to learn that piece. The second question pertains to s. So there, s was equal to 1, and therefore it's easier to imagine what's actually happening. But what would be the case when s is [INAUDIBLE]? Yeah, wonderful question, and we'll get into this a little bit in PCA, too. And sometimes they're interpretable. Sometimes they're not. But when we come back to it, we'll have this language of principal components in about 15 minutes, which will maybe let me give a little bit more precise answer. So I may punt there for a second. The idea is that s captures the important directions of variation.
So if you imagine the temperature sensor-- so I have temperature, and I have electrical current, and maybe there's also some wind thing. And if I combined all three of those, I would get a really good estimate. But I need an estimate of the wind speed at a location and all the rest. Then s would be some mixture, potentially, of those true directions that are out there. Now, often, that story that we tell ourselves is difficult to verify, because we don't actually get to see the latent variable. And in fact, we can get away with making a much weaker assumption, which we'll make in PCA, which is basically that there exists some low-dimensional space that captures our system. And that turns out to be a fairly robust kind of assumption. It's not that we know there are only these three parameters. If we knew it was wind and temperature, and we were just learning the map, we would just feed that in, right? We would just measure that and feed that in. We're hypothesizing that there exists this small bottleneck. And that's what's going to allow us to learn in this setting. And to me, that's the really rich, pedagogical reason to teach this: it forces you to think about, what can you really recover? This is more of a statistics view of this part of machine learning. But what can you really recover from data? OK, so [INAUDIBLE] s, you've got to [INAUDIBLE]. Yeah, you're just saying there exists something small in your [INAUDIBLE]. Oh. You got it. Do you do this by-- do you create this transformation for every data point? Wonderful question. So here, lambda, this transformation, is shared across all the different data points. So if you look at this notation here, you'll see that z is generated once per data point. So there are n of those latent samples. There are also n latent noise samples. Those are different for every data point. But lambda is shared across all of them, as is mu, in the model. Now, if you knew something-- like, you knew there were k clusters, and there were k different maps-- you could potentially fold that into your model, or k different means. But this is the model we're looking at now. It doesn't necessarily need to be that way, but for now, that's where we're looking. I have one more question. So my understanding is that we start from z-- we generate z first. Correct. Then work our way to the data. But how do we estimate, like, s? Or do we start from [INAUDIBLE]. Oh, great question-- I get exactly what you're saying. So when we went up here, remember, in this model-- so one of the messages is we start with this model. And in the generative way of viewing things-- the real reason I like to keep this in the class in a very fundamental way is because it forces you to think about what is actually in the model. When you see the EM thing, it's in a similar situation, where you're like, oh, there's just this one hidden aspect to it. This z is latent to us. So part of the model is you pick s, because it's one of the parameters. So you have to tell me ahead of time, I want you to model with a size-s model. And then you run this whole thing forward. And then our goal is, among all those things that could have gone forward, what are the most likely settings of the parameters in that model? So we're not learning s, in some sense. We'll talk about, in PCA, when we're learning a linear subspace, how we can, effectively, look at a particular measure and see if adding one more dimension would help.
And it'll come with some caveats-- great, great questions. Awesome. All right. OK, so we need one technical tool to make this whole thing go. Let's do some technical tools. I'm not sure that these are super useful to prove here, but I will happily point them out to you as we use them. All right, so here, we're going to use this notation. You probably have seen this before. If not, don't worry. We'll review it right now. x1 is in Rd1. x2 is in Rd2. And d equals d1 plus d2. OK? So this is just a nice way of partitioning. This is because we're dealing with something that's linear, and we can have this kind of block structure. OK. We can also do this for matrices. And that will allow us to state some theorems just a little bit more concisely. We write sigma in blocks: sigma 1,1 and sigma 1,2 on top, and sigma 2,1 and sigma 2,2 on the bottom. This block of columns is d1 wide, this one is d2 wide, and the same for the rows. So sigma i,j is going to be di by dj. (Let me keep the numbering consistent with the notes; this doesn't really matter conceptually.) All right, so the point is, I can partition this in some way. And it makes sense to multiply a matrix in the obvious way if it's compatible with the blocks. So if I multiply this matrix here, x times sigma, I can write it in terms of these blocks. Hopefully, that's clear, right? All right, this is a very widely used notation. And it's going to let us state two facts about Gaussians that are really helpful. If you're not familiar with it, just take a look at it and see. So the first one is the marginalization identity for Gaussians: integrate out x2 from P(x1, x2), OK? For Gaussians, this has a nice form. This is not true in general. This is why we love Gaussians so much, really: it has this really nice form. P of x1 is going to be a normal. It's going to be distributed like a normal with mean mu 1 and covariance sigma 1,1, OK? This is called marginalization. It basically means I can grab the mean that I want to grab and the covariance I want to grab when I marginalize. OK. This is a much nastier statement, fact two, which we will use. P of x1 conditioned on x2 is also Gaussian-- pretty remarkable, if I'm going to be honest. Not a lot of distributions are closed in this way. What I mean by closed is: if I do an operation like conditioning, do I get back the same distribution? That's actually pretty unlikely, but for Gaussians, it happens. And so again, it makes some of our calculations easier. This is marginalization. This is conditioning. OK? And so what are these two values? The conditional mean is going to be mu 1 plus sigma 1,2 sigma 2,2 inverse times x2 minus mu 2, and the conditional covariance is sigma 1,1 minus sigma 1,2 sigma 2,2 inverse sigma 2,1. And I'll put up notes, and you can look and see the notes do this. This looks super mysterious. It's actually not, but it's probably not worth going into too much detail. If you remember your matrix inversion lemmas-- if you don't, again, don't worry. These are not super conceptual, important details. It's just how this works. You can go through, and I can gladly upload a proof for you, if that makes you feel more solidly grounded. If not, you can just use this. OK? And these formulas, when I say it's not conceptually important, you may think that's a cop out. Maybe it is.
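Written out cleanly, these are the two standard Gaussian identities being referenced, in the block notation just set up:

x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix},
\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
\right)

\text{Fact 1 (marginalization):}\quad x_1 \sim \mathcal{N}(\mu_1,\, \Sigma_{11})

\text{Fact 2 (conditioning):}\quad
x_1 \mid x_2 \sim \mathcal{N}\!\left(
\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\;
\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
\right)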
But really the reason is, it doesn't matter to me, because there's only one way these formulas make sense. The types all work out: you have d1's and d2's, and you're multiplying them in the right way. Otherwise, the formula is wrong, OK? So this is the matrix inversion lemma. If that helps you, great. I'm happy to prove it. The thing that I care that you take away is that we have these formulas, and we can use them later. So when we do a conditioning step, we can use them. OK? When we do a marginalization, we can use them. OK? And I realize this is impossible to appreciate at this moment, probably. But it's relatively rare that we can go from a distribution, condition on it, and get back a distribution-- that we have the same, what are called, sufficient statistics. But it happens here. Just don't expect that in general. Please. So I think through both of these facts, we are assuming, right, independence not just between different data samples, but also within the features of the data sample? So I wouldn't look at it through that lens. These are mathematical facts about Gaussian distributions. These have nothing to do with data, per se. This is just true about any kind of wild Gaussian that you would want to meet. And so you would use it that way. We are going to show, in one second, that that crazy model I just showed you can be written in a nice way, using these things. So how do we use it? So remember, we have two variables, and jointly, (z, x) is distributed N of (0, mu) with some covariance sigma, right? This is since the expected value of z is equal to 0, and the expected value of x equals mu. Now, we don't know sigma yet. We just say there's a covariance out there. We're going to derive what it is in one second. But the point is, that whole factor model basically boils down to some gigantic Gaussian. And that's going to be really helpful to us. The problem is, in this form, this part is going to be data, and this part is going to be hidden. And so we're going to have to deal with marginalization and conditioning and all the rest of it to be able to recover it. But the point is, we can write that model super compactly. Now, I think if I wrote that first, probably you wouldn't be super happy. Maybe you'd be really happy. I don't know. Maybe you're happy folks. [INAUDIBLE] Which one? This one [INAUDIBLE]. And again, you can type-check these things. Just make sure it makes sense. It's got to be maps of the right shapes for everything to make sense and all the types to check out, if there are typos, which I'm sure there are. OK. So now our job is to try and figure out, what is sigma? And by the way, this is kind of remarkable. We went through this pretty elaborate discussion about various different models, how the model worked. And lo and behold, it's just some nice Gaussian. So sigma 1,1 is the covariance of z with itself. Well, what's the outer product of z with itself? It's just the identity. Why is it the identity? Because that's the way we sampled. Remember, we sampled in the low space? This is using this fact, right? So the expected value of z z transpose, that is exactly I. OK. What is sigma 1,2? Well, it's going to be the expected value of z times x minus mu transpose. What does that actually equal? Well, x minus mu equals what? It equals lambda z plus epsilon, right? That's just the model. I just subtracted off the mu on both sides. So it's going to be the expected value of z z transpose lambda transpose, plus the expected value of z epsilon transpose.
Now, these two things are independent-- z and epsilon are independent-- so boom, this second term goes to 0. And this first one, we just concluded, was the identity. So this equals lambda transpose. And sigma 2,1-- I'll write it in the other color-- sigma 2,1 equals sigma 1,2 transpose, because it's a covariance matrix: it's symmetric. OK? The last one-- you can just compute it. Take my word for it. But you don't have to take my word for it-- is this one: the expected value of x minus mu times x minus mu transpose. We just use this formula again. That's going to be the expected value of lambda z plus epsilon, times lambda z plus epsilon transpose. We have some multiplications to do. We know that all the cross terms with epsilon are going to cancel out. So this is going to look like-- and this will maybe look mysterious, so maybe I'll go a little bit slower here than I wrote in my notes-- lambda z z transpose lambda transpose, plus epsilon epsilon transpose. OK, so why? Notice you're going to have an epsilon times this term, and the transpose. But the expected value of epsilon is 0, so those cross terms are going to cancel. The only ones that are going to remain are this one and this one, where the epsilons are multiplied by themselves. So both cross terms fall away. What does this equal? Well, we just saw this: the first term is going to be lambda lambda transpose, and the second is going to be plus the phi. Why did that happen? This phi is the model-- sorry for the scrolling, so please avert your eyes if it makes you nauseous-- is this character. Sound good? So let's write the whole model in summary. And again, you can check the types on this thing. Make sure I didn't make some silly mistake. OK? And we're good. So now it's in this nice Gaussian form. And oh my gosh, I claim you already know how to solve this. E-step: what is Qi of z? I won't go through the whole EM algorithm; Qi of z, you remember, is P of z given xi and theta. What do we use here? We only have two rules. Conditional. Yeah, use the conditional. That's all we've got. Use the conditional in the E-step. Well, it's a normal distribution, where we fill in the x's and condition on them to get the distribution over the z's. We have closed forms. You already know how to do this. OK? So what is the summary here? We saw that this factor analysis structure that allowed us to have a lower-dimensional space, with this elaborate sampling and moving around-- a pretty wild set of generative models-- basically boiled down to a one-liner to shoehorn into the EM algorithm. We had to use some fancy tricks with how you deal with normal distributions. If you didn't use those fancy tricks for normal distributions, you would have to do some extra work, right? You'd have to understand, if I had a distribution that looked like blah, and I wanted to marginalize or condition it, what would I do on top? Yeah, please? So I'm not sure I really understand the closed forms for the next step. Oh, because once we have this-- so think about what happens once we have the estimates for the z. Then it's just conditioning on the z's and removing them. And we know how to estimate the parameters of-- this becomes just a standard. Oop, where's the formula? Sorry for the mad scrolling; the crazy scrolling is here. Once you know z-- you have a guess of z-- then this is just fitting a Gaussian with an unknown mu and sigma. And you need to figure out what the sigma is that you're fitting.
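Since the whole derivation boils down to that block covariance, a quick Monte Carlo sketch can sanity-check it; all the parameter values below are made up for the check:

import numpy as np

rng = np.random.default_rng(4)
d, s, n = 3, 2, 200_000
lam = rng.normal(size=(d, s))
mu = rng.normal(size=d)
Phi = np.diag(rng.uniform(0.1, 1.0, size=d))

z = rng.normal(size=(n, s))
eps = rng.multivariate_normal(np.zeros(d), Phi, n)
x = mu + z @ lam.T + eps

joint = np.hstack([z, x - mu])               # stack (z, x - mu) per sample
empirical = joint.T @ joint / n              # empirical covariance of the joint vector
theory = np.block([[np.eye(s), lam.T],
                   [lam,       lam @ lam.T + Phi]])
print(np.abs(empirical - theory).max())      # small, and shrinking as n grows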
But you know how to do that. So I guess this is something we've done maybe k lectures in a row. And I agree with you. It is abstract and weird that that falls out. And I want to say it in that abstract and weird way so that you're not afraid of it. Because if you want to go into these things, you want to use these things, it'll look very strange. But once you get it in that form, it's just a plug-and-chug kind of mentality to get all the results from derivatives you already know how to compute. Cool. Oop, go ahead. Why do [INAUDIBLE] sigma [INAUDIBLE] as expectations [INAUDIBLE]? Oh, so that's just the definition of covariance. So because I know the distributions, I want to compute what sigma must be. And I'm just trying to tell you, the reason I wrote it that way is to tell you there's nothing else going on. In the model, I'm just computing the expectations. I know the covariance of the individual objects, and that is telling me that I can write it in this form. Cool. All right, let's use it, and we should be in good shape. All right. PCA-- all right, so PCA. I'm going to draw this little diagram to explain where we are: principal component analysis. So we looked-- I want to fill out this table. You say, why do you want to fill out the table? I don't know. It's just how I do it. I like that it fits. Also, you should know it. It's an important algorithm. It'd be weird if we didn't teach it. Although probably many of you have seen it before. If not, don't worry about it. So one of the things that's inside machine learning, which is really interesting historically-- I won't bore you too much with the various tribes of machine-learning theory, and how we got where we got, and who fought with whom at which conference, although it's kind of funny because it's so bizarre. But there are two schools of thought of doing machine learning, at kind of a broad level. There's the Bayesian school of thought, which is: everything should be written down as a probability distribution, maximum likelihood. We write these priors. We use conjugate priors. We crank through the model, and we have this forward story. And then there was another camp that's more the frequentist-style world, just from the stats side, that was like, we don't want to use probabilities. We just want to have these nice estimates. They also like maximum likelihood. They also like Bayes' rule, but they differ in some ways in how you approach modeling and what a valid model is, how you know to trust the model. OK? It doesn't matter at all. The point is, almost everything in machine learning has these two different approaches. And I'll say, at one point early in my career, I built a system which was very much in the probabilistic camp and then became disillusioned with it and moved to doing something else. But there are merits to both approaches. PCA is what we're going to talk about. So that's just historical context. I just want you to know there are many ways to model a lot of these different problems. And we're going to talk about this one here. And I want you to explicitly contrast it with factor analysis, which is the probabilistic version, kind of, of PCA. In the same way, k-means and GMM have a parallel structure. You saw k-means. We did GMM. They had a nice kind of rhythm of how they were solving the underlying problem, so it was nice to put them together. These don't have that nice rhythm, but they have something else that is nice to see. All right, so let's see PCA. Please. [INAUDIBLE] Thank you.
Yep. OK, so we're going to be given pairs here. And this is a weird example, but it's something that makes, hopefully, the main point clear. OK? So imagine someone gave us a data set, and this data set had a bunch of cars on it, and the cars are going to be rated on highway miles per gallon and city miles per gallon. OK? And these are gas cars. These aren't electric cars. So they're scattered all over this. And I actually went and got the real data. It's not that interesting, but there are a couple that are substantially better or substantially worse than the line. OK? Now, imagine these clusters up here. Just to complete the story, these are maybe hybrids. Maybe these are SUVs or trucks, or something-- they have bad gas mileage. And these are our economy cars. OK? Is it clear what the visualization is saying? OK. So we ask a question, which at first seems strange, but it's a question we could ask nonetheless. What does it mean for a car to have a good miles per gallon? Now, some cars are going to be better in the city. Some cars are going to be better on the highway. Can we come up with a single direction, a single kind of axis, that says, what is the component of principal variation for good miles per gallon? And that's what we're trying to ask. And by the way, PCA is typically employed in these settings, when we don't know too much about our data and we want to visualize it in some way or get some understanding. Right? Modern methods incorporate a lot of its ideas, so you don't usually need to run it as a standalone ahead of time anymore. They worry about these features and things, but it focuses on one thing I really think is great. So how do we deal with this? All right? So the first thing we do is we take our data, in the PCA worldview, and we center it. OK? OK. Now, what does it mean to center? We compute mu, which is equal to 1 over n times the sum of the xi's. And then we subtract mu from every point here to get the data set centered at 0. It should be balanced in all ways. This is just to make sure that the scale, the raw numbers, don't matter. Because we're going to talk about geometry. And also, we're going to use some ideas from linear algebra. And linear algebra likes things that go through the origin. We like linear maps. Linear maps go through the origin. So the origin has a special role in linear algebra. This is still highway. This is city. OK? Now, if I look, and I just eyeball it, roughly speaking, it feels like the component of variation is something like this. Right? And what I mean is, if I look along that direction, that tells me: the more I go along that, the better the mileage is, or some variation that's around that. Above, that's good miles per gallon, and below, as you go, those are slightly worse. So along this line is kind of the principal direction of variation. It doesn't capture everything. There's this character way over here. There's some character way over here. But there's an automated way of kind of denoising that as I look at data sets and other things. And so what we'll talk about is this direction u1-- let it be a unit vector-- as a component of principal variation. This is very intuitive. I'll define it formally in a second. OK? And if I look, I can still describe all my points-- this is just linear algebra-- by something that's orthogonal, and I'll call it u2. OK? Now, I'll construct u1 and u2, because I only care about their directions, to be unit vectors. OK? And so you can think about u1 as: how good is the miles per gallon?
And u1 is explaining the first variance, right? If you thought about it probabilistically, it would be the first direction that they vary, or covary. And then u2 is that second-order effect. You can imagine doing this in higher dimensions. All right. Formally, we can write xi equals alpha i1 times u1 plus alpha i2 times u2. And these are scalars. This is just linear algebra. Please go ahead. You had a question. So [INAUDIBLE] principal? Yeah, we're just eyeballing it for now. I'm saying that it looks roughly correct. We're going to find out how we find that line in just two minutes. So how can we know that [INAUDIBLE]? So once we write the definition, let's come back. And then hopefully, your intuition or my intuition will be a little bit closer, right? But yeah, that's a great question. You're thinking exactly the right way. Why is that the right line? Why is this maniac drawing that line? Right? Now, the thing is, we may decide, by the way, to use this for dimensionality reduction. We may only keep this component. OK? Because it explains more variation. And that's going to be how we define it, the variation. Oop. OK. So what we want to do here, just so you're clear-- so here, I drew it from 2 to 1, just as I was drawing before. But think about if I had thousands of dimensions. You can't possibly visualize thousands of dimensions. It's really tough. And I want to get it down to, say, 2 or 3 or 5 or 10. Right? I want to find those principal variances, OK? And so PCA also functions as a dimensionality reduction method. So here, we show just a two-dimensional plot. But imagine the cars have thousands of numbers, and you want to get a very succinct description of them as just five numbers. Or I gave you 10,000 sensors, as we were talking about before, and you wanted to figure out: give me three numbers which really describe every individual sensor and its variation from that number. It's very similar to factor analysis, right? It's looking for that low-dimensional subspace that you want. In fact, it is formally a low-dimensional subspace. It is a subspace of dimension 2 or 3 or k underneath the covers. And we'll come to that in one second. OK? So at this point, this is intuition. You should have more questions than answers. Let me state the algorithm that we're going to do, the preprocessing that we're going to do. And then you'll see what some of these variation things are. Please go ahead. Well, I think what you are ultimately proposing is that, yes, xi has some number of principal components, but certainly most of them are noise except for the initial two-- is that what you are saying? Great question. So imagine I gave you these x1 to xn that live in some high-dimensional space d. Then what I'm going to try and do is find a smaller number of directions. But those directions are not necessarily aligned with the axes, which means they're not component 5, 7, or 9. They're some mixture of all of them. And so u is not aligned with an axis, right? It's some mixture of the highway direction and the city direction. And I'm going to find that direction, and then I'm going to say that's the direction that the data set varies most in, there's the most spread in. So I want to keep where you are on that line. I'll project you onto that line, then maybe the second, the third. And those are going to be the three numbers that best capture you in the data set, if you're an individual data point. And [INAUDIBLE] noise? Not noise, but I'm just not modeling them, yeah. Please.
Please. I guess what I've been [INAUDIBLE] quadratic relationship on [INAUDIBLE] instead of [INAUDIBLE] relationship shown there? Yeah. How does it capture the direction? Wonderful question-- we'll come back to that in a little bit. Effectively, what we're going to do is capture some variation that's there. So there may actually be some hidden correlations underneath the covers, and we won't model them well, or we may only model them well in different pieces of the data. There are methods that can capture the case where I know there's a nonlinear relationship between a lot of the different elements of the data. Here, we're assuming there exists a linear subspace, and that's a fairly robust assumption. People do use more advanced techniques, like t-SNE and UMAP, which actually assume that there's some very compressible nonlinear map that explains what's going on. But PCA is kind of the first and the cleanest to describe. Awesome question. OK, so we center the data. By the way, I'm going to review this in one second; I'm just trying to write so we can get through this. Mu, we know what it is. We may also need to re-scale the components. Why is this? Well, what if one component was miles per gallon, and another was feet per gallon? They're just different by a factor of about 5,000. There would be a lot of variation in the feet per gallon, not the miles per gallon; the raw numbers would just be different. So not only do we want to center them around 0, we want to re-scale them so that they have the same width. And what we'll do is divide each coordinate by its sample standard deviation, so that all the numbers are on the same scale-- roughly normally distributed, you should think. So we're going to assume that the data are preprocessed this way. If you actually use this method, you should do the preprocessing. So let's see PCA as an optimization problem. All right-- oops, monkey business; I'll live with it. OK, here's what we're trying to solve. And this will get back to some of the questions that were asked: hey, why do we think this is a principal direction? We need a little bit of math before we can make that precise. So what we have is some direction u1, and some other direction u2, which is orthogonal. These are unit vectors: the norm of each u i equals 1. And they're orthogonal: u i dot u j equals 0-- or sorry, equals delta ij. So how do you find the closest point on this line to x? Well, this line, just to be clear, can be parameterized as t times u1 for t an element of R. It's just a one-dimensional line. And so how do we find alpha, the coordinate of the closest point? It's the following expression: alpha 1 equals the argmin over alpha of the norm of x minus alpha u1, squared. Hopefully, this makes sense. If I want to find the closest point, intuitively the error should be orthogonal to the line-- it's not a very great drawing, but that's where it should be. x should be projected onto this line. And we can minimize over the whole line by minimizing over alpha. We do the standard thing here: we multiply it out and compute derivatives. So when we multiply it out, this is equivalent to the argmin over alpha of the norm of x squared, plus alpha squared times the norm of u1 squared, minus 2 alpha times u1 dot x. The first term doesn't change-- it's a function of the input, not of alpha-- and the norm of u1 squared is 1.
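Written out, the projection problem and its expansion from the board are (a LaTeX reconstruction of the derivation being set up here):

```latex
\alpha_1 = \arg\min_{\alpha}\ \|x - \alpha u_1\|^2
         = \arg\min_{\alpha}\ \underbrace{\|x\|^2}_{\text{const.\ in }\alpha}
           + \alpha^2 \underbrace{\|u_1\|^2}_{=1}
           - 2\alpha\,(u_1 \cdot x)
```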
So when we go to compute the derivative, what do we get? We get 2 alpha minus 2 times u1 dot x. This is the gradient with respect to alpha; setting it to 0, we get, said more simply, alpha equals u1 dot x. This is saying the dot product with the direction tells us exactly the closest point. This is just differentiating the expression above with respect to alpha. OK? All right, now you can generalize this to any set of orthogonal directions. So if I have u1 to uk, elements of Rd, and x an element of Rd, how do I find the closest point in the subspace spanned by the u i's? Here, we're going to use the fact that u i dot u j equals delta ij. Well, it's going to be the following: the argmin over alpha 1 through alpha k of the norm of x minus the sum from j equals 1 to k of alpha j u j, squared. Now expand that out again, exactly as we did before, and you'll see that alpha i is equal to u i dotted into x. This is the inner product, if you like. And we call the quantity x minus the sum of the alpha j u j-- its norm, squared-- the residual. OK, so we have everything we need-- rushing a little bit here. But the point I want to make is: if you have a subspace, and you want to find the closest point in that subspace to x, that's the same as minimizing this distance squared over all the possible coordinates in the subspace. Distance squared makes the computation easier. Then alpha i, if you solve it, is going to break apart. And the reason it's going to break apart is, when you multiply it out like this, you're going to get products of u i and u j, and the cross terms will all go away. So all you'll be left with is a bunch of alpha i's and alpha j's combined in exactly the same way as in the 1-D case. And you should check that. If you're rusty on this, this is from your linear algebra class, and I'm super happy to write it out in more detail if that confuses you. But for now, hopefully, we can just move through this. OK. This thing here is the residual. It says, how close am I? What's my distance from x to the subspace? And that's going to feature prominently in what we do next. Please. So this subspace is [INAUDIBLE] approximates this whole [INAUDIBLE]? So what we're going to try to do is find a low-dimensional subspace, and I'll describe how we're going to do it next. And in PCA, when we're asking, what's the principal component of variation, we're going to search across all the directions. And we're going to say which one of them-- there are two equivalent ways we can define PCA-- has the smallest residual, or, equivalently, maximizes the amount alpha we projected onto the subspace. OK? So there are two ways you can do this. We can find PCA by, one, maximizing the projected amount. That means trying to make the alphas as large as possible. So if you give me a subspace, which is a collection of vectors, I can compute the alphas by solving this problem. I want to find directions so that, for almost all the data points, my alphas are as large as possible. So getting back to the earlier question about why I said that direction was the principal component of variation: it's because if I project in that direction, I'm explaining the most, and the residuals, the errors, are small. Dually, we can ask that these residuals for every point be as small as possible. And we're going to minimize those in one second.
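Here's a small numerical sketch of that claim-- projecting onto an orthonormal set via dot products and checking that the residual is orthogonal to the subspace; the directions themselves are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build k = 2 orthonormal directions in R^5 via QR (a hypothetical subspace).
U, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # columns are u_1, u_2

x = rng.normal(size=5)

alphas = U.T @ x                 # alpha_i = u_i . x, the closest-point coords
x_proj = U @ alphas              # sum_j alpha_j u_j
residual = x - x_proj

# The residual is orthogonal to every direction in the subspace.
print(np.allclose(U.T @ residual, 0))  # True
```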
Yeah, go ahead. Couldn't we just normalize the [INAUDIBLE] the data as the preprocessing [? stuff? ?] We did. So wouldn't that make the residuals, essentially, the same [INAUDIBLE]? Awesome question. No, not necessarily. Because, for example, think about it: even if we normalized it, there may be one direction that explains, say, the temperature-- how far it is away from the heat source, back to my 1,000-temperature-sensor example. And there's one direction that explains that. But there still could be variations in other directions that are not captured by any of the process, and so they would be orthogonal to that. So even though all the directions, on average, are normalized, it's not necessary that for any particular point, if there's a correlation, the residuals would be the same. OK, so we actually normalize all the axes to the same scale? So all the axes are put onto the same scale by taking each individual component, computing its sample standard deviation, and multiplying by a diagonal matrix. Please. When you're taking alpha-- how do we take the [INAUDIBLE] just is between the line and [INAUDIBLE]?? Or is it [INAUDIBLE]? Great question. So here, alpha is basically t: it tells you how far along the line you're going to go. So alpha is constrained to live on the line. So when we minimize over alpha, we're minimizing this coordinate along the line. And that's well-defined, right? There's a closest point here. We know that intuitively from undergrad, from whatever geometry we took, that there's a closest point. But this proves it. Awesome. Cool. Please. [INAUDIBLE]? Yeah, so it's either you want to capture as much of your data as possible or minimize the residual, and in the linear case those turn out to be equivalent. Because here, if you look at it, because these are unit vectors, making the alpha j's bigger-- if I had two sets of u's [INAUDIBLE] and I'm comparing them, if the alphas are bigger for this one, on average, than for that one, then that says they capture more of my data, because they're a projection. The alphas are defined by the u's. Yeah. All right. So let's find the principal direction of variation. So what this boils down to-- I'll just do one of the two methods, the maximization one. You have to do the other one in your homework, by the way, so that's one reason to pay attention. It says: look over all the unit directions u that are out there, dot them into the data set-- this is alpha, the alpha we were just talking about-- and find the one that captures the most of it: maximize, over unit vectors u, the sum over the data points of (u dot xi) squared. So if you imagine me sweeping that line across the earlier data set-- looking here, here, sweeping that line here, here, here, all through the origin-- clearly this direction captures much more. The alphas are much larger than they are here; just take the dot products. Because the bad direction is projecting onto a relatively narrow band; it doesn't capture as much spread. I want more spread, and it's quadratically more spread. OK. So how do we solve this problem? Well, we need a couple of facts. Maybe we will pick this up in a little bit. Yeah, I think that's probably best. So ask me any questions about, in particular, this spot. We'll come back to how to solve this kind of equation in a second. Any other questions about this piece?
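As a sanity check on the two equivalent formulations, here is a sketch on synthetic data; it uses the fact, previewed at the end of this lecture, that the maximizer is the top eigenvector of the data's (uncentered) covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)                      # centered data, rows x_i

# Principal direction: top eigenvector of X^T X.
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
u_star = eigvecs[:, -1]                     # unit vector, largest eigenvalue

def captured(u):      # sum_i (u . x_i)^2, the projected amount
    return np.sum((X @ u) ** 2)

def residual(u):      # sum_i ||x_i - (u . x_i) u||^2
    return np.sum((X - np.outer(X @ u, u)) ** 2)

# For any unit u: captured(u) + residual(u) = sum_i ||x_i||^2,
# so maximizing one is the same as minimizing the other.
u_rand = rng.normal(size=2)
u_rand /= np.linalg.norm(u_rand)
total = np.sum(X ** 2)
print(np.isclose(captured(u_star) + residual(u_star), total))  # True
print(captured(u_star) >= captured(u_rand))                    # True
```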
Yeah? Will you go back to the first line? So essentially, this point [INAUDIBLE].. So what you described, maximizing [INAUDIBLE] subspace, you're just trying to ensure that whatever [INAUDIBLE].. Yeah. So maximum [INAUDIBLE] data points? [INAUDIBLE]-- Exactly right. [INAUDIBLE]? They capture the most of the data. So think back to our earlier example-- this is important. We had u1 and u2. That was our basis; that's formally what it's called. We want to capture the data. And it turns out that the squares of alpha 1 and alpha 2 sum to the squared norm of xi: their norms are the same. That's just a mathematical fact, because this is a basis. So the point is, we want to capture-- of these two, we're going to throw away alpha 2 if we're only allowed to keep one component. So we want the u that we select to be the one that, for most of the data, captures the relevant information. If we chose something like this direction, mostly orthogonal to the line that the data lives on, then almost everybody is going to have a projection that's really small: the data, when I project it onto that line, is going to be clustered really, really tightly together. Whereas if I project onto this line, it's still going to be nice and spread out. And mathematically, this corresponds to either making sure that the alphas overall are as close to the x's as possible, so they're large on average, or, equivalently, that the amount that I lose-- the residual, the amount lost in the other directions-- is small. For PCA in Euclidean space, these are equivalent. Not relevant for this class, but there are other kinds of geometries where that's not true; it happens to be true for Euclidean geometry. If you decide to take non-Euclidean geometry, it's a great course. What is this superscript [INAUDIBLE]?? Oh, xi. Let me just write it again. Nice catch. I's don't show up very well with this pen in my handwriting, which is criminal, since we use them so frequently. Just assume it's an i. But that's all that goes on. So think about what this equation, this optimization, is saying. It says: pick, over all the u's that are there-- we're normalizing; we'll come back to why we normalize, it doesn't really matter too much-- the one that explains as much of the data as we can. OK? Awesome. So next time, we will cover a little bit of eigenvalues and how we solve this-- this is an eigenvalue problem. And we'll cover the Cocktail Party Problem as well. And you saw a little bit of PCA. Thanks so much for your time and attention. Talk soon. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Bias_Variance_Regularization_I_2022_I_Lecture_10.txt | So I think I'll just spend five minutes briefly reviewing backpropagation from last time. I was running behind last time, so I didn't have time to explain this figure, which I think would be useful as a high-level summary of what's happening. I'm going to omit all the details. So I guess I'm drawing it this way: this is the forward path. This is how you define the network and the loss function. So you start with some example x, and then you have a matrix-vector multiplication-- or matrix multiplication if you have multiple examples, but here it's a matrix-vector multiplication module-- and you multiply x with W and add b. Then you get the pre-activation, and from that you get some post-activation, and you take another matrix-vector multiplication, and you get the next activation. And then you do this again. This is how you define the model and the loss-- I guess the output of the model; I think last time we used tau for the output of the model-- and then you have something that defines the loss. This is the so-called forward path. And in some sense, you can summarize backpropagation as follows. Of course, what you really do is implement this in a computer. But if you draw it, in some sense you are traversing this in a backward way. If you look at the data flow of the backprop process: you first compute the derivative of the loss with respect to the output. And this is often very easy, because the loss is something like one half times (y minus tau) squared, and the derivative is just a very simple formula. And then you compute the derivative of the loss with respect to-- here, I only have three layers-- so then you compute the derivative of the loss with respect to z2, and then the derivative of the loss with respect to a1, and so on. This is the order of the computation: you are accessing the network in a backward fashion, in some sense. But how do you do each of these arrows? That's by the lemmas that we discussed. I think we had three lemmas, three abstractions, and each of these arrows uses one of those three lemmas. And now you can see what those lemmas are for. Those lemmas basically say: if you know dJ/d-tau, how do you compute dJ/da? And there's another lemma, which says: if you know how to compute dJ/da, how do you compute dJ/dz? All of those lemmas are about this kind of relationship. If you know how to compute the derivative with respect to the output of some module-- suppose this is a module, and tau is the output of this module-- so if you know how to compute dJ with respect to the output of the module, then you want to know how to compute the derivative with respect to the input of the module. All three of the lemmas are doing this. I'm not going into the details, because we don't have enough time to review again, but that's the basic idea. And also, there's another thing: this is only about the derivatives with respect to activations. You can also compute the derivatives with respect to the weights, right?
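Putting the whole picture in code-- a minimal sketch of the forward path and the backward ordering for a two-layer net with square loss. The shapes, the ReLU choice, and the names are illustrative assumptions, not the lecture's exact notation:

```python
import numpy as np

def forward_backward(x, y, W1, b1, W2, b2):
    # Forward path: matrix-vector multiply -> nonlinearity -> output -> loss.
    z1 = W1 @ x + b1            # pre-activation
    a1 = np.maximum(z1, 0.0)    # post-activation (ReLU, assumed here)
    tau = W2 @ a1 + b2          # model output
    J = 0.5 * np.sum((y - tau) ** 2)

    # Backward path, in the order drawn on the board.
    dJ_dtau = tau - y                     # dJ/d(output): the easy first step
    dJ_da1 = W2.T @ dJ_dtau               # output-of-module -> input-of-module
    dJ_dz1 = dJ_da1 * (z1 > 0)            # through the nonlinearity

    # Weight gradients branch off the activation gradients at each layer.
    dJ_dW2 = np.outer(dJ_dtau, a1)
    dJ_db2 = dJ_dtau
    dJ_dW1 = np.outer(dJ_dz1, x)
    dJ_db1 = dJ_dz1
    return J, (dJ_dW1, dJ_db1, dJ_dW2, dJ_db2)
```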
So if you know this quantity, then you know how to compute the derivative with respect to the last layer's weights. And if you know this quantity, then from it you know how to compute the derivative with respect to W2. And from this quantity, you know how to compute the derivative with respect to W1. And the same thing for the b's. And this last row, the weight gradients-- those quantities don't depend on each other: after you get this activation gradient, you can compute this weight gradient, and after you've got that quantity, you can come up with these two quantities. But this row, the derivatives with respect to activations, you can only compute sequentially. You cannot compute this one before you do that one. So these arrows are the orders of the dependencies between these quantities. And each of these arrows is basically done by one of the lemmas that we discussed last time. Any questions? This is just an extension of the last five minutes of the previous lecture; I didn't have enough time to elaborate on this. OK. So good. So now, in this lecture and the lecture afterwards, we are talking about a few concepts. One concept is called generalization, which is the main point of this lecture. And next lecture, we're going to talk about the concept of regularization. Next lecture we'll also talk about some of the practical viewpoint of ML: how you iterate on your model, what you have to do in this whole process. You start with the data, and then you have to tune the model, and then maybe you have to go back and change your data, and so on and so forth. So basically, in these two lectures, we're going to discuss these concepts, and generalization is probably the main thing we're talking about here. Generalization, as you can guess, is really just about how well your model performs on unseen test examples. So we're going to discuss how to make sure your model can also generalize to unseen test examples. So far, we've only talked about training: we have some examples, which we have seen-- they are the training data set-- and we fit some model on them. Now, what we care about is whether this model will work for future unseen examples. And we're going to discuss a bunch of concepts: the bias-variance trade-off, which is a kind of principle for thinking about how test error changes as you change model complexity. And we're going to talk about some of the new phenomena people have found in deep learning, which are a little bit different from the classical understanding. OK. So that's just a very high-level overview. I used a lot of buzzwords; I'm not expecting everyone to follow everything. So let me be concrete. Let me start with some basic notations and notions. One is the so-called training loss, which you probably already know. Training loss, or sometimes it's called training error, sometimes training cost-- I think in this lecture we sometimes use the word cost. They all mean the same type of concept. Sometimes, people use loss to refer to certain kinds of losses and error to refer to certain other types, but for the purposes of this lecture, they all mean the same thing. This is what you care about in training. For example, if you care about the square loss, then the training loss would just be this.
I think we have written down this equation a lot of times. This is the loss function you care about when you have square loss. Another loss could be the cross-entropy loss. It could also come from MLE, the maximum likelihood estimator-- that's actually one principle for deriving the training loss: you derive the maximum likelihood estimator for the data set, and you use that as your training loss. You use the negative log-likelihood as the training loss. So this is basically what we have focused on in the last few weeks: how do you get a training loss, and how do you implement and optimize this? There are many ways to optimize it. For example, in one of the lectures, we used analytical formulas: for GDA, we analytically computed the minimizer of the negative log-likelihood. In other lectures, we used numerical algorithms to minimize the loss. For example, in deep learning, we use stochastic gradient descent. And we have talked about Newton's method, and so on and so forth. But so far, everything we have talked about is this loss function, and we try to find its minimizer. Not always exactly this one, but it's always a loss function defined on the training examples. OK. So now, suppose we have obtained some parameter theta. How do you evaluate whether your theta is good or not? Ideally, you want the model to perform well not only on the training data, because for the training data, you already know the labels. Why would you care about having the model predict something you already know? What you really care about is evaluating on unseen examples. That's why the test loss is defined on unseen examples. And I'm going to use this notion: the process is that you draw some new example, x comma y, from some distribution D. Often, this is called the test distribution. And then you evaluate the expected loss on this new test example. So you look at L of theta, which is the expected loss, where the expectation is over the randomness of this new example drawn from the test distribution. What's important is that this x and y are not seen in training. It's a fresh new example. And of course, here, I'm defining it as an expectation, so in principle I'm taking an average over the entire distribution. If you really want to do it empirically, what it means is that you draw a bunch of examples-- maybe let's call them x test 1, and so on, and you draw maybe n of these. These are not the examples you used for training; these are new examples you draw at test time, iid from this distribution D. And then you evaluate the average error, the average loss, on this test set. Because if you evaluate on this test set, you are pretty much just approximating this expectation; you are using an empirical estimate of the expected value. To estimate the expectation of any random variable, one way to do it is to draw multiple copies from the same distribution and take the empirical average. That's why the test set gives a reasonable estimate of the test error. And just to be clear, these test examples-- you haven't seen them in the training set. They are something you draw.
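A sketch of the two quantities being contrasted, for square loss; `h` below is a hypothetical stand-in for whatever model was fit, and the averaging convention (1/n versus a sum with 1/2) is an assumption, not the lecture's exact formula:

```python
import numpy as np

def h(theta, x):
    return theta[0] + theta[1] * x           # e.g., a linear model

def training_loss(theta, X_train, y_train):
    # J(theta): average squared error on the examples seen during training.
    return np.mean((h(theta, X_train) - y_train) ** 2)

def estimated_test_loss(theta, X_test, y_test):
    # Empirical estimate of L(theta) = E_{(x,y)~D}[(h(x) - y)^2],
    # using fresh draws from D that were never used for training.
    return np.mean((h(theta, X_test) - y_test) ** 2)
```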
You can draw them in advance, but you cannot let them be seen in the training process. And there is a notion called the generalization gap. This is basically the difference between the test loss and the training loss. And oftentimes-- it's not always true, but oftentimes-- the training loss is less than the test loss. When you test, you find that your model is not as good as it looked on the training set. Sometimes it's a little worse, sometimes a lot worse, sometimes very similar. But generally, you shouldn't expect your test performance to be dramatically better than the training performance. In extreme cases, you can design a setting where that happens, but in realistic practical situations, you shouldn't expect that at all. So it's often the case that this gap is either very close to 0-- maybe slightly negative, slightly positive-- or much bigger than 0. You want this gap to be as small as possible. So basically, you care about two quantities: the training loss and the gap. You want both of them to be small. If both of them are small, then their sum is small, and that's why your test loss is small. That's the hope. The training loss is something you can control, in some sense; it's what you optimize for. But the gap is harder to control, because you cannot say, I'm going to find a theta such that L of theta is small. If you tried to do that empirically-- optimize theta so that the test loss is small-- then you would have to see the test data set. That's why you cannot really easily control this: you are not allowed to fit to the test set. You cannot choose your theta based on the test loss. You can only choose theta first and then evaluate the test loss, not vice versa. So the generalization gap is something that is very hard to control-- at least, you cannot directly control it. And the point of this lecture is to discuss in what cases you can somewhat know it is not too big. When can you hope that this gap is not too big? OK. Before going into more details, let me also define two commonly used pieces of terminology. We are mostly concerned about the case when L of theta is too big. If L of theta is small, that's great-- you don't have to worry about anything. When L of theta is big, the question is, what do we do to change it? If you observe that your test loss is very big, what can you do to make it smaller? That's the question we want to study. And typically, when L of theta is big, there are two failure modes, in some sense. These are not meant to be comprehensive, but typically you are in one of these two failure modes. One of the failure modes is called overfitting. I'm going to discuss a lot about overfitting, but the first-order bit is that the typical situation of overfitting is that the training loss, J, is small, but the test loss is big. So you have this big generalization gap-- a discrepancy between training and test.
At least, that's not the definition of overfitting, but it's a very typical characteristic of it. So for example-- I'll probably draw this figure very often in this lecture-- suppose you have some x and some y, and you have a data set. The example I'm going to use is a data set that lives very close to a quadratic function: the data are approximately quadratic in x. It's a one-dimensional problem: given x, you want to predict y. And you observe a data set of, say, four points: each point is like this, maybe this, something like this, maybe something like this. You see these four blue points, and you want to fit a line or some curve to them. The question is, what curve are you going to fit? Suppose you fit something crazy like this. Let me see what color I'm using for this-- sorry, one moment, let me think about how to use the colors consistently. I'm going to use black for the model you fit. So suppose the model you fit is something like this-- I'm drawing something crazy. I intend to make this model pass through these points exactly. So this model fits the four training data points perfectly: J of theta is really small, close to 0. But you can imagine this model shouldn't generalize to new examples. If you generate some new examples-- and you believe the new examples also have roughly a quadratic relationship-- you generate something like this, maybe somewhere here, maybe some here. Then you can see that the fit to the new points becomes much worse. The test loss is very big. So this is a typical situation of overfitting. In some sense, you fit the data very well, but you are overfitting in the sense that you only focused on the training data and kind of forgot about the test performance. I will discuss why this happens-- you can probably guess-- but so far, I'm just defining roughly what overfitting means: you fit the training data, but you don't generalize. The other notion is called underfitting. Underfitting basically means that you fit something like this-- suppose this line is another model you fit. Underfitting means that J of theta is also big: your model doesn't even do well on the training set. As the word suggests, you are not fitting the data. And whether you are in the overfitting regime, the underfitting regime, or a nicer regime depends a lot on different things. One decision we're discussing today is what the right model complexity is. What are we going to use-- a linear model, maybe a quadratic, or a fifth-degree polynomial, or a neural network, and so on? We're going to discuss what happens if you change your model complexity: in what cases you may underfit, in what cases you may overfit, and what the best response is. Any questions so far? And as a spoiler, in some sense, we're going to decompose the test error, L of theta-- the test error is the test loss-- into two terms.
Actually, I'm not going to show the decomposition mathematically, because I don't think I have enough time to do that. But intuitively, you are going to decompose the test error into two terms. One is called bias-- technically, it's bias squared, because the bias is defined as the square root of this term-- plus variance. So you're going to define these two terms and say that if you take their sum, it is the test error. And these two terms have the property that, as a function of model complexity, the bias is going to be a decreasing function. So we're going to see something like this: the bias decreases with model complexity. I haven't told you yet what the bias is or what the variance is; I'm just giving you a spoiler on what we're going to discuss. So the bias looks something like this, and the variance looks something like this, increasing. You're trying to figure out the underlying mechanism: if you make the model more complex, then your variance will be bigger and your bias will be smaller. And the sum of these two functions, which is the test error, will be something like this, and the best point will be somewhere in the middle. So that's a quick overview of what we're going to discuss. All right. OK. So now, I'm going to define bias and variance in a little more formal way-- still not very formal; it's a gradual process toward a more formal definition of the bias. And I'll show some examples. Any questions so far? Why is the bias [INAUDIBLE]-- why is the bias squared? Oh, this is just a unit thing. How do I say this? It's a definition. Actually, some people call the squared quantity the bias in some of the literature; sometimes people take the square root. It's just about choosing the right unit. And when I say bias, I don't really distinguish whether it's squared or not. OK. So what I'm going to do is have a running example, which is basically like this, and I'm going to try what happens with linear and fifth-degree polynomial fits, and use this as a thought experiment to demonstrate these quantities. So let's start with linear. This is a thought experiment, but actually, we have some real data experiments in the lecture notes; here, I'm just drawing it, but it's pretty much the same. So let me set up really quickly. My running example is basically what I drew above. I'm going to have some training examples, where yi is close to a quadratic function of xi, plus a little bit of noise. The noise is small-- that's why these blue points don't lie exactly on the quadratic; there's a little fluctuation. And this quadratic is sometimes called h star of xi. For the sake of terminology, it's sometimes called the ground truth-- the true function you are trying to find. Of course, you don't know it; you want to try to recover it. And I'm going to do a thought experiment first. I'm going to do a few experiments.
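For reference, the classic square-loss decomposition that the spoiler curve comes from, stated but not derived (here h_S is the model fit on a random training set S, and the comparison is to h*, so the irreducible label-noise term is left out):

```latex
\mathbb{E}_{S}\big[(h_S(x) - h^*(x))^2\big]
  = \underbrace{\big(\mathbb{E}_S[h_S(x)] - h^*(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_S\big[(h_S(x) - \mathbb{E}_S[h_S(x)])^2\big]}_{\text{variance}}
```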
I'm going to start with a linear model, then try a fifth-degree polynomial, and then try a quadratic. So, the linear model. I guess you can probably see what will happen. I'll draw this again: you have these four data points, something like this. What happens with a linear model is, given these four points, the best linear fit is probably something like this for this particular data set. And you can see what's happening here. Let me see-- maybe let me erase this for a moment; I'm going to redraw it. So for linear models, you can see a bunch of properties. You can see that there's a large training loss-- let's call it loss, for consistency. There's a large training loss because, look at your predictions on the training data set: this is your prediction for x1, here; the prediction for x2 is here; for x3, here; for x4, here. And if you look at the distance between the predictions and the true labels, you see that the distance is pretty big. The training error is pretty big. So this is underfitting, by our definition, because the training error is already big. Now let's think about what you should blame. Why is the training error big? What's the culprit? The culprit, I would argue, is just that no linear model can fit your data. No linear model can work. And it's not because you don't have enough data: even if you had more data, a linear model wouldn't work either. This is just because the linear model is not expressive enough. And this is called bias. When this kind of thing happens, you have bias. I don't know exactly why people called it bias the very first time, but you can see the relationship: you are imposing additional structure. You are imposing a linear structure, but the true data is not linear. So it doesn't matter how much data you see-- as long as you insist, I just believe that this thing is linear, you're going to fail, because this is the wrong belief about the relationship between y and x. That's why it's called bias. And it cannot be mitigated by more data, as I said. It also cannot be mitigated by less noise, by less noisy data. Because even if you have more data with less noise, you can imagine what happens. Suppose you see some more data as training data-- in the extreme case, suppose you see everything exactly on this quadratic line without any noise. Still, think about what the best fit is. Say you see all of these blue and green points. What's the best fit? The best fit would probably change a little bit, that's true; it probably wouldn't be exactly this. Maybe it would be something like this, I don't know. You have to trade off: whatever you fit, if you fit this, then you don't fit some of these other examples. There's no option that works-- it's just because the model cannot represent a quadratic function. That's it.
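A quick numerical version of that claim-- bias is not mitigated by more data, even noiseless data; the quadratic ground truth here is a hypothetical choice:

```python
import numpy as np

def h_star(x):
    return 1.0 + 0.5 * x + 2.0 * x ** 2      # hypothetical ground truth

for n in [4, 100, 100_000]:
    x = np.linspace(-1, 1, n)
    y = h_star(x)                            # no noise at all
    a, b = np.polyfit(x, y, deg=1)           # best linear fit
    mse = np.mean((a * x + b - y) ** 2)
    print(n, mse)                            # stays bounded away from 0
```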
So that's the typical situation where you have a large bias. So far I've only talked about some characteristics of having a large bias; mathematically, one way to define the bias-- there's some approximation here, depending on what exactly your model is, but roughly speaking-- is the best error, or loss, you can get even with infinite data. Suppose you have a data set with infinite data following the same kind of process-- all generated from this quadratic plus noise-- then what's the best you can do? That's called the bias. And you can see why it's important for the bias to be small: if the bias is large, even with infinite data, you cannot do anything. And that's the problem with linear models. Any questions? Can bias be something like the distance from the [INAUDIBLE] model or something like that? I think that's pretty much-- for this case, they're pretty much the same. For this case, it's exactly true that the bias is the error of the best linear model: the linear model that is closest to the ground truth. That closeness, that error, is the bias. Because when you generate infinite data with no noise, you basically just generate the ground truth, the whole curve. And by that, you mean-- within the same model? You're not changing the model family, right? You're only using linear models. You cannot-- OK. So bias would be the best [INAUDIBLE] linear model [INAUDIBLE]. Exactly. So in some sense, technically, the bias is a property of the family of models. The family of linear models has a large bias here. Yeah. We are always talking about a model family: either the family of linear models, or the family of fifth-degree polynomials, or the family of quadratics. OK. Cool. So this is the bias. Now let me talk about the variance. I'll come back to the variance for this model, but here, you can say the variance is not very important; only the bias is the culprit. And now, I'm going to show cases where the variance is the culprit to blame. So I'm going to redraw this. You have [INAUDIBLE] four points. Now, I'm going to fit a fifth-degree polynomial. The model is h theta of x equals theta 5 x to the 5th, plus terms down to theta 0. But recall that we can do this with linear regression, because this is still linear in theta. We have a homework question on this. We also talked about how to do this with kernel methods if you care about efficiency, and so on. So we are able to fit this. And in the lecture notes, there are some visualizations of the real models you'd fit; here, I'm just going to draw it.
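In code, that fit is plain least squares on expanded features, since the model is linear in theta. A small sketch on synthetic data; with 6 parameters and 4 points, this picks one of the many interpolating solutions (`lstsq` happens to return the minimum-norm one):

```python
import numpy as np

rng = np.random.default_rng(0)

def h_star(x):
    return 1.0 + 0.5 * x + 2.0 * x ** 2              # hypothetical ground truth

x = rng.uniform(-1, 1, size=4)
y = h_star(x) + 0.1 * rng.normal(size=4)

# Feature map phi(x) = [1, x, x^2, ..., x^5]: the model is phi(x) . theta.
Phi = np.vander(x, N=6, increasing=True)             # shape (4, 6)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # interpolating solution

print(np.max(np.abs(Phi @ theta - y)))               # ~0: training error vanishes

# On fresh draws, the error is typically much larger than the noise floor.
xt = rng.uniform(-1, 1, size=1000)
yt = h_star(xt) + 0.1 * rng.normal(size=1000)
Phit = np.vander(xt, N=6, increasing=True)
print(np.mean((Phit @ theta - yt) ** 2))
```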
So if you fit the fifth-degree polynomial-- a fifth-degree polynomial can go up and down several times. Technically, a fifth-degree polynomial can have four local maxima or minima, four or five, something like that. The higher the degree, the more times you can go up and down. If you have a quadratic, the only thing you can do is this, or maybe this. For a cubic, you can do this. And for a fourth-degree polynomial, you can probably do something like this. The exact details here don't matter. The point is that if you have high-degree polynomials, you can be more flexible. And if you fit such a polynomial to the data, you're probably going to get something pretty flexible, something like this. Actually-- this is not required for this course-- but if you look up a book on the calculus of polynomials, you'll find that if you have four points, there's always a fifth-degree polynomial that passes through all of them. So in some sense, if you don't have enough points, and your degree is high enough, then you can always make the training error 0, literally 0. So in this case, the training error is literally 0. I guess this is expected. And the thing is, this is overfitting. So what's the problem here? Why is it overfitting? Why is the test error not good? In some sense, the intuition is that this kind of model fits the spurious patterns in the small and noisy data. This is because you don't have enough data, and your model tries to explain all of these small perturbations, the small noise. And because it overexplains the small noise, it kind of didn't pay enough attention to the more important stuff. And the reason you can overfit to the small noise is that you are so flexible. Whatever patterns you see in these four points-- as long as you just have four points, whatever crazy patterns you see-- you can always find a degree-5 polynomial to explain them. So whatever patterns you see in four data points, you can explain. And that doesn't sound right: how come your model can explain everything and anything, even something random? So basically, you are overfitting to the spurious patterns instead of the big pattern. The big pattern is this; the spurious patterns are the fluctuations, in some sense. In other words, you are explaining the noise instead of the ground truth. And again, how do you make this intuition more formal? I'm not going to go very, very formal, but one more thing I can say about this intuition is that your model is sensitive, or specific, to the noise. How do I formulate this? One way to make it a little more mathematical is to consider redrawing the samples. You ask: after I redraw the samples, am I going to see the same model again? So you redraw some new samples with different spurious patterns-- they are spurious because they are noise. If your model is specific to the spurious patterns, that means that if you redraw, you're going to learn the new spurious patterns, and you're going to get a different model. And if you are not specific or sensitive to spurious patterns, then even with a new data set, your model shouldn't change much; you should still arrive at roughly the same model. And it turns out that with the degree-5 polynomial, if you redraw the data set, you will find a new model. So what happens is-- suppose you redraw the data set. In the lecture notes, there are real experiments; again, here, I'm just going to draw them.
So suppose, for example, you still have the same ground truth, but you observe some new points-- maybe a point like this, maybe like this; maybe we keep this one. And I'm going to try to make the pattern rather different. Then you're going to get something different-- maybe, I don't know, you find the best degree-5 polynomial and get something like this. OK, actually, these two drawings are still similar, but I can't draw it exactly; empirically, you will see that they are different, just because any small perturbation of the data changes the fit a lot. Or you can do something local: suppose you move this point a little bit lower, then you'll probably change this function a lot-- just because you are very sensitive to the data points. I guess we got the same number of samples. [INAUDIBLE] not using the number of samples. So far, I'm saying you draw the same number of samples, with the same ground truth and distribution; just the randomness is different. You are using different noise. And that's a good question-- that's exactly what I'm going to talk about next. OK, sorry, one moment before that. Just to summarize: you redraw all the examples, and you look for a large variation between the models. So you define the variance to be, in some sense, the variation across models learned on different data sets. For example, you draw five data sets, each with four examples, and you do this experiment, and you get five models learned on five different data sets. If you see a lot of differences between these models, that means you have large variance. And if you don't see a lot of differences, then you don't have large variance. That's the somewhat formal definition; we will have a slightly more formal version later, but this is the idea. So maybe, for example, if you get a new data set-- maybe here, here, here, here-- you're going to learn something very different, maybe something like this. Here, at least, you can see this one is very different from that one, because on the left-hand side here you are going up, and there you are going down. That suggests you have large variance. And now, talking about data: one characteristic of variance is that it can be reduced if you have more data. In some sense, the variance is caused by lack of data, and it can be mitigated if you have more data. So let me continue here-- I should just keep all of these markers in my hand, otherwise I have to walk back and forth. OK. So the variance-- sometimes you can say this is caused, at least partially-- at least one cause is lack of data. And of course, you probably cannot say it's only caused by lack of data, because a different model has a different variance. There are really two reasons: one is that you have a lack of data, and the other is that you have too expressive a model. And these two things are relative to each other. If you have a very expressive model, but your data set is really, really big, then it's probably OK. On the other hand, if you don't have too much data, but you have a very, very simple model, then it's probably still OK.
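A minimal simulation of that definition-- redraw small data sets from the same hypothetical ground truth, refit, and look at the spread of the fitted models at one fixed query point:

```python
import numpy as np

rng = np.random.default_rng(0)

def h_star(x):
    return 1.0 + 0.5 * x + 2.0 * x ** 2      # hypothetical ground truth

def prediction_variance(x0, degree, n, trials=500):
    """Variance of predictions at x0 across models fit on redrawn data sets."""
    preds = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, size=n)
        y = h_star(x) + 0.1 * rng.normal(size=n)
        c = np.polyfit(x, y, deg=degree)
        preds.append(np.polyval(c, x0))
    return np.var(preds)

# Degree 5 on tiny data sets: large spread. More data shrinks it,
# and a simpler model family has a smaller spread (but larger bias).
print(prediction_variance(0.9, degree=5, n=8))
print(prediction_variance(0.9, degree=5, n=500))
print(prediction_variance(0.9, degree=1, n=8))
```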
And as you can see, if those are the reasons, then how do you mitigate the variance? The mitigation is that either you get more data or you use a simpler model. Technically, you may not be able to get more data-- if you had more data, you would already be using it. But for understanding's sake, let's see what happens if you have more data. Suppose you have a lot more data, and you still fit a fifth-degree polynomial. This is the ground truth, and you observe a lot more data-- you have a million data points, roughly, with a little bit of fluctuation, of course. Now you want to fit a fifth-degree polynomial. What happens-- and this is probably not entirely obvious-- one obvious thing is that you probably wouldn't do anything as crazy as this. Because if you do this crazy thing, maybe it goes through some points, but you cannot go through all the points. For example, you can see here there's a big mismatch between this part of the curve and this point, and here you have some mismatch. So this curve wouldn't even give you a small training error; it's not the best model fit on the training data. What you will really fit, if you minimize the error on the training data with this many training examples, is probably something like this-- maybe with some small fluctuations, not necessarily matching the ground truth exactly, but something like this. Because if you don't do this, you wouldn't fit the training data as well. This is much more like a quadratic. And note that the family of degree-5 polynomials contains the family of quadratic functions, because you can just set theta 5, theta 4, and theta 3 to 0, and you get a quadratic. So empirically, what you're going to find is that, if you really look at the details, the best-fit model is a degree-5 polynomial, but theta 5 and theta 4-- the leading coefficients-- are very, very small. So effectively, you are very close to a quadratic. [INAUDIBLE] What is supposed to be the error, because with complicated models, it's harder to train [INAUDIBLE] because there are so many more possible local minima [INAUDIBLE] So the question is: another possible failure mode is that you just couldn't find this good degree-5 polynomial because of some optimization issue-- even though there exists one that fits the data very well, you couldn't find it. That's probably not true for the degree-5 polynomial in this one toy example, just because it is very simple. But it could be possible in other cases: the model exists, but you can't find it. That's something we don't discuss, at least in the scope of this lecture. In this lecture, we are assuming that optimization always works: you always find the best model. If it exists, then you can find it. So in this case, even if you have a lot of data and a very complex model like a degree-5 polynomial or even higher degree, there always exists one model that works-- something like the ground truth-- and we'll find it. For this case, definitely, we will find it, because it's a linear regression problem; you will find the best model. OK. Cool.
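And as a hedged check of that prediction-- with abundant data, the best degree-5 fit to noisy quadratic data drives the high-order coefficients toward 0 (same hypothetical ground truth as above):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(-1, 1, size=1_000_000)
y = 1.0 + 0.5 * x + 2.0 * x ** 2 + 0.1 * rng.normal(size=x.size)

coeffs = np.polyfit(x, y, deg=5)   # order: [theta5, theta4, ..., theta0]
print(np.round(coeffs, 3))          # leading entries ~0; rest near (2, 0.5, 1)
```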
And also, maybe just to answer the earlier question: in some sense, the problem you are referring to is easier to detect, to some extent. It's not always true, but at least you can detect it from the training error. Here, we are more focused on generalization. OK. Cool. So any other questions? What happens to the [INAUDIBLE] You get more data. You're getting more data. Yeah. So here, when I say more data, I really mean that you collect more data from the same distribution. From the same distribution? From the same distribution, yeah. In some sense, the mindset-- I'm not saying this is universally applicable to every situation, but the mindset we are in is the following. Say you have a lot of medical images-- for example, a million patients' images for a cancer-diagnosis kind of task. But not all of the data are labeled: at the beginning, only four images are labeled as cancer or not, and so on. But those four images are samples from this big population. And now I find that my variance is very big-- how do I mitigate that? One thing I can do is sample more data from the same population: I have a million unlabeled examples, I had four labeled ones, and now I collect more labels. I sample another, say, 100 examples from the same distribution, and I label them. Then I run the algorithm, and the variance will be smaller. [INAUDIBLE] Actually, [INAUDIBLE] to ground truths of the data. How do we know the [INAUDIBLE] the ground truth [INAUDIBLE] it is a linear structure. So the question is: if you don't know the ground truth, how do you know that you have a large bias? You cannot really know exactly. When you don't know the ground truth, all of this, so far, is for analysis purposes. When we don't know the ground truth-- let me think-- yeah, when we don't know the ground truth, I think you cannot exactly compute the bias. Because the definition of the bias actually requires you to sample a lot of data, and you don't have infinite data, so there is no way you can evaluate the bias exactly. Typically, what you do is fit the data on the training set and see whether you're underfitting-- underfitting means you have a large training error-- and that's when you start to believe that you have a large bias. For overfitting, the graph that's right behind you: the bias squared [INAUDIBLE] what's the third one? The third one is the sum of them; this is the test error. OK. And the bias is the total of them? The bias is this one. [INAUDIBLE] I'll discuss it. I'll discuss that in a moment, because when I drew this, I didn't even tell you what this was. I'll come back to this. Are we [INAUDIBLE] For highly imbalanced data sets? So maybe let's discuss this offline. I'm not sure whether this-- I think it probably requires more-- imbalanced data sets come up pretty often; we have research on that. But maybe it's not exactly related to the context here. Maybe we can discuss offline. Any other questions? OK. I think I do have something to say about the variance, and then I'll come back to the trade-off. All right. OK. So now, let's briefly summarize. Basically, if you have bias, this is really just about the lack of the model's expressivity.
It's something intrinsic, nothing to do with data. If you have large bias, that means you have a lack of expressivity: the model is not expressive enough. It doesn't depend too much on the data. For linear models, you can just say it doesn't depend on the amount of data; for non-linear models, there is some technicality-- the only reason I hedge is that some technicality prevents me from saying it is exactly irrelevant to the number of data points. But you should basically just believe the intuitive version: it's not a notion about how much data you have; it's really about how expressive your model is. And variance: if you have a large variance, it could be two things. One is lack of data, and the other is that you have too complex a model. I'm just repeating and summarizing. And then we can see this trade-off. So I'll go here. There is also a way to prove that the test error is equal to bias squared plus variance-- I'll see whether I have time to discuss that. But for now, let's just draw this from scratch. This axis is the model complexity. Let's first think about how to draw the bias: how does the bias change on this curve as the model complexity changes? We said that the bias is large when the model is not expressive enough. That means that if your model is more expressive, then your bias should decrease. That's why the bias is a decreasing function of the model complexity. So this is the bias. Now let's think about how to draw the variance. We said the variance arises because you have too complex a model. That means that as your model gets more and more complex, you have bigger and bigger variance. That's why the variance looks like this. And the test error is the sum of them. So the test error is a U-shaped curve. So the question you want to answer is: as you change the model complexity, where is the best test error? It's somewhere in the middle. Actually, I'm going to tell you something different from this in a moment. But suppose you believe in this picture. Then the implication is that you should find a sweet spot when you choose the model complexity. For example, suppose at the beginning you find that your training error is very high, which means your bias is very high-- suppose your model complexity is here, very small. Then the bias is high, and the bias being high means you are underfitting: your training error is big. So when you see that the training error is big, you believe your bias is too high, and that's why you should increase the model complexity. And at some point, you find that you are in the other regime, where the variance is too high, and then you should stop. So basically, you increase the model complexity until your bias and variance have the right trade-off.
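As an aside, the U-curve just drawn can be traced numerically by sweeping model complexity and averaging test error over redrawn training sets; the setup below reuses the hypothetical quadratic ground truth and is a sketch, not the lecture-notes experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def h_star(x):
    return 1.0 + 0.5 * x + 2.0 * x ** 2      # hypothetical ground truth

def avg_test_mse(degree, n_train=10, trials=200):
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, size=n_train)
        y = h_star(x) + 0.3 * rng.normal(size=n_train)
        c = np.polyfit(x, y, deg=degree)
        xt = rng.uniform(-1, 1, size=200)
        yt = h_star(xt) + 0.3 * rng.normal(size=200)
        errs.append(np.mean((np.polyval(c, xt) - yt) ** 2))
    return np.mean(errs)

for d in range(8):
    print(d, round(avg_test_mse(d), 3))   # typically dips near degree 2
```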
So here, this is the model complexity of the model you use to learn your parametrics model. So when you're asking about what happens if the ground truth is different. I think this is not very sensitive to what the ground truth is, right? There's always a trade-off. But where the trade-off comes from, where the sweet spot is, would depend on the ground truth. So for example, actually, that's a very good question. For example, suppose you, for this data set, So probably, the best thing is to use quadratic. Quadratic has small enough bias because quadratic is, in principle, expressive enough to express our data. So that's why quadratic has small bias. And also, quadratic is probably, among all the models with small bias, among all the models that can express your function, quadratic is the least complex. So that's why you use quadratic. That's probably the best solution. And if you really run the algorithm, the quadratic, you would probably recover something very close. But if you're going to, it is cubic, then maybe the sweet spot is like the best trade-off is achieved at cubic, maybe. They don't necessarily have to match each other because it also depends on the data, how many data. For example, suppose you are-- maybe let's give you an example. Suppose to say, your ground truth is a degree 10 polynomial. But it's somewhat look like a linear function. So suppose your ground truth is like almost linear, but with a little of a kind of like small fluctuation. But you don't have a lot of data. You just have like five data points. So you just have five training data points. And now, if you want the bias to be literally 0, then, of course, you should use degree 10 polynomial because that's only case you are expressive enough. But then your variance is too big. So the trade-off here probably is closer to be a linear. Because if you use a linear, your bias is not zero, but still small enough, right? And in that case, the variance is small. So the bias, the trade-off, depends on, for example, how many data you have as well. [INAUDIBLE] We just [INAUDIBLE] for the loss function. So how can [INAUDIBLE]. That's a good question. And the answer to that is that no, you cannot compute the bias and variance. And all of this, all of what we discussed today is more about some internal understanding. So this bias and various is not something you can-- at least, in some case you, can estimate them a little bit. But typically, you probably shouldn't really actively estimate the bias and variance in your-- these are mostly just for-- its internal understanding for our research, for ourselves, but not necessarily something you, empirically, evaluate. So I guess, so one question, I guess, many of you probably are wondering, if all of these quantities cannot be even evaluated, how do you choose the right trade-off? What's the optimal model complexity? So what you do is actually, that's going to be, I think, what we discuss mostly next week, next lecture. So this Wednesday, next week. Yeah. So the variance and bias are just for understanding. Empirically, what you really do is that you try a lot of different models. And you select based on a validation set. But this picture would help you a little bit in some sense. Because for example, suppose you have tried this and this and this and this, suppose you have twice four model complexity. And suppose, you believe that this is a U curve, the test error is a U curve. Then should you try even bigger models, bigger family of models? Probably, you should. 
Because you kind of believe that it will be even worse. So you should just try even more in the middle. So that's what this understanding will help you. OK. So there is some more formal definition of the bias and variance. And that's in the lecture notes in section 8.1. I think I don't have time to discuss the formal definition. Even I give the definition, I probably wouldn't be able to give you the proof. The proof is actually relatively simple. So if you are interested, you can read that section yourself. I don't think it's required for the exam or anything, but it's a relatively simple word if you're interested. And also, just this kind of bias and variance trade-off, it's not that always easy to achieve, mathematically. So for square loss, there is a classic, well-established kind of decomposition. But if you don't have square loss, you don't have MSE, that means squared error, if you have cross entropy loss. Actually, it was an open question. How do you formally decompose this? So all the intuitions do apply. But like how do you do the mathematical decomposition is actually pretty challenging. So that's why in the lecture notes, we only talk about square loss. And anywhere, if you read any textbook or any literature, probably, they will talk about square loss. But the intuition is still kind of fun. So if you don't care about what exactly definition of bias is. So I will spend the next 20 minutes to talk about a new-- something that is actually challenging this picture. So something that-- so this is maybe just follow more context. So this kind of like a U curve test error and bias-variance trade-off. This has been like discovered or kind of like analyzed for I don't know how many years, maybe like 40 years or something like that. I'm not a historian, so I don't know exactly which is the first time this is discovered. But this is like a very classic. However, people realize that there are some issues with this understanding, especially we realize that in deep learning, like you-- actually, people start to realize this in deep, learning but actually, it turns out that even this understanding has an issue for linear models. So this understanding is not complete. It misses some other things. So that's what I'm going to talk about. And this is an area of research productive in the last, probably, three or four years. So let me try to find out where should I erase. So this phenomenon that people observe, empirically, at the beginning, and then analyzed theoretically, this phenomenon is called double descent. If you are a historian, then I think actually this phenomenon actually dates back to something like 1990. Some papers, actually, at that time, also point out this issue. But I think it just becomes popularized and more relevant these days. And what does this mean is that, so basically, I've told you that this is test error. This is model complexity. I guess, technically, here, I'm writing the number of parameters because I want to be precise. Like I'm measuring the model complexity by how many parameters you have. And the classical belief, as we discussed, is that this test error should have this U curve. Something like this. But then, people realized that this is a striking thing. So people realize that if you increase your model number of parameters even more, at some point, you will see that it will be like this. So basically, this is the new regime that people got. This is the second descent of the test error. 
That's why it's called double descent: there is a descent here, and there's a descent here. Everything in the blue part is what people didn't realize as much until the last four or five years. And this is the so-called overparameterized regime, which means that in this regime, typically, the number of parameters is larger than the number of data points. In some sense, this is the regime that, if you asked someone 20 years ago, they would say is just a no-go zone, because you should see very, very bad test error. But it turns out that if you make it even more extreme-- you make the number of parameters bigger than the number of data points-- you may actually, not in all cases, but in some cases-- I'm not sure how to quantify this, but at least in a lot of cases-- you will see a second descent. So that's the striking thing. Is this because we are [INAUDIBLE] the one with much more data? Not directly, I'd say. Because at least on the surface, if you look at this, this regime is the regime where the number of parameters is bigger than the number of data points. So if you want to find the right cause-- I'm not saying-- you probably will say, at least, to be in this regime you need a lot of compute. Because 10 or 20 years ago, you couldn't even afford to run experiments in this regime; you wouldn't use that many parameters because you didn't have enough compute. But of course, nowadays, we also have more data points. And because we are using networks and running larger and larger experiments, it is indeed correlated with more data points. We do see more data points these days. And this is the so-called double descent phenomenon. And it's kind of mysterious. It's a bit less mysterious these days, after people have studied this very carefully in the last five years. I'll talk about some of the explanations and intuitions. But before that, let me also give another related phenomenon, which is also called double descent, but it's called data-wise double descent. So here, I'm showing a similar graph, but on the x-axis, I'm going to change the number of data points. So the y-axis is still the test error, and the x-axis is the number of data points. OK. Maybe you have a guess first. What should this curve look like? As you have more data points, how does the test error change? Right. The guess would be that the test error would be decreasing. Because at least if you believe in this bias and variance intuition, the bias doesn't seem to depend much on the data, and the variance will be smaller and smaller as you have more and more data. So if you believe in that, then you should say, OK, the test error should look like this, and it should continue to decrease as you have more and more data. And it turns out that, actually, in many cases, what happens is that the test error will look like this: it will increase at some point, and then it will decrease again. And this peak here is kind of similar to the peak here. So this peak often happens when n is roughly equal to d. By the way, this is an active research area, so I'm not being very precise in every place. So this is the number of examples, and this is the number of parameters.
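Here is a small sketch of the data-wise version just described-- an editor's illustration with made-up numbers, not the experiment from any particular paper: minimum-norm least squares (via the pseudoinverse) on a noisy linear ground truth, sweeping n past d.

```python
# Data-wise double descent sketch: test error of the min-norm least-squares
# fit typically spikes near n = d and then falls again as n keeps growing.
import numpy as np

rng = np.random.default_rng(0)
d = 50
theta_star = rng.normal(size=d) / np.sqrt(d)      # assumed ground truth

def test_error(n, n_test=2000, noise=0.5):
    X = rng.normal(size=(n, d))
    y = X @ theta_star + noise * rng.normal(size=n)
    theta = np.linalg.pinv(X) @ y                 # minimum-norm least squares
    Xt = rng.normal(size=(n_test, d))
    yt = Xt @ theta_star + noise * rng.normal(size=n_test)
    return np.mean((Xt @ theta - yt) ** 2)

for n in [10, 25, 45, 50, 55, 100, 400]:
    errs = [test_error(n) for _ in range(20)]     # average over 20 dataset draws
    print(f"n = {n:4d}   test MSE ~ {np.mean(errs):.2f}")   # expect a spike near n = d
```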
So what I set here, I think, is basically mostly kind of 100% correct for linear models. But for nonlinear models, whether this is exactly is equal to d or not is in 2d or the relationship is less clear. But let's suppose, when you think about relatively simple models, then when n, the number of data points, is closer to the number of parameters, then in this case, you're going to see a peak. And then after that, you have more data. It actually helps. I saw some questions. So the original double descent, does that like continue to decrease or does it eventually increase again? So in the first. Yeah. So in the first figure. This is a good question. So I think I've seen, empirically, on both cases. So sometimes, it does increase again a little bit, but often, not much. And sometimes, it just keeps decreasing. And sometimes, it plateaus. So I think that's why people probably don't study that part that much. Can you start running again. So this is, again, more than this function. This was like this has been for a while. For this one? Yeah. This phenomenon. This one, I think, is also-- actually, the paper that first systematically discussed this is like 2020. About that peak, when was that discovered? The peak? Yeah. This is discovered in the same paper, the peak. It's not monotone. The fact that there exists a peak was also discovered right, essentially. Yeah. I think, at least, they might like [INAUDIBLE] learning happens so often that someone does something, and then the community forgot about it. That's possible. But at least I would say, at least, it's only until 2020 that most people start to realize this. And because of that paper. And what's in the [INAUDIBLE]? I think the paper just called model wise double descent or something like that. I'm sorry. Data wise double descent. Because this is data wise because you are changing the number of data points. OK. OK. This sounds like a mysterious enough. So like a very, very interesting. And what's the explanation? In the last few years, people try to explain what happens, and try to reconcile with our old understanding about this. And also, this is an important question because this regime, this blue regime, is actually-- actually, it's not clear whether when you run like a classical linear, models I don't think necessarily, you are in this regime. But at least, it's pretty clear that it's more true that for deep learning, you are basically always in this regime. I guess this is still-- nothing is never universally true. But I think for most of the vision experiments, you are in this regime, where you have more parameters than a data point. So this is something that is really like empirically irrelevant. So that's why people really care about it. And maybe another thing I need to clarify is that I think I probably mentioned that the study about linear models, the phenomenon of linear models is more kind of clear, like there are a lot of studies. And we have pretty good conclusion. And what I mean by that is that even within linear models, you can try to change the model complexity. So what that means is that you just insist that you always use linear model. But what you change is that you try to decide how many features you use. So you can start with only using one feature or two features like for example, in the house price, where you can use the square foot as the single feature, or you can collect a bunch of other features. So keep adding more and more features. That means you have more and more parameters. 
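And a companion sketch for the model-wise version, with the same editor's caveats: the "complexity" knob is literally how many of the available features the linear model is allowed to use.

```python
# Model-wise double descent sketch: grow the number of features d used by a
# min-norm linear fit, with the number of training points n held fixed.
import numpy as np

rng = np.random.default_rng(1)
n, D = 40, 400                                    # n data points, D total features
theta_star = rng.normal(size=D) / np.sqrt(D)      # assumed ground truth
X_full = rng.normal(size=(n, D))
y = X_full @ theta_star + 0.5 * rng.normal(size=n)
X_test = rng.normal(size=(2000, D))
y_test = X_test @ theta_star + 0.5 * rng.normal(size=2000)

for d in [5, 20, 38, 40, 45, 80, 200, 400]:       # number of features actually used
    theta = np.linalg.pinv(X_full[:, :d]) @ y     # min-norm fit on the first d features
    mse = np.mean((X_test[:, :d] @ theta - y_test) ** 2)
    print(f"d = {d:3d}   ||theta|| = {np.linalg.norm(theta):7.2f}   test MSE ~ {mse:.2f}")
```

Note that the printed norm of theta also blows up near d = n = 40, which is exactly the observation the lecture turns to next.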
So even within linear models, you can still change the complexity, just to clarify that. And most of this theoretical study, I think, are for linear models. And they are pretty precise these days. And I'm going to try to kind of roughly summarize the intuition from the study of this double descent. So the intuition, I think, I'm going to list a few of them. So some intuition and explanations. And these explanations are mostly for linear models. So I think the first thing to realize is that this peak, so you can argue what is the most exciting or surprising thing about this graph. But let's first talk about a peak, this peak in the middle. So I think the first thing is that in some sense, people realize that the existing algorithms, especially if you just talk about, for example, simple gradient descent or stochastic gradient descent for linear models. So the existing algorithms underperform dramatically when it's close to d. So both these two peaks are basically like this. So here, you are changing n, the number of data points. And you found that when n is close to d, you had to pick. And here, you are changing the number of parameters, you are changing d, the number of features you use. And we realized when d is kind of above n, above the number of data points, you had a peak. So both of these two peaks are showing up here. It's just that you are changing the axis in some sense. So this is also when n is close to d, when the number of data points is close to number of features. And the explanation is such is the algorithms, the existing algorithms, or the algorithms you are visualizing here. So when you visualize this, where you do rank-- you do use some algorithms to learn the parameters. So that particular algorithm that you use to produce this graph, it really underperforms very dramatically. It's not really saying that when n is close to d, the real test error should be this. It's just saying that this algorithm is bad. If you change your algorithm, you probably wouldn't see this peak. So that's why the peak shows up. OK. And what's wrong with the-- the existing algorithm, I really, just mean that for example, some just basic gradient descent. So for linear models, maybe this is-- I say, this is for linear models. So what goes wrong with the so-called existing algorithm? So this basically gradient descent algorithms. What goes wrong is that a norm of the theta, the linear models you learned, is very big. It's very big when n is roughly equals d. And we kind of believe that this is, at least, a partial reason for why this leads to a peak. So this gives the peak. So even though-- so OK. So I guess, let me draw something here. We have some real experimental, real data, in the lecture notes. But if you draw that norm-- so suppose you change the number of parameters, which means you add more and more features in your data set, so that you have more and more parameters. And if you visualize the norm in the y-axis, you're going to see something like this. And this peak here is roughly corresponds to n is close to d, which is kind of similar to these peaks. So basically, even though, suppose you compare this experiment and this experiment, so here, you have more parameters than this here. But when you have more parameters, maybe sometimes, you have lower smaller norm. So the norm when n is close to d, for some reason, it is very, very big. Actually, we know the reasons. The reason is that some random matrix is not well behaved when n is close to d. 
But I guess we are not going to go into that. At least, the immediate reason is that when n is close to d, somehow, this algorithm is producing a very large norm on the classifier theta. And you can argue that if the norm is too big, then your model is too complex. So in some sense, this is saying that your model is actually very complex-- very complex with respect to the norm. So if you compare this model and this model, this model seems to have fewer parameters than that one; that's by definition. But its norm is actually very big. So in some sense, if you use the norm as the complexity, these peaks actually have large complexity. [INAUDIBLE] Exactly. [INAUDIBLE] That's a great question. So you got it-- I'm implying that the norm seems to be a better metric for the complexity. So what is the right measure for complexity? That's a very difficult question. For different situations, you have different answers; there is no universal answer. But norm could be one complexity measure. In some sense, the norm is also a way to describe-- suppose you have a small norm ball. Then you have fewer choices to fit your data, in some sense-- fewer degrees of freedom, fewer options to fit your data. So that restricts the complexity. And which norm? For different situations, you can argue which norm is the right complexity measure; there's probably no universal answer. But I guess what I'm trying to say here is that the number of parameters is also not necessarily the right complexity measure. Because even if you have more parameters, suppose all the parameters are very, very close to 0-- that's probably also a very simple model, because those parameters are not really working. But if you have just a few parameters and the norm is really, really big, maybe you should also call that very complex. So my short answer is that there is no universal answer to this. The point is that the number of parameters is probably not the only complexity measure. And for linear models, it just happens that, for mathematical reasons, the l2 norm behaves really nicely. It seems to relate to a lot of fundamental properties-- maybe you can argue the l2 norm is useful because you are measuring a squared error in many cases, and it plays nicely with the linear algebra, so on and so forth. OK. So I guess, let me-- I'm running a little bit late, but I think I'm almost done here. So here, it's just saying that at least for this case, the norm seems to be a slightly better complexity measure. And you can test this hypothesis, in some sense. So you can say, OK, I'm claiming here that the existing algorithm underperforms. But if you have a new algorithm that's regularized-- suppose you regularize the norm. I guess I haven't told you exactly what regularization means, but here, what I mean is that you try to find a model such that the norm is small. So you add an additional term that tries to make the norm small. So you don't only train on the training loss, but you also try to make the norm smaller. Then you're going to see something like this. So regularization would mitigate this to some extent-- there's a tiny sketch of this below. I'll discuss more about regularization in the next lecture. But here, it really just means that you don't only care about the training loss; you also try to find a model with small norms.
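The promised sketch, as an editor's illustration with a made-up regularization weight `lam`: ridge regression penalizes the squared norm alongside the training loss, and the resulting estimator keeps the norm small even at n = d.

```python
# Ridge regression: trade a little training loss for a small norm.
import numpy as np

def ridge(X, y, lam):
    d = X.shape[1]
    # theta = (X^T X + lam I)^{-1} X^T y  -- minimizes ||X theta - y||^2 + lam ||theta||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(2)
n = d = 50                                         # the bad spot, n equal to d
theta_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.5 * rng.normal(size=n)
print(np.linalg.norm(np.linalg.pinv(X) @ y))       # unregularized: typically huge here
print(np.linalg.norm(ridge(X, y, lam=1.0)))        # regularized: stays small
```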
And you have some kind of balance between them. So you can sacrifice a little bit of training error, but you insist that your norm is small, then you can see this rise. So that, in some sense, explains, partially, why you had the peak. Because the peak is caused because the algorithm was suboptimal. Your algorithm didn't use the right complexity measure. And you can fix that peak by adding norm. But there's one more question, which is there is no peak, but why there's no ascent? So suppose you just see this. Actually here, you will also see this, something like this. So this figure is actually pretty reasonable. Because if your data point is increasing, you probably should just have one decrease, like you just keep decreasing. You just keep decreasing the test error. So this one, let's say, we are OK with it. We are happy if you see just a single decrease. But here, suppose you see a single descent, I feel like it's kind of arguable whether you're should be happy with this answer because why, when the number of parameter is so huge, you can still generalize? So why, when you use, for example, a million parameters, and you just have five examples, why you can still generalize? Why you don't have ascent, eventually? In many cases, you don't have ascent. And in many cases, the best one is just you have more and more parameters. And actually, for example, another question is when number of parameter is bigger than the number of data points, sometimes, you are thinking this is the-- you have too many degree of freedom to fit all the specifics of the data set. You shouldn't generalize. But actually, empirically, you do work pretty well. So that's the last, and sometimes, another missing point, missing part. And this part, we also have some explanation for that. And the explanation is that-- so n is much, much bigger than d-- sorry. d is these much-- the number of parameters is much bigger than the d. Sorry, much bigger than the n, the number of data points. So the thing is that even though it sounds like you are supposed to overfit, but actually, the norm is small. But why the norm is small? Why, when you have so many parameters, you still learn very simple model? The reason is that somehow, there is some implicit regularization effect, which makes the norm small. So when I applied this method, all of these experiments, they need to have any regularization. I didn't have any explicit encouragement to make the norm small. So that's why the norm here is very big. But the norm here is small? The reason is that your optimization algorithm has some implicit encouragement to make the norm small, which is not used, which is not explicitly written in the loss function. And that's something I'm going to discuss, I think, more next time. So for this lecture, I think I'm just-- so we're going to discuss this more next time. So the high level thing is just that something else is driving the norm to be small. Thanks. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_GMM_EM_I_2022_I_Lecture_13.txt | Hello, so welcome to our lecture on the EM algorithm. So just to figure out where we are in the flow, because we kind of have this flow of looking through a bunch of these unsupervised algorithms. We've kind of got our hands dirty with k-means and GMM. And these were our first two unsupervised algorithms. And what we're going to try and do is kind of generalize what happened there so that we can use it in many different settings and move on from there. So last time we saw these two algorithms k-means GMM, if you don't remember, the GMM algorithm was the one that we had these photons that we were trying to fit Gaussians to, maybe three Gaussians that look like that. Don't worry about if you don't remember the details, just roughly what we're dealing with. And the big idea we encountered was this idea of a latent variable. And the latent variable in this setting, if you remember, was this fraction of points that come from a source. So we didn't know how many points were coming from each one of those light sources that were out there. We had to estimate that. Once we estimated that, then we would be able to go back and fit all the different parameters that are there. Awesome. So-- and the fraction of points, we also had to figure out the linkage, the probability that every source was coming from a point. And then we could do the estimation. And the main thing that I wanted you to take from both K-means and GMM was this main idea that we kind of guess the latent variable. This is a great way to put it and how we have in the notes. You guess where they're probabilistically linked, that is, what's the probability of these points belong to cluster one, this point belongs to cluster two, so on? And then once you have that, you then solve some estimation problem that looks like a traditional supervised learning. So the decomposition is quite important. And we're going to try and kind of abstract that away. And then we would estimate the other parameters. That's what I mean by a kind of traditional supervised thing. Now, today what we're going to do is we're going to take a tour of EM in latent variable models and try and cast them on a little bit more principled footing because last time, the calculations were, yeah, it kind of makes sense that that's the average, the weights in a cluster. We'll try to derive the action that we're doing there from a more principled framework, which is MLE. It doesn't mean that it's right. Maximal likelihood is just a framework. It happens to be, though, the framework that we use throughout most of the class. There are others in machine learning by the way. But this is the one we're going to use. So before I get started on the rundown, any questions there? I'll start to write. Oh, please. Yeah, for things we start by casting the views, what is the first step in GMM? What do we guess? We could guess randomly an assignment of every point to the cluster, the probability. Remember, there was this z(i)j. If you don't recall, we'll bring that up later. Don't worry about now. But we had to guess, for every point x, i, which of the k clusters it belonged to and with what probability. We could, for example, initialize that to uniform. We don't know anything, and that's something. We may have some other heuristic guess. That was what was going on in the k-means++. We have a smarter initialization. But that's how we get the process started. 
Once the process is started, we just keep running those two loops again and again, and hopefully, it will improve. And we'll capture in what sentence it improves. You'll see this weird picture of a curve that we go up, and that's going to be the loss function. Awesome. OK, so we're going to look at the EM for latent variable algorithms, and this is where it applies. This is what it's for is dealing with various different notions of latent variables. And I'll say this right now-- may be a little bit cryptic, and I'll come back to it either at the end of today's lecture and the next lecture-- when we pick these latent variables, there's a little bit of an art. What they're doing, basically, is they have that decoupling property. If we knew this thing that we couldn't have observed, then, all of a sudden, it becomes a really standard statistical estimation problem. And somehow, we are assuming structure, and that's what we're putting into the latent variable. So we're to see walking through this today that structure-- we're assuming there's this probabilistic map out there that says how often, how likely every point is to go to every cluster. In other cases, we'll see more sophisticated variants of this idea, but it's actually fairly profound. That's the real key idea. We can abstract all the algorithmic details into EM same way we did for the exponential family stuff, OK? Now, before we get started, I want to take a technical detour. And so it's really important that we have signposting here because you'll say, why is this guy drawing these weird pictures? The technical detail is I want to make sure that you understand this key result, which is convexity and Jensen's inequality. And the reason is-- I'll refer to this thing as we go through. We're going to use it. It's not like I'm just teaching you something for your health. Like, this is actually going to be used in the next step. It will actually, in some sense, be the entire algorithm. Like, if you understand this in the simplest way, then understanding the algorithm will make a lot of sense. So it'll become a clear point where we apply Jensen's inequality, where we make it tight. Those are the things that we're going to think about as we go through it, OK? So we're going to do this technical detail, right? I'm going to try and show it to you in pictures because I think it's the most intuitive way to understand the basic cases. If you already know it, don't worry. It's just another proof that you'll see. Then this will allow us to go to doing the EM algorithm as MLE. And what I mean is we're going to be able to write down a formal loss function, a likelihood function, right? That's what MLE is. We write down this loss function. Then we maximize the likelihood. And we're going to show that this actual algorithm is actually under the covers maximizing a likelihood function, all right? Then I'm going to come back, and I'm going to put GMM into this framework. And this will answer some of the questions that we kind of intuitively, kind of heuristically answered. Like, why are we estimating those parameters in such a way? And that will allow us to say, yep, GMM is an EM algorithm, and so it would give us a principle to solve for all the weights. If you remember, there was those cluster centers, the mus and the sigmas, the source centers, mus and sigmas and fractions, and we were going to solve for all of those. And this gives us a principle way to do it because we're in this MLE framework, OK? 
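Before the derivation starts, here is the shape of that two-loop idea in code-- an editor's sketch of a 1-D, two-component GMM with unit variances, not the staff implementation; the initialization is deliberately crude.

```python
# Guess-the-linkage / re-estimate alternation for a toy 1-D, 2-component GMM.
import numpy as np

def em_gmm_1d(x, n_iters=50):
    mu = np.array([x.min(), x.max()])             # crude initial guess for the means
    pi = np.array([0.5, 0.5])                     # initial mixing fractions
    for _ in range(n_iters):
        # "E-step": probabilistic linkage of each point to each source
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        w = dens / dens.sum(axis=1, keepdims=True)
        # "M-step": weighted averages, like a standard supervised estimation
        pi = w.mean(axis=0)
        mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return pi, mu

x = np.concatenate([np.random.default_rng(0).normal(-2, 1, 300),
                    np.random.default_rng(1).normal(3, 1, 200)])
print(em_gmm_1d(x))
```

The two blocks inside the loop are exactly the "guess the linkage, then do a supervised-looking estimate" alternation; the rest of the lecture derives why this climbs a likelihood.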
So we'll exercise it, basically exercise the notation. And then we almost certainly will not have time for this today, but I combine the notes, and we'll go continue to go through them on Wednesday. We'll go through what we call factor analysis, OK? And factor analysis is another model. The reason I want to show it to you is it's different than GMMs, so it occupies a different space, and it will kind of force you to look at the kind of decisions you're making, right? What are you modeling here? And in particular, we'll model a situation where traditional Gaussians couldn't fit the bill because we're modeling something that's huge and high dimensional. And we have to assume some structure to be able to get the whole program to work. And by comparing these two and what's similar to them, hopefully, you get a pretty good sense of what EM is and all the different places that it runs. All right, OK, so far, so good? So if there are no questions, we're going to go right into our technical detour, which will lead, then, into the EM algorithm as MLE. All right, so here's our detour. This is convexity and Jensen. So this is a classical inequality. And what I want to show you is that Jensen's inequality really is like convexity in another guise, OK? And it's a key result, so I want to go slowly. That's the only reason we're doing this, OK? So don't think that there's something super mysterious going on. There isn't. Hopefully, if I do my job well, you'll just look at the pictures and be like, oh, yeah, that makes sense. Here's the line. Let's see what's going on. OK, so first, we're going to define a set is convex. We did this last time, but just recall a set is convex if for any a and b element of omega, the line between them is in omega. So I'll write this in math so it's precise. Oops, is in omega, OK? So what does that mean? So let's draw the picture first, and we'll draw the math. Here's the convex set. So it means no matter how I pick a-- here's a-- and no matter how I pick b, the straight line between them, the geodesic between them, the straight line, is in a set. This is convex. OK? In contrast, just to see it's not a trivial definition, this thing here-- which I drew very crappily, but that's OK. I'll draw it like this because, actually, you'll see why the bottom makes more sense to me in a second. Here, we have one example. So if I picked a here and I picked b here, yep, the line is in the set, but that doesn't prove it's convex. It has to be convex for all of those choices. And here, if I put it for b, lo and behold, this would not be convex, OK? So let me write the math while I have the picture. So this is in symbols. For all lambda, element of 0, 1-- so this is how I parameterize going on the line-- a, b element of omega lambda a plus 1 minus lambda-- oops, accidental stroke-- lambda b is an element of omega, OK? Clear enough what that means? This is just the line between a and b, right? I'm just saying that no matter how I pick a and b and lambda, this thing still remains in the set, which is just capturing this picture. Cool? All right, now, we're going to apply this to functions. So given a function-- for right now, we make it one dimensional, Gf-- we're going to find the graph of that function as a set of x, y such that y is greater than f of x, OK? This is my definition. So a function is going to be convex if its graph is, OK? As I said. OK, so let's draw an example of this. So here's my function. Here's 0. Here's minus 1. Here's 1, and I draw this character, OK? So this, OK? 
So the shaded region here-- so this function, by the way, the one that's in my head, is going to be f is equal to x squared. So if you're trying to correct for my artistic shortcomings, this is f equals x squared. It's a parabola, kind of a bowl-shaped function, right? Now, no matter how I pick the points-- and clearly, I should really only have to worry about picking on the edge. So if I pick a point here, a, and I pick a point, f of a, pick a point b and pick a point, f of b, the line between them goes here, all right? It's not necessarily a straight line across. I just happened to pick it that way-- go up it, go down, do whatever it wants, OK? Now this function here, we'll imagine-- we'll talk about a point z later. So I'll just say there's a point z that's going to live in the middle, and this is b lives here. Let me erase 0 and 1 because we don't really need them. Their values are kind of unimportant to us. It was just so you knew what I was drawing. We'll draw a. All right, awesome. All right, now, what is this function? Well, this is a-- I think it's x minus looks like this. And then its graph is everything up here. And this is not convex for the same reason. I could pick a point here-- I pick a here. I pick b here, and the line between them is below the set. So this is a convex function. This is not convex. OK, so let's look at this in symbols. So clear enough? Hopefully like trivia. Like, oh, you just drew two pictures twice, and one, you said was a function graph, and the other one, you didn't. The function graph was open to the top, but that shouldn't be really disturbing. So far, so good. All right, so what does this mean? Means for all lambda element of 0, 1, lambda a, f of a plus 1 minus lambda-- these are as tuples-- b, f of b is an element of omega. What does it mean to be an element of omega? It means that if I take any z that's on the path, lambda a, to 1 minus lambda b-- so any character that comes in between here and here, then it had better be the case that lambda f of a plus 1 minus lambda f of b is greater than f of z, all right? So this is now z, f of z. This is z. This is f of z. Does that make sense? Just translating the definitions directly. In more cryptic language, we usually just tell you every chord is below the function-- or, sorry, is above the function. Sorry, I'm drawing the wrong way-- above the function. What does that mean? Well, a chord is just anything that connects two points here. So this character would be another chord. It lies entirely above the graph of the function, where the function actually lives. Here, that's not case. I just found two points so that the chord between them is actually below the function. So it's not convex. And intuitively, the reason I drew these shapes is that convexity for shapes probably makes more intuitive sense, 2D shapes. But now, hopefully, you see they're really the same thing. So your geometric intuition and the function intuition are the same, modulo that we change this definition here. OK? All right, now, let me see. All right, so one other bit here. So we're going to-- we'll actually prove this, I think, why not? Sounds like a fun thing to prove. If f is twice differentiable and, for all x, f double prime of x is greater than 0, then f is convex, OK? So this says these functions really are bowl-shaped, right? Second derivative being positive means that they have this kind of positive curvature that looks like the U's, right? 
Their first derivative goes up and down in value, but it's always trending upward-- that first derivative is always getting more positive, right? It's negative on the left-hand side, positive on the right-hand side. That's what I mean by bowl-shaped. OK, so this isn't super hard to prove, but I'll do it, partly because it will stall for a little bit in case you want to ask me questions. I'm writing out a Taylor series for this: f of a equals f of z, plus f prime of z times a minus z, plus one half f double prime at some point xi, times a minus z squared. And this xi is just some point in the interval. So maybe you remember this from your Taylor series. All I'm saying is I can write f of a as f of z at some point z, plus some first derivative information, plus some second derivative information. And I'm using the second-derivative remainder form, so I'm saying there's some point on the interval where this is true with equality, OK? Same thing for b. This isn't super important for your conceptual understanding, by the way. This is just to show that you can do what you want to do here, that this makes sense to you. OK. Now, I claim it's convex. So I just do the obvious thing: I multiply the first equation by lambda, the second by 1 minus lambda, and I have to make a statement about this, right? That's what's in my definition above, OK? Well, the f of z terms just add up to f of z-- lambda plus 1 minus lambda is 1. So that's good; that appears. Now notice the first-order terms: I get lambda times a minus z, plus 1 minus lambda times b minus z-- well, that's just equal to 0 exactly, right? This is just because lambda a plus 1 minus lambda b equals z. And the remaining second-order terms are all nonnegative-- plus some constant that's greater than or equal to 0. So that shows that this inequality holds: lambda f of a plus 1 minus lambda f of b is at least f of z. A cleaned-up version of this board argument is written out just below. Please. Oh, so you've seen the double [INAUDIBLE], is that [INAUDIBLE]? Yeah, this is Taylor's theorem. Great question. So what's going on here, if you remember Taylor's theorem, is you can keep expanding, and then you have the last term, which is the remainder term. And the remainder term says there exists some point that lives in a to b such that this holds with equality. I'm just using the remainder form of Taylor's theorem. By the way, this is really not important for your conceptual understanding. I just want to show that it's one line-- that this statement, which is sometimes mysterious and causes people's heads to explode-- like, why are the derivatives connected to convexity? It's because of this. This is all that's going on. Awesome. You can freely forget this and just use the fact in the course, OK? OK, stalling done. Any more questions? Awesome. The real reason we want to go through the derivative thing is, otherwise, this next thing, which we actually do care about-- strong convexity-- feels like a definition that comes from space aliens: f is strongly convex on a domain if f double prime of x is strictly greater than 0. This is a strict inequality here, OK? That's where the strong convexity comes from-- though what I've written is, strictly speaking, strict convexity. Doesn't really matter, but OK. So for example, f of x equals x squared, which I told you was in my head. Well, this gives me a simple test, right? Its second derivative is 2. That's greater than 0. It is the prototypical strongly convex function, OK? You'll sometimes see this stated with a curvature parameter; doesn't really matter, OK?
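Here is the cleaned-up version of that board argument, reconstructed from the narration (the xi's are the Taylor remainder points; the notation is the editor's):

```latex
% Claim: f''(x) \ge 0 for all x implies f is convex.
% Taylor's theorem with the Lagrange (second-derivative) remainder gives,
% for z = \lambda a + (1-\lambda) b and some \xi_a, \xi_b in the interval:
f(a) = f(z) + f'(z)(a - z) + \tfrac{1}{2} f''(\xi_a)(a - z)^2
\qquad
f(b) = f(z) + f'(z)(b - z) + \tfrac{1}{2} f''(\xi_b)(b - z)^2
% Multiply the first line by \lambda, the second by 1 - \lambda, and add.
% The first-order terms vanish because \lambda(a - z) + (1-\lambda)(b - z) = 0:
\lambda f(a) + (1-\lambda) f(b)
  = f(z) + \underbrace{\tfrac{\lambda}{2} f''(\xi_a)(a - z)^2
        + \tfrac{1-\lambda}{2} f''(\xi_b)(b - z)^2}_{\ge\, 0}
  \ \ge\ f(z).
```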
The other function that I had, which you can check and graph yourself, is x squared x minus 1 squared. This is not convex. Compute the derivative. You'll see. But it's the one that looks like the two bumps, right? It's a quartix. So that's what it looks like, has two bumps, positive discriminant, OK? Awesome. So at this point, what do I care that you know? Not too much, honestly, about this. What I care that you know is that there's some way that you are familiar with geometrically what convexity means. And you know that there are these tests in terms of the derivative. The second derivative being nonnegative is a good test for convexity. And if you have a stronger condition, you can get this strong or strict convexity, OK? All good. All right, now, what we actually need-- Jensen's inequality. Now, if I've done my job well, this mysterious-looking statement, once I show you the connection, you go, oh, OK, that makes sense. It's because it's actually just saying something about convexity, but it's got a fancy name, and it's so useful, and it's the following statement-- the expected value of f of x is greater than f of the expected value of x so long as f is convex, OK? Why the heck would this happen? Let's take one example. Suppose x takes value a with prob of lambda. Then x takes value b with prob 1 minus lambda. Then what is it saying? It's saying the expected value of f of x is equal to lambda times f of a plus 1 minus lambda f of b. What is f of the expected value of x? Well, it's f of lambda a plus 1 minus lambda b. That's exactly the definition of convex, the inequality. This is convexity, OK? Now, the other thing I want to say for this is, notice that this does not matter how I pick lambda. Later, I'm going to define a curve, one way to define a curve. And that curve is going to be as a result of sweeping some parameters in a high-dimensional weird space. But basically, it says, no matter how I pick the parameters of that curve, anywhere that lives on this thing, that's a probability distribution, a bunch of numbers that sum to 1 in the discrete case. This inequality holds. And that's going to allow me to build a lower bound for my function, and I'm going to hillclimb using it. We'll see that in just a minute. That will become clear. Are there any questions about this piece here? All right. Now, you may look and say, OK, well, this is only in the case when there are two probabilities. What happens when there are more? You can just repeat by induction. You have to do something fancier if you want something that's a full probability distribution. This holds even if E is a continuous distribution. I won't show you that because we're not going to go too far off. We'll stop at kind of high school calculus. Sound good? All right. All right, so now you know Jensen's theorem, and hopefully, you'll always get the inequality the right way. And the reason you'll always get the inequality the right way is you'll draw the picture of the function and see the chord is always above it. Which one must be z? Which one must be f of z? f of z must be below the chord of the function, and that's exactly this. Cool. All right. Now, everything is defined in the literature traditionally for convex. If you take convex analysis, it's the way we define things. We actually don't want to use a convex function here because we're maximizing likelihood. And this is just notational pain, right? Like, if we were-- maybe we should have minimized unlikelihood. 
I don't know what we should have done, but this is where we are. So we need concave functions. And what are concave functions? g is concave if and only if minus g is convex. All right? So we flip it upside down, OK? The prototypical one that we'll use is g of x equal to log of x-- here's my picture of log of x, probably not very good-- it's going to look something like this, go off this way. And notice that if I take a chord of this function-- that's a chord-- the chord is below, right? Which is what I should hope, right? If I flipped it upside down, the chord would be above. Cool. Now, also, there are functions that are both concave and convex, right? So what if h of x is equal to a times x plus b? It's a line. Chords are no longer strictly above or below-- they lie right on the function. It's actually concave and convex. Linear functions are concave and convex. OK, that ends the detour. Let's get back to machine learning. So now we have the tools. Just to make sure, what do I care that you got there? You got that, as long as we were dealing with probability distributions, no matter which probability distribution we took, we have this inequality. We can get lower bounds. We're going to use that in a second to draw some curves of a likelihood function that will hopefully be easier to optimize than the original function. And we'll try an iterative algorithm that will look exactly like what we talked about before. And the way we'll conceptualize it is: we solve for some hidden parameter; that gives us an entire family of possible solutions; we solve on that, and we iterate. Let me draw the picture after I give you the formal setup, OK? Oops. All right, so: EM algorithm as max likelihood-- I'll actually put MLE. All right, so remember, this is the max likelihood formulation. There's some theta that lives out there. We have some data, i from 1 to n; these are our data points. We take the log of the probability that we assign to the data given our parameters-- l of theta is the sum over i of log P of x i, semicolon theta. So this is a way for us to compare different parameters. And recall, theta is the parameters, params, and x is the data. So far, so good? All right, now, we're working with latent variable models. So latent variable models mean that P has a little bit of extra structure. P of x semicolon theta-- this is a generic term, right? This is just one of the i terms-- factors this way: it looks like a sum over z, where z is our hidden or latent variable, of P of x comma z semicolon theta. This is a latent variable, right? So remember, z was our GMM latent variable, the cluster assignment probability, right? So we have to sum, or marginalize, over all the possible choices of z. This is basically saying: I don't know what z is. I have some probability distribution that I can compute over my data and z given theta. And since I don't know what z is, I marginalize it out-- I sum over all the possible values. And this will get me back a probability for x, right? This is a sum over all possible z's, and it leaves me with a probability for x. Is that clear? Yeah, ask a question, please.
If you knew that, Godspeed. Go do it. You just solved GDA. If instead what you know is there exists some mapping that's out there, then that structure that you're putting into your function-- and what I'm saying is that mathematically comes down to baking exactly this in, OK? And this is the mathematical form for all of those latent variable models. So when we have that idea about latent structure, we'll eventually put it into this mathematical form. And we'll see a couple more examples. Wonderful question. In GMM, this was exactly the z there. The notation isn't an accident. It's the same z. [INAUDIBLE]-- Go ahead. --example of this? Yeah, so the example that we had was basically the whole lecture where z was the probabilistic linking between sources and photons. Yeah, yeah. So that's one. We'll have more examples later. But I want to get through the algorithm in this abstract form, and we can shoehorn more things into it. And what I'll do afterwards is put GMM right down in this language. We need a couple more things. Please. Yeah, correct me if I'm not understanding this correctly, but-- so z is the probability that a point is coming from a particular cluster. What is it? Like, what is probability of x parameterized by theta actually represent in this case, in that photon example? Yeah, exactly. So remember, if you-- I think it was said yesterday by someone here on that side of the room. So I don't know if that's spatial recognition helps you in the last lecture. But it was like, imagine I was guessing all the photon models that were out there. So each one was parameterized by some choice of z(i). And then what I'm thinking about is what I want over that is that, across all those thetas, no matter how I instantiate z, each one gives me a different probability distribution. I can sum them up, and that tells me, given this theta, no matter how z is assigned or marginalized across all ways that it's assigned, how likely is the data? So P(x; theta), we've been using forever. We used that from the supervised days. We just inserted z and said, well, there's this wild z that we can't observe, but it somehow constrains x. It means that x-- like, the relationship between theta and x. And that's what the model does. Awesome question. Very cool. These are wonderful questions. I'd much rather answer these than badly draw the pictures that come next. We're going to get to those pictures no matter what, so there's really no saving us. All right, let's get to the bad pictures. All right, so I'll try and leave this more or less on the screen. Here's the algorithm. This is a picture, which maybe won't make perfect sense to start with, but we'll get there. I'll go. All right, so remember what a loss function looks like. I'm drawing everything in is in horrible high dimensions. My axis is theta. And then what I have-- and I apologize. I will use a bunch of colors. I hope this is OK for people to see. If not, let me know. Doesn't look like the most visible color but doesn't look like the least visible, and I need a couple. This is my loss function, l(theta), OK? So this is-- I'll write that in black there. This is l(theta), OK? So this is my loss curve, OK? This is l(theta) here. Now, remember, it's not a nice concave or convex function, right? We wouldn't expect it to be. We would hope, because we're going to minimize it, that it's concave. That would be nice. If it just looked like this-- like, oh, that'd be so great. We would just climb to the top. 
But we saw in a lot of the problems that we were after, it doesn't look like that. It has these kind of weird bends. So we had to settle-- that's another way of copping out and saying, we had to settle for these local iterative solutions. That's all we're after. We settled for that in KMM, for k means, and we're going to settle for that in GM, OK? So how does the algorithm work? We start with an initial guess. Now, again, you could ask-- these colors seem harder to read. You start with an initial guess. So theta, let's say, at time t-- so it could be time 0, right? This is just the initial guess, whatever it is. Then what happens is this is mapped up to here, which is l of theta t. I haven't written anything. I'm just giving notation. This is just the value of the loss that I currently have. I suspect there's something up this way I'd like to get. So how do I do it? What's the algorithmic piece? What I'm going to do is I'm going to form-- so the problem is optimizing over all those z's seems daunting, directly optimizing the l's. So instead, what I'm going to do is I'm going to come up with a local curve, OK, and I'm going to call this curve Lt of theta. It's another function. I'm only drawing a piece of it, but it goes the whole distance, right? It's some curve. Now we'll pick Lt, usually, to be some nice convex function, something that's easy to optimize, right? So we're going to try and get that kind of easy-to-optimize function. And then what we're going to do is we're going to optimize that function. We're going to find its local maximum. So it's local maximum, for the sake of writing, is, let's say, here. And then we're going to set that to be theta t plus 1, OK? And this is now l of theta t plus 1. And we're going to, again, create some new curve, Lt plus 1 of theta, based on that point, OK? And the key aspects of the point that I'll write in a second is this point is a lower bound. This curve is a lower bound. It's always below the loss, so it's kind of a surrogate that I'm not overestimating my progress, and it's tight. It meets at exactly that point. So if I did happen to have the actual optimal value, it would meet at that point. So I wouldn't think and get fooled that there was a higher loss function somewhere else. Let me write those two things, OK? So first, Lt of theta is going to be less than l(theta). We'll call this the lower bound property. Lt of theta t is going to be equal to L of theta t-- sometimes call this the tight property, OK? Our hope is Lt is easier to optimize than l. So this picture-- the content is we're picking these-- like, that was a really bad drawing of one, but these picking these concave kinds of functions, which are easy to maximize, right-- that's what I mean by it kind of looks like a supervised thing. Then we maximize that, and this is formalizing the back and forth. We take that new maximum that we have, which is our new best effort of parameters. And then, [MOUTH POP] we then do it again and create another curve. Now, the way we're going to create that curve-- you're going to see in one minute. It's going to be Jensen's, and that's the whole algorithm. So I'll sketch the algorithm. You don't have math to talk about the algorithm, but hopefully, it's clear what's going on. Easy-to-train surrogate, and we kind of slowly hillclimb with that easy-to-train surrogate, alternating back and forth. And this is what we were doing in K means. This is what we were doing in GMMs as well. 
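For the record, here are the two properties in symbols, plus the one-line consequence that the picture leans on but doesn't spell out: the iterates can only go up.

```latex
% Surrogate properties and the hillclimbing guarantee they buy:
L_t(\theta) \le \ell(\theta) \ \ \forall \theta \quad \text{(lower bound)},
\qquad
L_t(\theta_t) = \ell(\theta_t) \quad \text{(tight)}.
% With \theta_{t+1} = \arg\max_\theta L_t(\theta):
\ell(\theta_{t+1}) \ \ge\ L_t(\theta_{t+1}) \ \ge\ L_t(\theta_t) \ =\ \ell(\theta_t),
% so the true log-likelihood is non-decreasing across iterations.
```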
And just so it's super clear, I want to make clear here: theta t plus 1-- this is nothing more than the argmax over theta of Lt of theta-- means I do the optimization on the surrogate curve that I created. Cool. All right. I think that this description, hopefully, gives you some intuition of what's going on because, otherwise, the math is kind of bizarre-looking. But we'll see. So this is the rough algo-- I'm just restating what's on the board; I'm not giving you new math. This is going to be called, not surprisingly, the E-step. This says: given theta t, find this curve, Lt. And then the M-step-- and together, EM-- given Lt, set theta t plus 1 equal to the argmax over theta of Lt of theta. Cool. Please. Just could you reiterate-- like, why are we not using gradients on the original loss? Right, so we could imagine doing some kind of gradient descent here, but it's not clear how to deal with this marginalization that happens in the middle. If we did some marginalization or some sampling, we could do something that looked like that. But it's because we have this decomposition-- note, you can also imagine that we have a decent solver for the inner loop because it's this nice-to-solve thing. I would say, over time, this split of what's nice to solve and what's not-- right now, I'm pitching it to you as: it must be concave, and so it's nice. But it really just means I have an internal solver that's fast and that I kind of trust, and I have something on the outside, a latent variable, where I'm splitting up the modeling. It's one of a number of decomposition strategies. It doesn't mean it's the only way to solve it, though. Wonderful question. Cool. All right, so the question is, how do we construct Lt? And I claim we know everything else. So we'll come back to that claim in a second, OK? So let's look. It's going to go term by term. So let's look at a single term in our equation, OK? I'm going to grab one of these characters, just one, and work with that, OK? So how do we construct it? Right now, we're trying to understand how to create this Lt from this function. And you should roughly be thinking, because I told you, that Jensen's will have something to do with this. Now, what we're going to do to put it in the form where Jensen's can be used looks wholly unmotivated, OK, totally unmotivated. But it's to shoehorn it into what we're doing-- there's some motivation, but it's kind of opaque, let's say. What I'm going to do is something which, at first glance, seems strange: I write log P of x semicolon theta as the log of the sum over z of Q of z times P of x comma z semicolon theta, divided by Q of z. Now, this is true, formally, for any Q of z that I pick, OK? Please. Would you just go a level [INAUDIBLE]?? Oh. Right? So here, I'm just introducing Q. This is true for any Q, right? Let's not worry about support issues-- I'm just putting in something that multiplies and divides by the same thing, so I'm multiplying by 1. Seems sort of unmotivated to do this. Now, I'm going to only consider-- we get to pick Q's. So I'm going to pick Q's, so that I can use Jensen, such that Q is a probability distribution over the states: the sum over z of Q of z equals 1, and Q of z is greater than or equal to 0, OK? And I'm going to call this property star, OK? So I'm going to pick Q as a probability distribution. I'll write that in a different color. OK, why? Because now I can make my argument one line. That's the real reason. So how does it work? Yeah, good. So we have this character-- copy-- in here. This can also be written-- oops, I don't want to use blue.
This can also be written as an expected value, where z is distributed like Q of this weird-looking quantity. Why is that? Well, it's just the definition of expectation. This is just symbol pushing. There is nothing deep going on. But it's important symbol pushing because it means Jensen's applies. Oops, log of this thing, sorry. Dammit, I forgot a log. OK. I'm just transforming this thing internally into this notation. Yeah, please. What's Q? Q is this function that we picked up here. So Q is just some probability distribution. And this is going to define our curve. Just getting a little bit ahead of ourselves, we're going to allow-- the curve is going to be parameterized by whatever probability distribution we want. So it's our degree of freedom. I'm just telling you something that's going to hold no matter how I select the probability distribution. The tool that I have in my arsenal to do that is Jensen's inequality. Now I've turned this into an expectation, and in one line, I'm going to be able to turn it into a lower bound that works no matter how I pick it up. Graphically, what I'm doing-- sorry to confuse folks who are copying-- is basically show how to construct this Lt that's always a lower bound everywhere, and that's where I'm going to use Jensen's inequality. So let's see that next line. We'll come back to this. So this is less than. I can pull the expectation out. P(x, z; theta) over Q(z). This is Jensen, OK? Log is concave. This is equal to some Q(z) times log P(x, z; theta) Q(z)-- again, just symbol pushing, OK? So there's only one content line here, OK? The key holds for any Q satisfying star, OK? OK? No matter how I pick the probability distribution, this chain of reasoning goes through. Please. [INAUDIBLE] always the first-- the second value? Like, how did you convert the lower bound, that thing, and do certainly the expectations? Yeah, so this was exactly Jensen's inequality. So if I scroll back up, this was Jensen's inequality. But because I was applying it to the negative of it, it's exactly the same piece, but it reverses the inequality. And so I'm just directly applying that reasoning. Well, I mean, like before, I was like, how are you converging? Oh, this thing into this thing? Yeah. Yeah, sorry. So this is just because Q(z) is a discrete distribution, and the definition of expectation is, this is a bunch of numbers that sum to 1. So this is an expectation with respect to some distribution, in particular, the one where z(i) occurs with probability Q of z, z(i). That's it. It's, again, just symbol pushing. Please. [INAUDIBLE] distribution of z, it is going too far. Therefore, 1 plus Yeah, so you want to know how we ground it into an example. Is that what you're asking? But isn't phi completely [INAUDIBLE]?? No, no. So there's no phi here. So apologies if there's something difficult to read. There's a theta here. There's this new Q that I've introduced. Q is something that I've artificially introduced. And I'm just saying that all I've shown here is that I have a way of-- if you pick a Q that satisfies this, I have a way of lower bounding this function, getting a family of lower bounds to it. And I'm trying to give you the intuition of why I might want to do it. It's so that I can construct those curves that come later, because now this function is going to be much nicer to optimize, but we haven't quite gotten there yet. OK? So this whole thing is-- this gives a family. This is just what I was saying there. So you're right ahead of it. 
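Before moving on, here is a five-line numeric check of that chain of reasoning, assuming only NumPy. The numbers are made up: p_joint stands in for P(x, z; theta) over a handful of z values, and the second check previews the particular choice of Q that's coming next.

import numpy as np

p_joint = np.array([0.08, 0.30, 0.02, 0.10])     # stand-in for P(x, z; theta) over 4 states z

def bound(q):                                     # E_Q[ log( P(x, z; theta) / Q(z) ) ]
    return np.sum(q * np.log(p_joint / q))

true_ll = np.log(p_joint.sum())                   # log P(x; theta) = log E_Q[ P/Q ]

q_arbitrary = np.array([0.25, 0.25, 0.25, 0.25])  # any distribution gives a lower bound
assert bound(q_arbitrary) <= true_ll              # Jensen: log is concave

q_posterior = p_joint / p_joint.sum()             # Q(z) = P(z | x; theta) makes it tight
assert np.isclose(bound(q_posterior), true_ll)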
This gives a family of lower bounds. Namely, this is how I get Lt theta less than l(theta) because, term by term, it's going to be less than or equal to. Now, it doesn't satisfy all our requirements, because we have to make it tight. So how do we make it tight? That's the next piece. But right now, I have a way of going term by term from the likelihood function and getting lower bounds at a particular spot. And it'll be a lower bound no matter where I am, OK? But I have to pick a certain Q to make this operational. That's the piece. So I have freedom to pick Q, and I'm going to pick a very specific Q, and that's going to give me a lower bound. And that's going to allow me to get the curve. Go ahead. Would then start and then Q has to be greater than 0? Yeah, yeah. So I said I was going to ignore the support. We can imagine just for the sake of this lecture that it's strictly greater than 0, so I don't run into weird things about what I mean by divide by 0. Here, because I'm controlling the multiplication ahead-- ahead of time, it does make sense, but you're right to point that out. So just think about it as greater than 0. Yeah, wonderful question. Cool, all right. All right, so now how do we make it tight? So what we have to do-- the intuition here is that we want to make Jensen's inequality tight. And the idea is if what's inside is constant-- imagine there was a constant inside, that this term was constant for all the different values of z-- then the expectation clearly doesn't matter, right? So if they were all the same, then these two would actually be equal to one another, right? This is some value, alpha, and then you would get a sum over all the alphas that were there. They would sum to 1. Boom, done. If they all have the same value here, alpha, they would be here in the log, and they would also sum in the same exact way. So as long as this term is a constant-- that is, it doesn't depend on z-- I'm in business, all right? So what that means is I want to pick Q such that log P of x, z; theta over Q(z) equals C. Now, before, I had all kinds of freedoms to pick whatever Q I wanted. Now this is where the probability comes in. So Q(z) has to be related in some way to P x of z for this to work. Go ahead. What is c? Ah, what is z or c? Oh, c is some constant. This is a constant independent of the-- just for some constant c. It does not depend on z, independent of z. We don't care what its value is. We just care that it doesn't depend on z in any way, and then it will be exact equality. Then Jensen's will be equality, OK? All right, so what is the natural choice? Well it's that Q(z) should equal P of z given x; theta. Why is that? Well, this is also equal to-- so this is because P of x of z of theta equals p of z x; theta P of x; theta. So if I plug these in, they cancel out. And c is equal to log P x of theta, OK? So let me make sure this is clear. Note-- this just means "note well," and B-- I just use it reflexively just to signal. Q(z) does depend on theta and x. So we're going to have this notation, Q(i) of z because it depends on each different value. So each data point is going to get its own different Q, which is the log of how likely this thing is, OK? And we picked those for each i. So because we did this term by term, we can pick that Q-- Q1, Q2, Q3, all different. And we pick them all so they satisfy this equation. OK? This thing has a very famous name, so I'll write that while I kind of stall for more questions. 
So what we've defined here is called the Evidence Lower BOund, or the ELBO, which actually is a fairly-- like, if you say ELBO to a machine learning person, they actually know what it is. It's not something we're making up. It's a real thing. So the ELBO of x, Q, theta equals the sum over z of Q(z) log P(x, z; theta) over Q(z), OK? And what we've shown is that l(theta) is greater than or equal to the sum, because we did this term by term, of the ELBO of x(i), Q(i), theta. Note the z doesn't appear as an argument there-- the z is marginalized away inside the sum, so the z can't appear there. Only x(i), Q(i), and theta appear, OK-- and this holds for any Q(i) satisfying star. Sound good? That was just restating the lower bound. All I said is we went term by term through this thing, so it holds for every term that we can pick Q(i). As long as it's a probability distribution, it's a lower bound. And then we also showed that l of theta t equals the sum over i from 1 to n of ELBO of x(i), Q(i), theta t, for the choice of Q(i) above, OK? So hopefully, that picture makes sense. Again, just to recap what's going on here, we have this opportunity to pick these bounds, and we'll use them in a second, so it'll hopefully become more clear exactly what we're kind of optimizing for here. What we're going to do is we'll see how we pick the Q(i)'s and all the rest in a second. But this is basically saying that it satisfies the two properties that we had before. We're going to pick where we are on the curve. We're going to find this upside-down bowl-shaped thing. We're going to then optimize that thing in a second, pick our new theta t plus 1, then repeat and do another curve at the new point. All right, so let's do the wrap-up and state the algorithm now with our newly hard-earned language. Yeah. [INAUDIBLE]? Then we need to find a lower bound. Oh, no. So these are both on the original loss. These are just saying, this is the Lt here, capital L. Each one of these is capital L, basically, right? And then this one here is saying that, at that particular point for that t-th instantiation, this is where we are. OK. Yeah. All right, so the wrap-up is as follows-- this is how-- we can now write down the algorithm in kind of full generality with mathematical precision, although it may still be a little bit opaque. We set Q(i) of z in the E-step equal to the probability of z given x(i) and theta t, for i equals 1 to n, OK? So this says that you're going to pick the Q(i) distribution that says, what's the probability that's most informed, or the exact probability that comes from your model knowing the data and the current guess of your parameters, right? So you have some theta at some time. You plug it in. You know the data point that you're looking at. You condition on that. And you say, what are the most likely values of the cluster linkage-- as we were talking about before, the source linkage-- for this particular point? You get a probability distribution over those. You set them to Q(i) of z. That's really what's going on. It's your estimate of how likely that is. Then you take an M-step. Theta t plus 1 equals argmax over theta of Lt(theta), where Lt(theta) equals this ELBO sum over i of ELBO of x(i), Q(i), theta, OK? Your current guess of parameters. So basically, what it's saying is, you give me a current guess of parameters. I get the lower bound that's underneath the covers. Then I optimize that lower bound surrogate. I get the theta t plus 1.
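To have the object in one place, here is the per-example ELBO as a tiny sketch, assuming discrete z and plain NumPy arrays; nothing here is a library API, just the formula transcribed.

import numpy as np

def elbo(log_p_joint, q):
    """ELBO(x, Q, theta) = sum_z Q(z) * log( P(x, z; theta) / Q(z) ).

    log_p_joint[z] holds log P(x, z; theta) for one example x;
    q[z] holds Q(z), a distribution over the latent states (sums to 1).
    """
    return np.sum(q * (log_p_joint - np.log(q)))

Summing elbo over the n training examples gives the surrogate Lt(theta) that the M-step maximizes.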
That gives me a new guess of parameters, which defines-- you get a new curve, a Q(i) for each one of what's going on, then I go back. Yeah, please. Is it a ELBO down there-- is that the Q(i)? Q-- oh, sorry. Yeah, good call. This is Q(i), and this is theta. And I'm inconsistent with the semicolons too. So you move this. So there's a good visual distance. This is an x. This is a Q. This is a theta. Awesome. Please. More a notation question, I guess. Do you use that equation [INAUDIBLE]?? Right. Yeah. So it's just as before. t starts at 0. We have that initial guess, and then we go from there. Theta is our current guess. Cool. All right, why does this terminate? And it's basically for something that's kind of not very interesting or satisfying, but it does. This gives you a sequence that is monotonically increasing or nondecreasing, OK? So it's possible that it would grind to a halt. But eventually, it has to be strict. There are some other things about how fast it terminates. But it's a monotone sequence, so we'll have a convergence of subsequences. That's really all that matters, OK? We don't say how fast it converges. It's a separate issue. Is it globally optimal? Well, no. No. Just look at the picture. And so to derive a counterexample, you would just find a likelihood function that had those two bumps. And you would run it in that particular lower bound setting. And what it will do is it will gradually hillclimb. And this is actually not great. Like, it can't go back downhill, right? It's got to just continue to go up. If it gets locked inside one of those bumps, it's kind of toast. OK, so in summary, what we saw here is we derived EM as MLE as promised, OK? So just to recap what happened here, we started with this notion around Jensen's and convexity. So we looked at a pictures of convexity, and we got an intuition of what sets are convex and what sets are not. We wanted to use concave functions, which are these kind of downward-facing things. The chords are always below them, OK? Those are the loss functions because we wanted to maximize them. The reason that was important is we had to do this back-and-forth iteration. Given a set of parameters, we were going to find a surrogate. That surrogate was going to be concave in our setting. It's going to be one of those nice functions that we were after. We would use Jensen's inequality as a way of constructing that entire curve. We needed the entire curve because we wanted to optimize it. So it wasn't enough to find a point in a lower bound. We needed to find the whole thing that was underneath it so we could run our argmax step. And that was the setting where we would learn all of the parameters and estimate that in a way that was hopefully nice and easy to do, which was like, estimate the means and the variance of the data that we're given. We'll run through an example of that. This is a necessarily kind of abstract and confusing algorithm. The best way to understand it is just to run it through a couple of the different examples. EM and the next one-factor analysis-- and by the end, you'd be like, oh, OK, that makes sense. It's a lot of notation because we're abstracting out a huge number of things that we're doing. But in the end, it's not so bad, right? You take the Q(i)'s. And this way, set the thetas, do a descent on them, or ascent in this case. Do argmax. OK. All right, so let's see it for our Gaussian mixture model. Please. For this, [INAUDIBLE] this termination condition event. 
Oh, so the termination condition is not really important, or in the classical sense. The thing is that it's nondecreasing so that, eventually, there's a convergent subsequence of it. It's not telling you how fast it converges, for example. And it converges for the same reasons that GMMs converge that we were kind of going downhill, if you remember it, every time. Like, there was some loss functional that was decreasing. And this is just saying there's something that's continually increasing. When do you terminate it operationally, like you're running this algorithm, when do you decide? You look and see if the loss of the likelihood function is not changing too much. What does too much depend on? Depends on your data, depends on the problem. Like, sometimes if you have a huge amount of data, and you're averaging over billions of examples. Sometimes if you have only a small amount of data, you want to get to machine precision and 10 to the minus 16. And so that's the way you decide when to do it. This just says that it's not going to oscillate wildly. It's a very weak statement I'm making. Yeah, please. Can you explain what specific part of this is linked to the MLE sort of aspect of it? Oh, awesome. Yeah, so we're going to see the MLE when we actually do the computation here. The reason it's linked to MLE comes from a very simple piece, which is we started in this model, where we were saying, the way we're going to think about the world was to maximize the likelihood. And that was how we think about our data. That's less disturbing to this group than it is to, I guess, generally worldwide who think about this, because this is the only framework we've used in the course, but that's what I mean. We started with l(theta) as what we were optimizing, and then we derived this as a set of concerns. We didn't get to a global optimum. So I don't mean that we definitely guaranteed that we got the maximum likelihood estimation, just that you can phrase what's going on as MLE. And so when you get into other estimation problems and the subproblems, you just apply the MLE stuff you learned from the first half of the class. And we'll see that in an example. Does that make sense? Yeah, yeah. Awesome. Thank you for the question. Please. Why is this tight? Which one? Why is this tight, the ELBO you constructed? Oh, it's tight because we went through this small piece here, which was that if we selected it as a constant in this particular way-- so before we could pick any Q and it was a lower bound, as long as we did this, then actually this line was no longer an inequality but was actually exact equality. Oh. And it depended, though-- that selection of Q depends on theta and x. [INAUDIBLE] Awesome. Great question. Yeah, and that's just making sure that the picture in your head is exactly right. We go up to the loss curve. We get something that's underneath it that touches at that one point. And then any optimization we do there is actually also optimization on the loss curve itself. Cool. All right. EM for mixtures of Gaussians, or we call them GMMs, sorry. All right. All right, so what's the E-step? Huh? Yeah, I'm just going to copy down the thing. So let's get the generic algorithms. Let me get the generic algorithm. All right. All right, just so we have it on the screen. So here's our warm-up-- not really a warm-up because we're almost out of time, but here's-- remember, if we saw how this worked-- P x(i) and z(i). 
And remember, we factored it as the following-- P of x(i), z(i) equals P of x(i) given z(i) times P of z(i)-- nothing crafty going on here, not tricky. z(i) was a multinomial with parameters phi j. OK? This means phi j is greater than or equal to 0 and the phi j's sum to 1. And x(i) given z(i) equals j was, remember, our Gaussian N for cluster j. And so then once we knew-- given z(i) equals j-- that every cluster had a different shape-- so we had a different mean, mu j, and a different size or variance, sigma j squared. And I'm doing everything in one dimension, but in two dimensions, you would have actually the whole covariance would be different. These are the cluster size descriptions, cluster means, OK? All right, z(i) is our latent variable. All right. So let's take a look. What does EM actually do here? So what is EM? EM is very general. You can instantiate it, right? So what does it mean here? So Q(i) of z is going to be equal to P of z(i) equals j given x(i) and so forth, OK? Now, what actually happened here when we wanted to understand-- what was the probability? This says, the probability that i, the i-th component, belongs in j given what we've observed about x(i) and what we know about the cluster shapes and their frequencies. So if you remember, we had this diagram that I drew quite poorly the last time that said we had these two bumps, which were our two Gaussians, let's say, in one dimension that looked like this. This was mu 2, sigma 2 squared. This was mu 1, sigma 1 squared. And the question is, you give me a point here. This is my x(i). How likely is it to belong to 1 or 2, to cluster 1 or 2? Right? That's basically what we're asking. What's the probability that this point, this i-th point here, comes from 1 or 2? Now, remember, if we just looked at this, and these two distributions were-- or the phis were equal-- that is, both sources were generating the same amount of information-- then we would say, oh, it's probably much more likely it belongs to this function, cluster 1, than cluster 2. But if we knew, say, on the other hand-- if we knew phi 2 was hugely bigger than phi 1, right-- a billion points came from the second source, and only one point came from the first source-- we'd probably say that it's more likely that it would go to cluster 2, right? It would certainly boost its probability. So now the question is, how do we automate that reasoning? And that is Bayes' rule. More likely in 2. So to automate this, this is Bayes' rule. This is all we did last time, Bayes' rule. It just weighs those two probabilities and tells us what should happen. That's it. We ran through exactly those calculations last time. All right, let's take a look at the M-step now. In the M-step, we have to compute derivatives. I want to highlight only one thing here because it's something that causes people pain when they do their homeworks. We have to compute derivatives. So we're maximizing here over all the parameters, phi and mu and the sigmas, sorry, all the covariances. So these are the sigmas, lowercase. And the notation above-- these are all theta, right? So theta refers to all the parameters of the problem. We were breaking it out into mus and sigmas and phis. So those are all the things we're optimizing over. The z's are the latent part; everything that's nonlatent-- the x's-- is observed, not hidden to us. And what we are maximizing, from our ELBO lower bound, was this: the sum over z(i) of Q(i) of z(i) times log of P of x(i), z(i); theta over Q(i) of z(i), OK? This whole thing, we're going to call fi. This is fi of theta. It hides a ton in our notation, all right? So this thing is-- let's write it out because the gory details will help us. Oh, please. You have a question.
Do you mind defining what is latent and what is not? Yeah, so in our terminology, z is just latent. So I'm giving you the intuition that it's something that's hidden or not observed. But formally, it's just going to be anything that's a z. z is latent. That's our definition. Yeah. Please. So the fi is just like the ELBO [INAUDIBLE].. Exactly right. Yeah, this is exactly the instantiation of what we had above. We reasoned about this through ad-hoc reasons last time, but it is exactly the ELBO that we're now going to minimize with derivatives. And to make it concrete, I am either going to waste a bunch of your time or something will snap in your head and see how these things put together. I'm going to write out exactly what fi(theta) is so that you can see what the derivatives are that you will compute on this thing, because right now, it's probably pretty mysterious to you. Like, there's ELBOs, and there's P's, and there's Q's. And you can just write this thing down and compute its derivatives, and that's what you do. I mean, that's how this whole method works, just abstracted three orders of magnitude more than it should be, OK? So let's see that piece. Oh, please. Sorry. The z(i) that said we're summing over? Yeah, that's going to be-- so I'm just using that notation to make sure it's clear that it depends on the i. It's actually just a z that you're summing over. And it's summing over, for example-- like, we're imagining that it's discrete to make our notation a little bit nicer. It would be summing over all of the different clusters that are possible there, all the different sources. How likely are you to be in cluster 1, 2, 3, 4, 5, so on? You could also-- we'll see later-- replace it with an integral if you had something really fancy that was there, like if you had a continuous distribution over the hidden states. Yeah. [INAUDIBLE] z, to similar to what phi does, like if phi is greater than [INAUDIBLE],, what sorts of [INAUDIBLE]? Ah, yeah. So it is, in fact, because of this right here, which I kind of glossed over. Q(i) is exactly setting that-- is setting this function. So I glossed over this really, really quickly because it was the same calculation we did last time. P z(i) j-- to compute that, remember, we expanded it by Bayes' rule. We had two different components. We had, if you knew you were in a cluster, how likely is the data point? And then we had a term that said, how likely is the cluster? And those were the two functions that we put in and broke down by Bayes' rule. It's exactly the same. You've got it perfectly. Yeah. All right, so let me write out this monstrosity just because it will be potentially-- it has been in the past educational. Who knows if it's educational in the future, and the future being, like, two seconds from now? All right, I'm going to use a notation, and hopefully it doesn't confuse you-- Q(i) equals z j. So this is the piece there. So this is the weight. This w(i) is the same w(i) we had before. I'm sure you're intimately familiar with all the notation I used in the GMM lecture, but it's the same w(i)j that we had before. It's the weight that summarizes this probability, just so I don't have to write that whole thing out, OK? All right, so fi of theta is going to be equal to the sum over j-- because now I'm summing over the cluster centers, right? The z(i) notation was still very abstract-- wj(i), which was summing over this part here, log-- and help us all, 1 over 2 pi-- this is a covariance, 1/2. 
This is the exp of minus 1/2 times (x(i) minus mu j) transpose, sigma j inverse, (x(i) minus mu j)-- and then all of that times phi j. Oh, I just ran off the edge of the board. Oh, that hurts. All right, let me scoot. So much better here. On a whiteboard, that's really catastrophic-- times phi j, OK? So, written out in one piece: fi of theta equals the sum over j of w(i)j times the log of-- 1 over (2 pi) to the d/2 times the determinant of sigma j to the 1/2, times exp of minus 1/2 (x(i) minus mu j) transpose sigma j inverse (x(i) minus mu j), times phi j. Let me make sure the brackets are clear. I'm going to highlight the brackets like it's a syntax editor. So make sure they're all there where they're supposed to be. This blue goes with that one, so, OK, great. Ah, and that means I'm missing a log. Perfect. All right. Not so bad. Oh, and this whole thing inside the log is, unfortunately, over w(i)j, right? That's just this piece is this piece. This piece here-- this is the probability. This is the Gaussian, remember, from our model. Let's go back up here. Sorry for all the scrolling. This is our Gaussian here. This is a Gaussian distribution with center mu j. I did use a higher dimensional covariance because it's something you're going to have to compute. So I've gone from 1D to higher dimensions. The notation doesn't change, except this is what the Gaussian looks like instead of the one-dimensional squared term. You know that, right? And then there's the phi j, which is just multiplied times this horrible expression. And this exp parentheses is so I don't have to write it in superscript, right? Just exp of the argument-- just a bad habit that I always use brackets for this. It's historical, and I would love to beat it out of myself if it were possible. Please. Does the covariance depend on j? In our model, the covariance here is something that depends on which cluster, right? So it depends on j. Sorry, I just want to make sure I understand what you-- That's it. Yeah, so it means it depends on j, yes, the covariances could have different shapes. Some could be long and skinny. Some could be shorter and round. Yeah, those depend on j. So the subscript j here is a very polite way of saying, this guy here should depend on j. Yeah, good catch. All right, so now we can compute some fun derivatives, OK? So let's compute the gradient with respect to mu j of fi of theta. We have to estimate the mean, right? Now. And I'm going to do it-- actually, I'm going to do something slightly harder. So apologies if you wrote that down. Let's do this. It'll be just one extra line because it's all linear. I'm going to sum over all the data, 1 to n. OK. So what this becomes is: the gradient with respect to mu j of the sum over i equals 1 to n-- this is over all the data-- of w(i)j times minus 1/2 (x(i) minus mu j) transpose sigma j inverse (x(i) minus mu j). And I'm going to drop terms inside the log that obviously have nothing to do with mu j. All right, and so just so you're clear what's going on here, the log turns these multiplications into additions. So when I take derivatives, the normalizing constant doesn't show up anywhere because it doesn't depend on mu. And the phi j term doesn't depend on mu either, so I'm left with these terms. Please, go ahead. Oh, what is the physical meaning of the fi here? fi? fi is just this term here as a function. It is the likelihood function after we've picked Q at the particular iteration. So it's just notation so I don't have to write this monstrosity every time. Yeah, but here, it's-- It's the ELBO. OK. It is exactly the ELBO. fi is the i-th ELBO. OK. So the w(i)j-- does it have something to do with mu? Actually, it doesn't. I don't know why I kept it. Yeah, w doesn't have anything to do with mu j. So should it be crossed out? Is that true?
Let me show-- let me see something here. No, it shouldn't have anything to do with it. Sorry, I see what's going on. This is what's going on. There's the 1/2, and there's the minus, and w(i)j is multiplied by the whole quadratic term. w(i)j comes out of taking the derivative of the log term by term: it's a constant coefficient as far as mu j is concerned, because it was computed from the old parameters, so it stays, but we don't differentiate through it. Yeah. Yeah, sorry. Thank you for the notational issue. Yeah. Cool. All right, we're in business. So what happens now? Well, some mechanics that almost certainly will introduce bugs and you will catch, and it'll be great. That's learning happening there and me making mistakes, OK? So when we actually compute this derivative, this is going to be sigma j inverse times (x(i) minus mu j). You computed this a bunch of times, all right? So, yeah, all good. So we can pull this thing out that's repeated. Because it's full rank, we can pull it out, and it's linear, and it doesn't change anything. So we want to set this to 0 and use that sigma j inverse. Sigma j is full rank. And that will become clear in a second why that matters so much, because when we pull it out, what do we get? We get here sigma j inverse times the sum-- which is an unfortunate collision of symbols-- over i equals 1 to n of w(i)j times (x(i) minus mu j), equals 0, OK? But then because this is full rank, the only way that this thing is 0 is if the sum is identically 0, right? If this were non-full rank-- sorry, the j is in the wrong spot. That's extraordinarily confusing. Since this matrix is full rank, for this thing to be 0 means that this blue part is identically 0. And so what that tells us is mu j should be equal to the sum over i of w(i)j x(i) over the sum over i of w(i)j. Yeah. That was before, OK? So, so far, nothing happened. We estimated the means by simply taking weighted averages, and we computed this before, and it's just a matter of computing the derivatives. The one that I actually care about showing you, by the way, is phi j, so let me just jump to that because we only have a minute or two left. And I want to show you what happens in phi j. So phi j is constrained. [INAUDIBLE] Please. Would you mind showing [INAUDIBLE] scrolling up? Sure. No, wait. I just want the last one. OK, sure. Also I would say, ahead of time, I do post all the notes online. Please feel free to take our reference to those notes too. They will have potentially fewer typos than me trying to answer questions, draw, and generally be distracted. Can't focus that long. I have to read the notes. They still do have typos, though, so always look at the notes. All right, let me just show this one thing, phi j is constrained, OK? So phi j is constrained, and I just want to remind you of something that you probably learned in high school or freshman year in calculus. I don't actually know when anyone learns anything. Anytime I say something like that, my students always get upset with me, so I should just stop. But I assume you've seen it before this moment, how about that? You need a Lagrangian, OK? No, you haven't seen it? That's fine too. If you want, I'll post notes about how to compute Lagrangians as well. If you haven't seen this before, this will trip you up in some way. So when you compute the derivative with respect to phi j, what happens is you're going to get something that says you have this weighted sum of w(i)j times the derivative with respect to phi j of log phi j, plus-- so if you just take this and compute the derivative, it doesn't account for the constraint. So you have a bunch of numbers that must sum to 1. So think about it like you're on a line-- let's say that you're optimizing on a line, right?
If the gradient-- like, let's say that your points are on this line, and you're saying, I want to optimize here. This condition that you could imagine for an optimal solution is the gradient's identically 0, right? It vanishes. That's good. That's a point. But what if the gradient is perpendicular to the line? Like, it wants to push you only perpendicular and has no component moving you along the line, right? In that case, this is still a critical point. It's still potentially a minimum. Does that make sense? Because it's not telling you that there's a minimum to your left and right. It's along the line, OK? So the question is, how do you encode that information that you want to kind of screen off information that's orthogonal to the line? And I'll write up a little note to show this whole thing. What you do is you introduce this thing called Lagrange multipliers. And Lagrange multipliers-- and if you haven't seen them, don't worry. These are super easy to teach. Just say this-- it's just an extra term here. And this multiplier-- it's not obvious in this formulation what it's doing, but this multiplier is basically the thing that's screening off directions that are orthogonal to these constraints, OK? So this constraint here says the phi j's sum to 1. And you put this constraint into the objective as a term that is 0 at feasible points. And it says, if you're going off in a direction that would not change the constraint's value, that's OK. You get to screen that off. And I'll make that geometric intuition-- I'll just post a one-page write-up for you. Please remind me in the thread, and I will definitely do that. If you don't do that, you'll get the wrong answer. That's also a motivation to learn it. And so what ends up happening here is you get something that says, I get the sum over i from 1 to n of w(i)j over phi j, plus lambda, equals 0. And this implies that phi j is equal to negative 1 over lambda times the sum over i equals 1 to n of w(i)j, OK? And the lambda is playing a very simple role here. It's just telling you, you have to normalize them in some way, right? Now, since-- in this case, we can do it in an ad-hoc way. Since the sum of the phi j's is equal to 1, summing over j implies the sum of the phi j's equals negative 1 over lambda times the sum over both i and j of w(i)j-- and that double sum is n, because the responsibilities for each data point sum to 1 over j. So the sum of the phi j's equals negative n over lambda, OK? And that's the correct normalization, right? Set that equal to 1, and that tells me that lambda must be equal to negative n. And plugging back in, phi j equals 1/n times the sum over i of w(i)j. It's just normalizing. It's just doing the average, which was weird to look at before. But that allows us to compute all the things in the way we would expect. Here, it's totally natural. So if you don't get the general rule, the reason I'm trying to tell you is I think we make you use it at some point, this rule. So just check. It'll come up on a homework. I don't think it comes up on an exam. But just flag something when you have a constraint. When you have a constrained probability distribution, you have to use a Lagrange multiplier. That's all I care about that you understand. In this case, it makes total sense, though, because these numbers have to sum to 1. So if you don't have a normalization constant here, you're adding up a bunch of numbers, and the sum over all of them is n. You better normalize them in some way. The whole calculation is laid out cleanly just below.
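Since the board arithmetic got tangled in a couple of places, here is the same Lagrange-multiplier calculation for the mixing weights written out once in LaTeX. Only the phi-dependent part of fi is kept, and it uses the fact that for each i the responsibilities w(i)j sum to 1 over j.

\max_{\phi}\ \sum_{i=1}^{n}\sum_{j=1}^{k} w^{(i)}_j \log \phi_j
\quad \text{subject to} \quad \sum_{j=1}^{k} \phi_j = 1

\mathcal{L}(\phi, \lambda) = \sum_{i=1}^{n}\sum_{j=1}^{k} w^{(i)}_j \log \phi_j
+ \lambda \Big( \sum_{j=1}^{k} \phi_j - 1 \Big)

\frac{\partial \mathcal{L}}{\partial \phi_j} = \frac{\sum_{i=1}^{n} w^{(i)}_j}{\phi_j} + \lambda = 0
\ \Longrightarrow\ \phi_j = -\frac{1}{\lambda} \sum_{i=1}^{n} w^{(i)}_j

1 = \sum_{j=1}^{k} \phi_j = -\frac{1}{\lambda} \sum_{i=1}^{n} \sum_{j=1}^{k} w^{(i)}_j = -\frac{n}{\lambda}
\ \Longrightarrow\ \lambda = -n, \qquad \phi_j = \frac{1}{n} \sum_{i=1}^{n} w^{(i)}_j

And to tie the whole thing together, here is a compact sketch of one EM round for the 1-D Gaussian mixture: the E-step is the Bayes'-rule responsibility computation from earlier, and the M-step is the weighted means, variances, and the phi j just derived. It assumes only NumPy; the function names are ours, not a standard API.

import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def e_step(x, phi, mu, var):
    # Bayes' rule: w[i, j] proportional to P(x(i) | z(i) = j) * P(z(i) = j)
    w = gaussian_pdf(x[:, None], mu[None, :], var[None, :]) * phi[None, :]
    return w / w.sum(axis=1, keepdims=True)       # responsibilities; each row sums to 1

def m_step(x, w):
    n_j = w.sum(axis=0)                           # effective count per cluster
    phi = n_j / len(x)                            # phi_j = (1/n) sum_i w(i)j, from the Lagrangian
    mu = (w * x[:, None]).sum(axis=0) / n_j       # weighted means, as derived above
    var = (w * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_j
    return phi, mu, var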
And this is just the principle that tells you, you have to normalize them by this n factor, OK? So all I care that you take away, if you've seen this a thousand times before, don't worry. I hope you had a nice rest. If you've never seen this before, I just want to flag for you, when you minimize a function that's constrained, make sure you use Lagrange multipliers. I will put up a little tutorial about them. You do not need to spend a bunch of time on them. It's just if you see some challenge where you're actually supposed-- some problem where you're supposed to do it, just have a little light bulb go off that says, OK, I've got to look up how to do it in this case. That's all I care about, OK? And you'll trace through it in the notes. Please. So I have two questions. [INAUDIBLE] minus [INAUDIBLE] equals to 1 because that is lambda equals to minus [INAUDIBLE].. Yeah, lambda equals minus n, so minus 1 over lambda equals 1/n-- there's a negative sign everywhere. So I'm just going to write the final expression. Maybe that would be less-- yeah. So [INAUDIBLE]. Oh, sorry, sorry, sorry, sorry. Oh, this arithmetic is what's bothering you. Sorry, sorry. This is true. Oh, OK. Yeah, sorry. I just swapped them on the board, and even that's backwards from how I should do it. I didn't see that. Sorry about that. Thank you for the catch. It's just supposed to average out so phi j looks like the weighted fraction. Awesome. Any questions about this? All right. So also, why is it [INAUDIBLE]? Because it's a probability distribution. So again, the issue here is phi j is constrained by the model. So if we go back to this model, this is a constraint on the phi j's. So whenever you have a probability distribution, a multinomial probability distribution, it's not just that the phi j's are nonnegative, which is the constraint we're almost ignoring-- but it's that the phi j's sum to 1. And so you couldn't, for example, set your probabilities to be 0.5 and 0.8, right? They have to add up to 1 here because it's a multinomial. So when we do the optimization, we could, for example, prefer a solution where we make all the probabilities 1, but that would be an invalid setting. And so that means these phi j's-- we have constrained them to sum to 1. So now the question is, when we do the gradient computation, which is exactly the gradient computation we did before, where does that show up? And it shows up in this extra term here, which is the Lagrange multiplier. This thing is called a Lagrange multiplier. The multiplier itself is called that, OK? And this is the constraint put into the normal form. If you haven't seen this before, it'll look quite mysterious. But what I was trying to do is-- I'm not going to teach you the Lagrange multipliers in this class. I'll put up something. But the piece here is that it gets you back to an expression which makes sense in this setting. And you needed something to average over because these numbers sum up to something that looks like n. If you just compute it naively, you'll get something that doesn't make any sense. Yeah, please. I think the problem is that, in a lot of sense, phi j equals 1, then the [INAUDIBLE] sigma [INAUDIBLE] of phi j's. No, no, there's no sigmoid here. A sigma for summation. No, in this setting, the sigma is out here. Yeah, but the other one, like since phi j equals 1-- I think what's confusing you maybe is this, that these are not the same j's. No, but [INAUDIBLE]. Which line? Just says phi j i. Oh, oh, oh, I see. I see, I see, I see.
Sorry, sorry, sorry. Yeah, I was talking while I was saying. Thank you for the clarification. That was really helpful. Apologies for that. Yes, it's this constraint here. Sorry, this is the constraint that was in our head. Yeah, and it just makes a mysterious reappearance here, all right? All right, awesome. OK, so what is the message that I want you to take away from this? Two things. First message is GMM is an EM algorithm, OK? That's really one of the pieces that is there. And this is interesting because we're going to see-- in the next lecture, we're going to see a different example, which is called factor analysis, where z has a very different form. And it's meant to constrain the problem in a different way. We went through in this lecture a couple of different steps. We started with that convexity piece so we could get an intuition for what these functions look like. We didn't want to use convexity. We used concavity. And then we went through the EM algorithm, which we formalized as kind of back and forth with using these curves over time. Once we had those curves, what was happening was we would pick and optimize on those curves, and we were getting these Q(i)'s, right? The Q(i)'s played a starring role. Those became our w's here, and they kind of add nastiness to all of the equations but not a tremendous amount of nastiness, right? They just add little weights and expectations everywhere. And then we ran exactly the kind of standard supervised machine learning, if you like, or the stuff that you've been doing for MLE for the entire quarter on those properties. Then we introduced a ton of typos to keep you on your toes. No, I introduced a ton of typos because I was talking while I was writing, and I shouldn't have done that. And so then we saw the two things that I cared about to highlight. One is you know how to find means, and these are just weighted means, and you should run through that calculation to make sure you know how to do it because I'm almost positive we will ask you to do it at some point in the near future. And then the second thing that I would tell you to do is when you have constraints, you have to know how to optimize them. You don't need to know the general theory of how you optimize against nonlinear constraints, but you should review how to do this when you have something that sums to 1. It's not more complicated than what I wrote here, but make sure independently you go through it and ask questions. I'm happy to ask questions. I'm happy to point you to different resources. In the next class, as I said, what we're going to see is this notion of factor analysis. And that is going to tell us how to apply EM to a different kind of setting, which, at first glance, will look kind of impossible to do without a latent variable model, and it's a pretty interesting scenario. And I think that's all I want to say. Any last questions before we head out? I'll stick around for a couple of minutes as usual. Thanks so much for your time and attention. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Selfsupervised_learning_I_2022_I_Lecture_16.txt | Today, we're going to talk about self-supervised learning. This is a lecture that doesn't have a lot of math. But it's going to be all about very recent works, probably from the last three or four years at most. And these are pretty interesting, intriguing kinds of concepts, but nothing very complex. Everything is kind of simple. Basically, I think, around probably 2013, 2014, deep learning and neural networks-- these ideas started to take off, and we had this revolution of AI in some sense. We have had a lot of amazing results in the last about 10 years after deep learning took off. But in the last two or three years, I think we see an emergent new paradigm of AI, which is still based on deep learning neural networks, but, in some sense, the paradigm has shifted a little bit towards this kind of large scale unsupervised learning or self-supervised learning. So that's what we're going to talk about today. So I guess in the last of your lectures, I think Chris talked about unsupervised learning. Those are mostly more classical ideas that were developed before deep learning took off. And today, I think we're going to talk about the new type of unsupervised learning, which is not that different from the old ones with neural networks, with some technical differences and also some differences at the conceptual level. So I think, in some sense-- actually, we also wrote a white paper about this, a bunch of Stanford people. Actually, there are more than 100 people on the paper, mostly Stanford researchers, Stanford faculty, and students. We wrote a white paper on this. We call it the foundation model. So in some sense, it's just a name. It's a name for this large set of ideas that involves pre-training on unsupervised data and then using it for a bunch of downstream tasks. I'm not sure whether you know any of these buzzwords, but that's what I'm here for. I'm going to define some of these words and tell you what the pipeline and the paradigm are. So basically, the main thing for deep learning is that we use neural networks to scale to larger data than we used to before, like 10 years, 15 years ago. And typically, at the beginning when you used deep learning, you used it for supervised learning, where you have a lot of data, maybe a million data points in ImageNet, and you have labels for them. And then you train a neural network to predict the label. And it turns out that even though the neural network is pretty big, your data is also somewhat big, you can see good results, and sometimes you can even make it work on small data. But all of this requires some amount of labeled data, right? And then, recently, people figured out maybe we should use unlabeled data as well, right? So when you use unlabeled data, then you have much more. For example, for text data, I think there is this data set which has 40 trillion words in it. But if you want a labeled data set, then you probably have a million labeled documents-- and also a label has a specific meaning; maybe you need multiple labels per document in many cases. So if you say, I can train with unlabeled data, then you suddenly have availability of a lot more data. So I think that's the main driving force. Now, we are training on unsupervised, unlabeled data. So how do we train with unsupervised, unlabeled data?
I think often people have different words for the same kind of concept. I think one of the way to call it is called self-supervised learning. As you will see, you're going to supervise your model with yourself, the input itself. So you don't have to use any label. And sometimes, as I said, in a white paper, we call this family of models called foundation model. Foundation. I'll explain this word a little bit more. And some other bears worth including pretraining, I'll explain them as well, and adaptation. So I guess I would just start by explaining pre-training. So in some sense, the way that people do this right now is the following. So you have this so-called pre-training stage where you pre-train on large scale. Here, large scale really means very, very large scale unlabeled data set. It's not always the case that you have to use a unlabeled data set. We will see actually, in some cases, you can use the labeled data set for pre-training, but you really have to make it large scale. An unlabeled set is more common than the labeled data set. And also you approach a large model. A very large model. And then the second step is that you somehow adapt this. So adapt this pre-trained model to some downstream tasks. Actually, often, you can adapt to many downstream tasks, not only one. But the adaptation to the task is the same as adapting to one. You just do it one by one. So you adapt to some downstream tasks. And these downstream tasks are often labeled data set. For downstream tasks, you have different settings, I would describe in more detail, but generally you have a few examples in the downstream task. For example, one example could be that you pre-train on unlabeled text, right, that you can download from internet. So you could have a trillion words like documents, like a lot of documents with a trillion words downloaded from data from internet and you pretrain your model on this unlabeled data set. How do you approach it? I'm going to tell you. And then after you get this model, this model is pre-trained with no labels, so it doesn't really solve any particular task. And then you say I'm going to have a downstream task about text. Maybe I care about sentiment analysis. Meaning you care about whether this sentiment or document or sentence is positive or not, right? Maybe for Amazon review where you care about whether the reviews is positive or not. And then you say I'm going to have a small number of examples for sentiment analysis. I have a small number of examples with documents and label the positive or negative label pair, right? So maybe 100 or 1,000 pairs of documents unlabeled. And then that's called downstream task, and then you want to adapt your pretrained model, which is very general, generic, to the specific task using some additional tuning. So that's the general idea. But sometimes, of course, the distinction between these 2-step is that this time, you may still involve some training, some training in this adaptation step because you're going to see the new examples on the downstream task, the sentiment analysis tasks, right? And then you have to do something with those examples. This is a training optimization step adaptation step as well. But the difference here is that this step often involves very large scale data and this generic. It's not about task specific learning. Here, it's really about task itself. And oftentimes, you have much fewer data points than the pre-training step. Sometimes you have you have 10,000, but generally not a lot. 
Sometimes you can even work with even zero examples in adaptation step. You can sometimes still adapt to the task. So kind of the intuition is that the pre-training step is about learning the generic structure or the intrinsic structure of, for example, text. If you do it for image, you are learning the intrinsic kind of structure of the images, maybe the intrinsic features about images. And then the adaptation step is more about the task itself. For images, you can care about the many different tasks, maybe you care about recognition, maybe you care about classification, or maybe you care about different kind of labels. You can have labels of different granularities. A bunch of different tasks, right? So this step is more about tasks. So that's kind of the general intuition. You push a lot of data so that give you the intrinsic structure or the best intrinsic representations for this kind of data, and that representations are useful for downstream tasks. So that you don't have to use a lot of examples in downstream tasks, right, because you already learned some interesting representations that helps you for the downstream task. All right. So basically, kind of the one implication here is that you expect that this pipeline is going to do better than just training your model only on the downstream data set. So that's the basic goal. The baseline is that you just directly train your model on the task, downstream task. But generally, you don't have enough data for that. But if you do this kind of transferring from the unlabeled data set to this one, then you may do better. And we call this kind of pretraining model also foundation model. So here, I call it pretraining model. You can also call it foundation model. And the reason for the word foundation, I think the implication is that this pretrained model is a general purpose model that can do a lot of different things and that's the foundation and this is the adaptation. I haven't talked about waste yet. Yes. But that's a good question, but I'll talk about those. OK. Any questions so far? I'm going to tell you-- OK, so this is a general idea. I'm going to tell you some-- I'm going to start with some simple notations and tell you how do people generally do this in kind of a more mathematical way. OK, so pretraining. So I guess let's say you have some data, something like x1 up to xn. So you have n data points and n be very big. And there is no label here so this is unsupervised pre-training. Of course, sometimes you can still have labels in it, but let's say we focus on a case where you have no labels and you have a model. So let's specify the input and output of the model. So typically, you can think of this as the model, let's say, is called phi of theta. And sometimes you view this as a feature extractor, a feature map, but it's a learned feature map. So this is something that maps x to phi of theta. Phi theta of x, and let's say phi theta of x is some vector. So I guess you can-- as we have kind of seen, we have this kind of feature map, often you call this representation, features, there are many names for it. Representation slash features. Sometimes we also call it embeddings, especially if you talk to more mathematical people because in math, this word embedding is used in a similar context. So this is a representation of the raw data, right? The raw data could be text, image, and this is a vector. So this is something we are familiar with, right? In both neural networks, we have viewed the last by one layer as the representation. 
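Since phi theta is literally "everything up to the last-but-one layer," here is what that split looks like in code, as a PyTorch-flavored sketch; the toy architecture and all the layer sizes (784, 256, 128, 10) are made-up placeholders, not anything from the lecture.

import torch.nn as nn

# Toy network: phi is everything up to the last-but-one layer, i.e. the
# learned feature map / representation / embedding x -> R^m.
phi = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
head = nn.Linear(128, 10)   # the last, linear layer; the prediction is head(phi(x))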
In the feature lecture, this is the feature map. Although in a feature, in the kernel, sorry, in a kernel lecture, this is the feature map. But in the kernel lecture, this feature map is given and designed. But later, when we do the neural networks, this is the learned feature map. Here, we are still learning a feature map. And we are still learning our feature map, feature extractor, whatever you call this. So and then-- but here, you learn this without labels. So you say you have some pretraining loss, maybe some pretraining loss. There are many forms of pretraining loss as well. Here, I'm just only giving one form. Something like probably nothing, maybe like a sum of loss on single example. I'm going to define the loss for different cases. Sometimes this loss can also depend on multiple examples, sometimes they can depend on labels, but something like this. You have a pretraining loss, OK? And then what you do is you just optimize this loss. This Lpre theta. Let's say obtain some theta hat. I'm going to call this my pre-training model. Oh, maybe we can call it foundation model or something like that. OK? So nothing fancy so far, yet I haven't told you what the training loss is. There are many, many different ways of doing pretraining, I'm going to tell you a few ways. But so far, you have defined some loss function, but the loss only depends on the unlabeled data, and then you minimize loss and you get some model. OK, so, and then, I'm going to the adaptation step, which I'm going to be a little bit more concrete because it's a little bit easier. So in the adaptation step, let's first clarify what kind of data you have before talking about how do you do it. So what data do you have? Often, it's a label that asset, it's a labeled prediction task, the downstream tasks. Even though I think if you look at enough papers, there are many other papers talking about different type of tasks. But, here, let's say we have some labeled some prediction task and we have some label data sets. So let's say we have a few examples. Let's call it x task one, y task one, so on and so forth. x task nt, y task nt. And this is the number of downstream examples which is presumably much smaller than-- I'm going to use nt as the number of downstream task examples. So on this one, it's supposed to be much smaller than the number of unlabeled examples in a pretraining step. And there are actually a few different settings. So one setting is that nt is 0. So this is called zero shot learning. So nt zero, it just means you don't have anything. You don't have a data set. So it's called zero shot learning. We'll see how you do it. Sometimes you can still do zero, like you can still do the task without even knowing anything about the task. And when nt is relatively small, I think this is called few shot. And what do I mean by small? I think in the literature, maybe 5 is typically considered small. not more like 50. If more than 50 examples, I think, of course, there's nobody really defined what exactly it means. But I think if it's more than 50 examples per class in the downstream task, I don't think you would call it few shot. Most likely 10. But, of course, there are cases where you can have more examples. I think I've seen cases where you have 10,000 examples or maybe even But the n there, typically, it could be a billion or even a trillion sometimes. A hundred billion. This is the setup. I'm giving this information, and then how do I do it? The first thing I'm going to do is I'm going to first define how do I adapt. 
I'm giving this model theta hat. What's the model to predict a label? I need to have a model that predicts a label. So here the theta hat predicts a feature vector, but not the label. The first thing is that I'm going to take the call. I'm going to have maybe this one way to do this, which is called linear probe, often people also call it linear hat. Something like this. So kind of like the idea is you probably should be familiar with this. It's just like you take this feature phi theta hat x. And then you apply a linear classifier on top of it. You say I'm going to take an inner product of this feature with some linear classifier W, and W transpose x is my prediction model for the downstream task. Let's say suppose you have a regression model, then you just want this number to be some real number. And if you have classification, then this is supposed to predict the probability of the label above the label. So it's just almost the same as what we do in neural networks where you just have some feature and you apply some linear hat on this feature. And then we will do linear probe, what we do is that, we have defined a prediction model. And then you can define how to-- you can say how do I learn this W? Here, the theta hat here is fixed. So I'm only learning W when I'm doing a linear probe. How do you learn W? I'm going to just tarin W with some loss function, with some loss function. On the downstream examples. So basically I just do something like maybe 1 over Nt and minimize over W and I have a loss function, which is the sum over the loss of the downstream tasks. Assuming you have downstream tasks or assuming you have downstream examples. If you don't have downstream examples, this doesn't apply anymore. I will tell you how do we do it if we don't have downstream examples. But suppose you have downstream examples, then you just minimize the loss something like the loss called l task and this loss function is something that compares the label. And your prediction of your model, the prediction of your model is something like this, right? This task, for example, could be just our L2 loss, mean square loss. This loss could be just the mean square loss or could be some cross entropy loss or some other loss that you care about. And so when you do so-called linear probe, it means that you only optimize W. So you are only optimizing W, which is the vector of dimension m. W is an Rm. So you are just optimizing this m dimensional vector to make your model fit to the downstream task. And this is the so-called adaptation step. Basically, the W is the thing that you want to use to adapt your model to a new task. Questions? [INAUDIBLE] labels are you know downstream-- [INAUDIBLE] So where the label comes from, you will see we have a label in the set. Somebody gave you, right. So you collect the data set as the same as what we did before. But the difference is that you may not have to collect as many as before. Like if you just turn on this data set just from scratch, then you may need more than what you do here. OK. [INAUDIBLE] So I think-- so I guess maybe the question is, what transfer learning comes into play here? So I think you can, in some sense, call this also transfer learning. So transfer learning used to be-- the transfer in its word occurs-- like this term was used way before people have done any of the pre-training. So like, now like I think like transfer learning probably like people started like in early 2000 already, even maybe before that. 
And at that time, transfer learning meant what? In the pretraining language, it meant that you pretrain on a labeled dataset, and then you do some adaptation. So basically, that's what transfer learning means in the new language. But these days, when we pretrain, we pretrain on an unlabeled dataset. And maybe another difference is that with transfer learning, it used to be the case that the first task you trained on was also a classification task, kind of similar to the final task you care about. Part of the reason people introduced new terms is that when you pretrain, the pretraining task could be nothing like the downstream task. I haven't told you what exactly the task is, but at least you can imagine there's not a lot of similarity, because here we don't even have labels-- it has to be something different. Yeah. But I think you can still say this is transferring; there's no precise boundary between these. OK. So let me introduce another way to do adaptation. Another very common way is so-called finetuning, and it's also pretty easy to understand. Your model is the same: your prediction is some W transpose phi theta of x. Here, I write theta without the hat to indicate that I'm allowed to change this theta. Theta doesn't have to be exactly the pretrained model anymore; it's something you can change. So what you do is you optimize both W and theta on the downstream task. But if I just say this, it sounds like standard supervised learning, right? There's nothing different-- you are just training some neural network W transpose phi theta of x on the downstream task. The difference is the initialization: theta is initialized to be theta hat. Recall that theta hat is the pretrained model. So you train as if you are doing supervised learning, but theta is initialized to be theta hat. And for W, there is nothing from the pretrained network to reuse, because you didn't have a W before-- W is something new, so you just initialize W randomly. That's so-called finetuning. And you can optimize the same loss or whatever loss you care about. What's the question? [INAUDIBLE] Theta is the parameter for this function phi. Phi is a function parameterized by theta. So that's what-- [INAUDIBLE] No, I just mean that phi is a name. You could call it f sub theta or h sub theta. It's just a name for the model that is parameterized by theta. So phi theta is the neural network with parameter theta. I just need a notation to indicate this function-- I cannot just write theta. Theta and phi theta correspond to the same thing, but mathematically I have to write phi sub theta of something to indicate that I'm applying this model to x. [INAUDIBLE] [INAUDIBLE] specified. It could be anything. I can give a name for its dimension, let's say p, but I didn't use it explicitly, yes. The dimension of theta could be big. Any other questions? Yeah, it was just relating to the loss function for the training data. I'm having trouble understanding [INAUDIBLE] clustering and the-- So I haven't told you that yet at all, just because there are so many different variants-- I have to do it in a top-down fashion.
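And here is a matching sketch of finetuning in the same invented toy setup as the linear probe above; the point is only the initialization and that theta moves too, not the particular architecture or learning rate.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, n = 10, 5, 50
    theta_hat = rng.normal(size=(d, m))     # stand-in for the pretrained parameters
    X, y = rng.normal(size=(n, d)), rng.normal(size=n)

    theta = theta_hat.copy()                # the key difference from training from scratch:
    W = 0.01 * rng.normal(size=m)           # theta starts at theta hat; W is new, so random

    lr = 0.05
    for _ in range(200):
        feats = np.tanh(X @ theta)          # phi_theta(x)
        resid = feats @ W - y               # prediction error under squared loss
        grad_W = 2 / n * feats.T @ resid
        grad_feats = 2 / n * resid[:, None] * W[None, :]
        theta -= lr * X.T @ (grad_feats * (1 - feats ** 2))   # backprop through tanh
        W -= lr * grad_W                    # unlike a linear probe, both move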
So here, for the downstream task, I can describe these two methods up front just because you can use them for almost every situation. But when you talk about pretraining, I have to talk about what you do in computer vision versus in language-- there are some differences depending on the domain. Of course, there are also recent works that try to unify all of this, but at least I have to somewhat talk about the domains, and that's what I'm going to talk about next, yeah. OK, cool, I guess I'll just do that. So how do we pretrain? I'm going to pretrain a representation. How do we do that? Let's first talk about the vision setting, or more the standard classification setting. So let's just for a minute think of vision. Suppose you have some images, right? There are two types of pretraining. One type is called supervised pretraining. You may wonder why-- I already emphasized so much that the pretraining should be mostly on unlabeled data. But, actually, for vision, because you have a lot of labeled data already-- ImageNet is a pretty big dataset-- you can actually do supervised pretraining as well. And this is just exactly what we have seen before. What you do is you just learn a neural network, let's call it u transpose phi theta of x, on a labeled dataset, say, ImageNet. And here the notation is that u is the last layer of the neural network-- the last layer was linear, recall-- and phi theta of x is all the other layers. It's basically the last-but-one layer's activation: phi theta of x denotes what you do in all the layers but the last one, and u transpose is the last layer of the network. So, basically, if I have a neural network-- I think we discussed this in one of the deep learning lectures-- you have a neural network, you have a lot of connections, and eventually you output some y. You view all of this as your phi theta, and you view this last part as the so-called u. Sometimes we call this the linear head, right? So you just do this, exactly as in the neural network lecture. And then you just discard u and take the learned phi theta of x as the pretrained model. You think of this as the feature-- a kind of universal feature in some sense. The head is special to the labeled dataset you used: maybe you used ImageNet, so the head has 1,000 classes-- that's something special. But the feature is something more universal. You take the features and discard the head. And then once you have a new task-- maybe not ImageNet, maybe some other classification task. Say your new task is medical imaging: you're detecting from a scan image whether some person has, let's say, cancer, right? Then you just take this part, this phi theta, and you apply a new head-- I'm using W as the new head. And then you finetune it all, or just do a linear probe, on your downstream task. So you remove this head, apply something else, and do some training on the downstream task. That's supervised pretraining. If the medical images are a different size than the ones that it was pretrained on, what kind of stuff do we need to do to make it work?
Yeah, I think typically if the sizes are different, you just have to do some upsampling-- I'm assuming the new images might be lower resolution. You just make the size bigger: either you can pad some zeros outside, or you can replicate the pixels in some way. There's nothing fancy. Also, you mentioned earlier that if the task isn't similar, it might not work as well. So is that a concern here? So here, at least [INAUDIBLE], you don't have to worry much about that. Your pretrained images have the ImageNet classes-- they are animals, common objects, right? But, anyway, you discard the head. You remove the head and then you add a new head. Maybe your new task just has two labels, right, cancer or not. You just apply it to that. So in terms of the method, you can apply it even when the task labels are different. But whether it will work sometimes depends on how different your new task is from the old task. Typically, if you use ImageNet pretraining, you always learn something reasonable about the features-- you wouldn't be terrible. But if your downstream task is really not even common images, just some kind of random images, then it probably wouldn't help that much. OK, so there's nothing really fancy here, and people did this anyway in the beginning of the deep learning era. The second one I'm going to talk about is so-called contrastive learning, which is an unsupervised learning algorithm for pretraining on unlabeled data-- or people call it self-supervised learning. So contrastive learning: now I don't have any labels, I just have an unlabeled dataset, and I need to define a loss function on it. How do I do that? I first need to introduce a notion called data augmentation. A data augmentation is something that, as the name suggests, augments one example into an artificial example that still makes some sense. Typically for images, what you do is you take an original image and augment it by some random operations. You can do a few things: maybe a random crop, where you crop a patch of the image as the new image; or a random crop plus a flip-- you flip the image with a mirror flip; or some color transformation. Color transformation means that maybe you make the image darker or brighter, or sometimes you do weirder color transformations-- you change all the brown colors to white, some kind of change of the color scheme. And there are many others; some are more advanced, and sometimes you can even learn a transformation. But these are the common ones. So, basically, given an image, you can do these random operations-- you can choose to flip or not, that's a random decision; you can choose which part of the image to crop; you can do some random color transformation. You have some randomness here. And then, given an image x, you can generate a random augmentation, call it x hat. If you do this again, you can generate another one, which I'm going to call x tilde. And if you do it more times, you can generate even more of these augmentations. This augmentation was actually used in supervised learning as well.
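Here is a minimal sketch of such random augmentations on a toy grayscale image in NumPy; the crop size, flip probability, and brightness range are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def augment(img, rng):
        # Random crop: take a random 24x24 patch of a 32x32 image.
        i, j = rng.integers(0, 8, size=2)
        out = img[i:i + 24, j:j + 24]
        # Random mirror flip, half of the time.
        if rng.random() < 0.5:
            out = out[:, ::-1]
        # Random brightness (a simple color transformation).
        return np.clip(out * rng.uniform(0.6, 1.4), 0.0, 1.0)

    x = rng.random((32, 32))      # a toy image
    x_hat = augment(x, rng)       # one random augmentation of x
    x_tilde = augment(x, rng)     # a second, independent augmentation of x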
We didn't discuss them just because they are low-level details. For supervised learning, what you actually do is, given an image and the label, you generate the augmentation and just treat the augmentation as your new image. So you replace the image by the augmentation-- every time you see this image, you replace it by a new augmentation-- and train with those augmented images. And that seems to improve things: in some sense, you make the dataset bigger, because every time you augment, you get a different image. Even though they are somewhat similar, you still make the effective size of the dataset a little bit bigger. That's why people use it in supervised learning. But now I'm going to use it in unsupervised learning-- actually, now people call it self-supervised learning. I'm going to use this to create some kind of supervision, to create some kind of unsupervised loss. First, let me define some notation: this (x hat, x tilde) is called a positive pair. Positive pairs are augmentations of the same image. One property of a positive pair you can imagine is that these two augmented images probably should be somewhat similar semantically. Even though they may not have the same color scheme or exactly the same orientation, they are still somewhat similar semantically. So what you do is you design a loss function such that, one, you want to make phi theta of x hat and phi theta of x tilde close. You have two augmentations, x hat and x tilde, and you want them to have similar representations in the Euclidean space. That's one goal of the loss. I'm going to tell you exactly what the loss looks like, but this is one goal. But if you only have this goal, you can see that the loss function is a little bit questionable, because you could just map every image to the same point. Then everything has exactly the same representation, and you satisfy this goal. So, ideally, you should have something that counterbalances, to keep you from collapsing everything into one point. How do you counterbalance? You say you want random images to have different representations. So let's erase something here. [INAUDIBLE] So, to counterbalance this, the second goal is this: suppose you have two random examples, x and z-- maybe a cat and a dog. You augment x into x hat in the same way as here, and you augment z into z hat. Because x and z are two different images-- maybe one is a cat and the other is a dog-- the augmentations probably look very different as well, and they are semantically very different. So the second goal is to make phi theta of x hat and phi theta of z hat, the augmentations of two different examples, far away. This counterbalances, to avoid collapsing everything to the same point. And, actually, there's a name for this pair: (x hat, z hat) is often called either a random pair or a negative pair. At the very beginning, I think people called it a negative pair, which is not exactly right, in the sense that x and z are just a random choice of two images.
There's no guarantee that they are exactly-- that they don't have the same class or same meaning. There is some small chance that x and z are both from the same class. You may have a thousand classes; there's still a little bit of chance that x and z are from the same class. But in most cases, x and z are from different classes, and they are semantically different. So random pair might be a little more accurate, but negative pair is what people called it at the beginning, and I think now people use these interchangeably. So, basically, you want the random pairs to have different representations, and you want positive pairs to have similar representations. This is the design principle. How do you do this exactly? There are many more papers-- at least four or five papers-- that use this kind of principle. And some papers actually even drop goal number two and just use goal number one, and somehow it sometimes still works, because there's some other kind of counterbalance in the system that achieves goal two without explicit encouragement. But let's not talk about those. Let's talk about the simplest case, where we have both one and two explicitly encoded in the loss function, which is called SimCLR. This is, I think, basically the first paper that made this kind of idea work. So how do you exactly encode these two principles? In SimCLR, what people do is the following. You first take a random batch of examples, like in SGD. Call these examples x1 up to xB, so you have B examples. And then you first do some augmentations: you augment to x hat 1 up to x hat B, and x tilde 1 up to x tilde B. As you can see, these two are the augmentations of the first example, these two are the augmentations of the second example, and these two are the augmentations of the B-th example. And then here is the loss function. Let me write it down and then I can explain. The intuition is that you want to design a loss function such that-- let's see-- oh, I don't have a different color today. Because these two are augmentations of the same example, you want them to have similar representations, and the same thing for these two-- any positive pair should have similar representations. But if you look at this one and this one-- suppose you pick a pair like this-- then you want them to have different representations. And what you do is you make the loss function equal to this. It's a little bit complicated at first sight, but it's actually not that hard. The indices are the most important thing: this is i, this is i, this is i, this is i, and here this is j. Sorry, my handwriting is a little bit unclear. OK, so maybe the first thing to realize is the following. Let's focus on this term-- call this term A and this term B. Realize that this is also A, just the same term. So this is really something like minus log of A over A plus B, abstractly speaking. And if you do some simple math, you'll see that this loss is decreasing in A and increasing in B. This is relatively easy-- you can either take a derivative or just see it directly. At least the increasing-in-B part is pretty easy, because A over A plus B is decreasing in B, log is a monotone function, and you have a minus in front. And for A, it's the reverse direction.
So that means that if you minimize this loss function, you want the term A to be bigger, because the loss function is decreasing in A-- the bigger A is, the smaller the loss. And this A term is the exponential of an inner product: A equals the exponential of phi theta of x hat i transpose phi theta of x tilde i. You want this to be big, which means you want the representation of x hat i and the representation of x tilde i to be as close to each other as possible-- you want their inner product to be big. That fulfills our first goal, where we want the representations of the two augmentations of the same example to be as close as possible. And then you want this B term to be small: the loss function is increasing in B, and you are minimizing the loss, so the smaller B is, the smaller the loss. And the terms in B look like this, where you have i and j: this is an augmentation of the i-th example and this is an augmentation of the j-th example. You want their inner product to be small, which means the representations should be far away from each other. Any questions? Does [INAUDIBLE] also have [INAUDIBLE]? The exponential is an increasing function. If you want the exponential of something to be small, then you want the argument of the exponential to be small as well. [INAUDIBLE] The first one is i and the second is j. So j only shows up once; all the others are i. So there are other interpretations of this loss function-- I'm not sure whether they are easier or harder to understand than this. One interpretation is that you can view this as a multi-class classification question. Basically, if you do a lot of math, you can see that this is the same loss as the following hypothetical question: given, for example, x hat i, the first augmentation of the i-th example, I want to distinguish which one is the positive pair and which ones are negative pairs. So suppose you have some example x hat i, and you have the corresponding x tilde i. You can view this as: given this x hat i, you want to test which one of these B examples is the most correlated with it in some sense. You want x tilde i to stand out compared to the others-- you want the correlation between these two to dominate all the others, so that you can say, this one is my buddy, the other example in the pair. This is another interpretation, which I personally think is harder to understand, but let me say it anyway. Yeah, so anyway, the exact form of the loss function is not that important. There are actually other ways to implement this; it's not really necessary to use this loss function.
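Here is a minimal NumPy sketch of that loss; note the real SimCLR loss also has a temperature parameter and uses more views as negatives, which this sketch drops to keep only the structure described above.

    import numpy as np

    def simclr_loss(z_hat, z_tilde):
        # Rows are representations of the two augmentations of each example:
        # z_hat[i] = phi_theta(x hat i), z_tilde[i] = phi_theta(x tilde i), shape (B, m).
        sim = z_hat @ z_tilde.T                 # sim[i, j] = inner product of i-th and j-th
        # Diagonal entries are the A terms (positive pairs, pushed up);
        # off-diagonal entries are the B terms (random pairs, pushed down).
        log_prob = np.diag(sim) - np.log(np.exp(sim).sum(axis=1))
        return -log_prob.mean()                 # average of -log(A / (A + B)) over i

    rng = np.random.default_rng(0)
    B, m = 8, 5
    print(simclr_loss(rng.normal(size=(B, m)), rng.normal(size=(B, m))))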
The principle is probably more important than the loss function. So, cool-- I guess just to summarize, this is a loss function that only depends on x. You didn't use anything about labels, so it's an unsupervised or self-supervised loss function. It's self-supervised because-- self-supervised just means you don't use any supervision except what comes from the data itself. OK, any other questions? [INAUDIBLE] you'll find in this case [INAUDIBLE], like classify which image it is? No, no, phi is still the same phi. Phi is just the feature, the representations-- the function that computes the representation, the feature extractor, or something like that. Does that make sense? So phi [INAUDIBLE] That's why I have the transpose. OK, so the next thing I'm going to talk about is how you do pretraining when you have language. And I'm going to tell you one method, which is also a self-supervised or unsupervised pretraining, but the method is a little bit different. [INAUDIBLE] So after training the model, we basically discard the very last layer and use all the remaining layers to perform all the possible image classifications. No, there is no last layer to discard anymore here, right? This phi of theta is a neural network, right? It starts from x, a lot of neurons, we draw a lot of edges, something like this, and eventually you output some embedding. This is phi of theta, and that's it-- that's your representation. So you don't have to discard anything here, but you do have to add a new layer with a loss when you do the classification task. Does that make sense? This drawing is a little bit too [INAUDIBLE], OK, cool. So large language models-- the first thing I need to address is, how do I encode the data? I have some text-- discrete words-- and I need to first turn them into numerical values. If you remember, a few weeks back we talked about the event model, where you encode data by a very simple binary vector: every document is encoded as a 0-1 vector of some dimension. That was very simple. Today we're going to do the more realistic version, and the encoding actually becomes conceptually simpler, because what you do is directly encode each word as a discrete token. So, basically, the first thing is, let's define what an example is. We have a language. Typically, I think the best way to think about an example in language is to think of it as a document or something like that-- a sequence of words-- because if you view every word as an example, you lose the-- examples are supposed to be somewhat independent of each other. So if you use each word as an example, that's too small a granularity. You view each document as an example; that's the mental picture I have. So you have a sequence of words, maybe x1 up to xT, and this is one example. In reality, people don't literally look at the documents, because what you are given is, for example, all the text on Wikipedia, which is a gigantic file that is really just one long sequence of words-- everything is concatenated together. And then what people do is truncate: you just take a consecutive sequence of words as one example.
Maybe you take something like 500 consecutive words, or maybe more. But in any case, if you ignore all the implementation details, it's fine to think of each example as a document; the important thing is that it's a sequence of words. There's another small detail in the implementation: when you really do this, sometimes you don't operate at the granularity of words. For example, one possible choice is to operate at the granularity of characters-- each character is one xi-- but people don't really do that. What people do is operate on the level called tokens. And each token-- typically, these days, in the best models-- is kind of like a word, but sometimes it's smaller than a word. You can think of most of the common words as being a single token. I think last time I checked, the first 20k most frequent words are all just a single token by themselves. But sometimes you have longer words that just don't show up very often, and then you break them in some way into two tokens-- a very, very long word might be two tokens. This is just another small detail, in case you are implementing anything like this. But conceptually, you just think of each word as a single xi here. You have a sequence of t words-- that's my example. And then, let's say you have a vocabulary, so each word is in some set 1 to V: you have V possible words, and each example is a sequence of words. And when people say language model, they always refer to a probabilistic model for the joint probability p of x1 up to xt. In some sense, this is the same kind of modeling methodology as what we did with mixtures of Gaussians: you are modeling the joint probability of your data, of your x. But if you just directly model this joint probability, it's very difficult, because this distribution has a huge support. How many possible sentences can you have here? The support size is something like V to the t, because each word has V choices and you have t words. So this is really an exponentially large family of possible sentences, and modeling this distribution directly is challenging. So what people do is use the chain rule. This joint probability can be written as p of x1, times p of x2 given x1, times p of x3 given x1, x2, and so on, up to p of xt given x1 up to xt minus 1. And then you model each of these conditional probabilities-- p of x little t given x1 up to xt minus 1-- using some parametric form, and you learn the parameters. The good thing about this conditional probability is that now you only care about the probability of one word, so the support of this probability has size V, instead of V to the power of t. So the next question is, how do you model this-- how do you build a parametric form for this conditional probability? I'm not going to tell you exactly how you do it, but generally you do it with a neural network. There are some details I'm omitting here, but roughly speaking, what you do is the following.
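Before that, here is a tiny numeric illustration of the chain-rule factorization just described; the conditional model is a made-up bigram table (conditioning only on the previous word), whereas a real language model conditions on the whole prefix.

    import numpy as np

    V = 4
    rng = np.random.default_rng(0)
    cond = rng.random((V, V))
    cond /= cond.sum(axis=1, keepdims=True)   # cond[a, b] plays the role of p(x_t = b | x_{t-1} = a)
    p_first = np.full(V, 1.0 / V)             # distribution of the first word

    def log_prob(seq):
        # log p(x_1, ..., x_T) = log p(x_1) + sum over t of log p(x_t | earlier words)
        lp = np.log(p_first[seq[0]])
        for prev, cur in zip(seq[:-1], seq[1:]):
            lp += np.log(cond[prev, cur])
        return lp

    print(log_prob([2, 0, 1]))                # one sequence of T = 3 tokens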
So, first, you have to do an embedding for the words: for every word i, you embed this word into a vector ei, and this vector ei, let's say, is of dimension d. So every word corresponds to a vector-- you're going to have V vectors, each corresponding to a word. And these vectors will be learned; they are parameters of our system. Then, after I have these embeddings, I'm going to put them into a gigantic neural network and let the neural network output the conditional probability, in some sense. Roughly speaking, you're going to have some model, which I view as a black box. These days people use something called a transformer. I'm not going to tell you exactly what a transformer does, because it's actually pretty complicated and out of the scope of this course-- you can take a look at the paper or take some other advanced courses. But as a first-order concept, this is a black box. Actually, many, many people who use the transformer don't have to open up the black box; that's why I'm only telling you the input and output of it. But it is a neural network-- a complex neural network. And the way you use this black box is the following. You have some sequence of words, x1 up to xt. You first encode them by the word embeddings: e sub x1 up to e sub xt. These are vectors. And the transformer takes in this sequence of vectors and outputs a sequence of vectors. The outputs-- let's call them c2, c3, up to ct plus 1. And let's also give a name to this transformer: call the function phi theta. So this function phi theta, given all the input embeddings, outputs a sequence of vectors. Each of these vectors, let's say, is still of dimension d-- in reality, this d might be a different d from the embedding one, but they are vectors. OK? So after you get these vectors, you can use them to compute the conditional probability-- you use these vectors to predict the conditional probability. Basically, I'm going to use ct to predict p of xt given x1 up to xt minus 1. So what I do here is the following. What I want to predict is this probability distribution, which is really a vector-- a vector of dimension V-- because to describe this distribution you have to give p of xt equals 1 given x1 up to xt minus 1, all the way up to p of xt equals V given x1 up to xt minus 1. So to model this probability, you have to predict V numbers, and these V numbers are supposed to sum to 1. How do you do this? This is kind of like multi-class classification. What you do is you say, I'm going to take a softmax of some Wt times the vector ct. Let me specify: ct is a vector of dimension d, and Wt is an additional parameter that you will learn, of dimension V by d. So Wt times ct will be of dimension V-- for every possible word choice, you get a number-- and then you apply a softmax to make them probabilities.
So this whole thing will be of dimension V. In some sense, Wt times ct is just the logits, and then you apply a softmax to turn them into probabilities. That's your prediction for this conditional probability. So in some sense, you just view each word as a class: you have V classes. And how do you do a classification with V classes? You first use a linear head on top of ct, and then you do a softmax to turn the scores into probabilities. I think we probably defined the softmax in one of the early lectures, but let me define it again-- I should use this, probably. The softmax of a vector u is something like this: exponential of u1 over the sum of exponentials of the ui, and so on, down to exponential of u sub V over the sum of exponentials of the ui. So you turn the logits u into a probability vector that sums to 1-- any questions? [INAUDIBLE] So I'm going to use c2 to predict x2, c3 to predict x3, and so on. I chose the indexing to indicate that c2 is used to predict x2 given x1. And actually there's one thing I didn't tell you which is important. When you predict this probability-- this is my model to predict the probability-- I need to insist that I haven't seen the later words. I have to insist that I have only seen x1 up to xt minus 1, because if I had seen xt already, I could just output the true value of xt. So for this transformer-- there are multiple versions of transformers-- the one I'm talking about is called an autoregressive transformer. This is just a name. What I mean is that you design the transformer so that ct only depends on x1 up to xt minus 1. You design the architecture so that you have this property: ct only depends on x1 up to xt minus 1, and it is not able to see xt or any of the words after it. That's why you have a proper definition of the probabilistic model. Just think of this as given: the transformer's internal structure ensures this. How do you connect all of the neurons [INAUDIBLE] this property? [INAUDIBLE] So the dimension-- I call it d right now-- is just a parameter you can change. You can change it to anything; of course, it has to be somewhat large. I'm trying to understand, how does this relate to what we learned about the one-hot vector? Do we have to create that many [INAUDIBLE]? That's a good question. If you used one-hot vectors, you would probably use an embedding of dimension V, because you have V choices. But the embedding dimension is usually smaller than the vocabulary size. I think the vocabulary size people typically use is something on that level, but the embedding dimension is a somewhat orthogonal choice. [INAUDIBLE] Can you say it again? [INAUDIBLE] something like a hash function? You mean the mapping from the i to the ei? It's not-- it's learned. You just learn this: you have e1 up to eV, each word has a vector, you concatenate all of them as a matrix, and you view that as part of the parameters of your training. So now the final step is just that, after you have defined the prediction, you have to define a loss function to learn the parameters. The parameters-- the question was great-- include e1 up to eV, and also include Wt.
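As a quick aside, here is what that softmax head computes, as a NumPy sketch; the transformer output ct is a random stand-in, since the transformer itself stays a black box here.

    import numpy as np

    rng = np.random.default_rng(0)
    V, d = 8, 4                          # a toy vocabulary and hidden dimension

    def softmax(u):
        u = u - u.max()                  # subtract the max for numerical stability
        e = np.exp(u)
        return e / e.sum()               # exp(u_i) / sum_j exp(u_j)

    W_t = rng.normal(size=(V, d))        # the learned V-by-d output parameter
    c_t = rng.normal(size=d)             # stand-in for the transformer output at position t

    p_t = softmax(W_t @ c_t)             # model of p(x_t = . | x_1, ..., x_{t-1})
    assert np.isclose(p_t.sum(), 1.0)    # a valid distribution over the V words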
And that's it-- plus the parameters in the transformer, which I didn't specify; that's the neural network we view as a black box. And now you learn: you take a loss function, which is a function of the W's, the theta, and the ei's. And this loss function is just the cross-entropy loss over all the positions-- the cross-entropy loss at each position t. If you really think about what the cross-entropy loss is for this softmax, it's really just this: if you call the predicted vector pt, it's the minus log of the xt-th entry of pt. But don't worry if you don't get this line-- when you implement it, you just have to call the cross-entropy loss and hand it these things. We're running a little bit late-- any questions? Actually, there's another small thing-- I cheated a little bit. With this definition, I can only define the prediction for t at least 2, in some sense. I don't have a prediction model for x1; I only have predictions for x2 given x1, x3 given x1, x2, and so on. I didn't give a probabilistic model for x1. In practice, people just forget about it-- just don't use it. It doesn't matter that much. You can try to fix it to make it principled, but I don't think it matters much anyway-- you have so many probabilities to model, and if you ignore one of them, it's not a big deal. So now let me talk about how to adapt this language model to downstream tasks. I've erased the finetuning and linear probe, but you can still do those-- they're very general; you can use finetuning and linear probes for almost anything. For this kind of language model, the only thing you have to decide is which output you should use as the representation of the document. You have so many outputs here-- which one do you take as the representation of the sentence? One option is to take ct as the representation, and then you add some head W, and W transpose ct is your prediction model for downstream tasks. Then you can choose to only finetune W, or to finetune W and the parameters used to compute ct-- the parameters in the transformer and those embeddings ei. So that's easy, because it's generic. But for language models, the interesting thing is that you can also do adaptation in some other ways. One way is so-called zero-shot learning, and here it's very easy: whatever task you have, you just turn it into some question, or some cloze test-- you have a blank to fill in. Then you just give it to the model and let the model generate the next word. So, basically, you can have an x which is just a question. Suppose you care about whether the speed of light is a constant or not-- you just turn it into a question, and you call that x1, x2, x3, up to xt.
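To circle back to the training loss for a second before the generation examples: here is a minimal NumPy sketch of the per-position cross-entropy; the logits are random stand-ins for what the transformer plus Wt would output, and the position indexing is simplified.

    import numpy as np

    def lm_loss(logits, seq):
        # logits[t] holds the V scores for predicting token seq[t] from the earlier tokens;
        # the loss is the sum over positions of -log p_t[x_t] (cross entropy).
        logits = logits - logits.max(axis=1, keepdims=True)
        log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_p[np.arange(len(seq)), seq].sum()

    rng = np.random.default_rng(0)
    V, T = 8, 5
    fake_logits = rng.normal(size=(T, V))   # stand-in for the model's outputs
    seq = rng.integers(0, V, size=T)        # one training sequence of T tokens
    print(lm_loss(fake_logits, seq))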
So, back to the example: because this model can generate-- maybe let's just say the question is x1 up to xt minus 1-- you use this model to generate xt, because this model can do conditional generation. Given x1 up to xt minus 1, you can generate xt; and given x1 up to xt, you can generate xt plus 1. You can just keep generating the tokens afterwards. So you put this into the model and let it generate. And if the next word it generates is no or yes, then you have the answer. Sometimes it generates something slightly different from just yes or no, and then you have to parse the answer in some way. But you just let the model generate the answer. That's one way-- basically, you are using the generation power of this model. That's probably the important thing, right: the way you built this model means that, given some sequence of words, you can generate the next token, feed this new token back into the system to generate another one, and keep generating. That's what gives us the opportunity to have other adaptation methods based on generation. And another, even more intriguing way to adapt is the following-- it's called in-context learning. With in-context learning, you are dealing with the so-called few-shot case: you have a few examples. There are so many flexibilities here, so let me just give one example. What you do is you concatenate all your examples into a so-called prompt. In the language of this lecture, you could call it a document, but if you look at the papers, it's called a prompt. It just means you concatenate all of this into one sequence. For example, suppose you care about learning how to add up two numbers, and you have a question Q with some symbol between two numbers, and you have some examples, right? You know that the answer to the first one is 5. So the question is your x task 1 and the answer is your y task 1. You just concatenate them into a sequence of tokens, like this. And then you keep concatenating: you say Q: 6, then the symbol, then 7-- not plus; I chose not to use the plus sign because I want to make it difficult-- so something like this; you are trying to learn what this symbol means. This is x task 2, and then you concatenate y task 2 here: you say the answer is 13. Now suppose you just have these two examples, and you want to learn. Your new question is: 15, this symbol, 2 equals what? You concatenate all of this together, call it x1 up to xt, and you give this sequence of symbols to the model and let it generate. You ask it to generate xt plus 1, maybe xt plus 2, and so forth. And it turns out that if you give these things to the model, it will generate something reasonable for you. For this case, it will generate A: 17, and then you have the answer-- the answer is 17. And this in some sense says that the model learned the downstream task-- it learned that the symbol means addition-- from the prompt. I'm not sure whether this is a little bit abstract, but I think what I can do is the following. This is about time, so we can stop the class. But I can show you some examples, just live. |
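A minimal sketch of such an in-context prompt; the generate call is a hypothetical stand-in for repeated next-token sampling (no real API is assumed), and the first pair of numbers is invented for illustration-- the lecture only says its answer is 5.

    # '?' plays the role of the mystery symbol the model must figure out from context.
    prompt = (
        "Q: 2 ? 3\n"      # x task 1 (an invented pair whose answer is 5)
        "A: 5\n"          # y task 1
        "Q: 6 ? 7\n"      # x task 2
        "A: 13\n"         # y task 2
        "Q: 15 ? 2\n"     # the new question
        "A:"
    )
    # answer = generate(model, prompt)   # hypothetical; would ideally continue with "17"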
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_I_Weighted_Least_Squares_Logistic_regression_Newtons_Method_I_2022_I_Lecture_3.txt | Hello. So welcome to lecture 3. This is going to be about classification and regression. This moves us from our first task, which we were doing last time-- regression, how we fit a line-- to a task that will look really, really similar at first, but with a couple of subtle differences. And we'll go through that: classification. Remember, we talked about how classification is for discrete outputs, like what's the animal in this photo-- is it a cat, a pig, a horse, something like that? Those are the types of problems that are pretty prevalent in machine learning, and we'll talk through those basic issues today. What we're going to do, though, is first start with a probabilistic view of linear regression. And the reason we're going to do this is we're going to walk through, in that setting where it's hopefully relatively familiar, how we give a generative model-- that's the term for it-- for the underlying optimization problem in linear regression. So we're going to interpret it probabilistically. And that interpretation we're going to be able to use for classification, and then again on Wednesday for a much richer class of models, which are all these exponential family models. And I will assert at a high level-- try to keep this in your head-- these things all look the same. We're trying to get to an abstraction that lets us solve them, do inference with them, and reason about them in a similar way. So this is one key building block. We'll start with that probabilistic view of linear regression. Then we're going to talk about classification. At first blush, classification is going to look just like something we could solve with linear regression, so I want to make sure it's really clear in your mind when we use classification versus linear regression, and what the little problems or challenges are as you actually do that. Then we're going to introduce the workhorse of machine learning, logistic regression. This is something that I probably use every day in some form or another. It has different names in deep learning now-- sometimes it's called the linear layer plus softmax. If you're a deep learning aficionado, don't worry about it. But this is the standard workhorse, and we can say a ton about how logistic regression works. It was not invented by machine learning people; this is an old and classical algorithm that our statistical friends invented. We use it in slightly different ways than maybe they originally intended, and I can get into that in a little bit. Now, as I mentioned, there are going to be parallel structures. So we're going to talk about logistic regression-- we're going to talk about why not logistic regression-- I'm sorry-- linear regression, and then we'll talk about logistic regression, which is, confusingly enough, a classification algorithm, although it says regression in there. Don't blame us; blame the stats people. And then once we do that, we're going to parallel exactly what we did in the last lecture and talk about how to solve it. OK? And when we solve it, you're going to be introduced to a method called Newton's method, which maybe you've seen if you took a stats course or a calculus course at some point. We'll reintroduce it as a way to solve this.
And Newton's method, when it's applicable, is really, really fast, meaning it converges very quickly, but each step of the algorithm, we'll see, is quite expensive. So it's not really appropriate for a lot of the places that machine learning people care to use it. OK? So the messages to take from this lecture are: what is classification, why does it differ from regression, what is the workhorse model, logistic regression, and then a method to solve it. And then we're going to come back at the end and compare and contrast the different ways that people solve these different things, all right? As last time, there is a thread that started online, if you feel more comfortable asking your questions anonymously. Last time we had a bunch of great questions, and I'd love to keep that going-- super happy to talk with you about anything there, and I'll keep an eye on it during the lecture. OK. Awesome. Let's get started. All right. So we're going to start again with our friend, least squares, right? So just to give you the format: in these tasks, you're given something, and your goal is to do something with it. We're given some xi and yi, for i equals 1 to n, and recall, this is the training set, right? In this case, xi lives in Rd plus 1, as our convention is-- d plus 1, recall, because we had that convention that there was a bias term, where every single entry had a 1 appended to it, if you remember that from last time. If not, don't worry; just remember why there's a d plus 1 there-- it's by convention. And we had a target variable y, which I'll highlight in purple, and this target variable was a real-valued number, right? So picture a line for the moment, OK? And our goal was to find some theta, also an element of Rd plus 1, that is the argmin-- or very, very close to it, because remember, we can't really solve these exactly, even though we'd like to-- over theta, of the sum from i equals 1 to n of yi minus h theta of xi, squared, where h theta of x equals theta transpose x. Actually, I'm just going to drop the superscript i-- it's true for any example. I'll put a transpose there just to make sure it's clear I mean a dot product. Awesome, OK? Now, I'm using this slightly more general notation, this h theta, and that looks like a little bit of overkill for a dot product, but we're going to use it in several ways through the next couple of lectures, so apologies for that. If you remember, the way we defined this theta is we said, oh, it minimizes the sum of the squared losses, or residuals. And we didn't give any justification for this. What we want to do in this next part is go one level deeper and ask this why question: why did we pick to minimize the sum of squares? Now, this will introduce us to one of our favorite friends in this course, the Gaussian distribution, and we'll talk about why that's a plausible thing to do, right? You could ask, why the Gaussian distribution, and I can wax philosophical with you-- I'll give you a sense of it, but we'll come back to that in one second. So our goal is: OK, we have this equation, but where did it come from? And by thinking about where it comes from, that's going to tell us how to generalize it, right? That's the plan for what we're up to. All right. So this brings us to our first really generative model in the class.
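As a quick sketch of the objective just reviewed-- on synthetic data invented here, using the 1-appended bias convention-- the least-squares problem can be solved numerically like this:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 3
    X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])   # each x gets a 1 appended
    theta_true = rng.normal(size=d + 1)
    y = X @ theta_true + 0.1 * rng.normal(size=n)

    # argmin over theta of sum_i (y_i - theta^T x_i)^2:
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)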
We're going to assume that yi looks as follows: it's going to equal theta transpose xi plus some epsilon i-- I'm going to unpack this in one second. And this character here, epsilon i, is an error or noise term. All right, so let's unpack this, because it's the first time we're seeing one of these things. OK. So this first piece maybe makes sense to you. You're like, oh, there exists some true theta that's out there. That's what this model is saying: there's some theta-- I'm going to call it theta star to make sure it's clear-- that's out there, hidden from us, but all of my data was generated by taking that parameter with the features and generating the y. OK? If there were no noise, this would be a situation where all of your data lay perfectly on a line, right? Because it says that given the features, I know exactly what y's value is. What we're saying is something not quite that strong-- that would be the noiseless situation. We're adding in a little bit of noise. And when machine learners or statisticians say noise, what they mean is stuff we can't really explain. That's the way to think about it: we're modeling it up to this. OK? And maybe we know a little bit about the noise-- we'll talk about what we might expect from noise in a second. We expect there's some kind of random gyration; maybe this accounts for, in a physical setting, some measurement error, some classical jitter that's underneath the covers. Or, if you're a more sophisticated Bayesian, you think of it relative to your information: I only know the features x, and there's some unmodeled piece that ends up in the noise, and I'm willing to treat it as noise. OK? So what are the properties you would want of this error? So first, this is a forward model, by the way: it tells me, I don't know theta, but I know how my data is generated, right? That's what I mean when I say generative model. If you were given an xi and, presumably, knew how to construct epsilon, you could get a yi value, OK? Now, we don't get to see epsilon i, by the way-- it doesn't appear in the training set. It's just a mental model for how the data are linked together. And that's going to be fairly important when we start to generalize these models, right? They have certain kinds of errors we can characterize. OK? So what properties would we expect of epsilon i? First, notice the epsilon super i: the noise is different per tuple. It's not like there's some single noise offset that was added to all our data to shift it. For every single point, if you like, we have a random model: you get a fresh random sample from that noise, and that determines what the yi is. OK? All right. So what would you expect from that? Well, it's random. We probably want something where the expected value of epsilon i, over that random process, is 0. This sometimes gets called being unbiased. OK? Now, this is on one hand a deep philosophical statement, and on the other hand kind of a trivial statement. The deep philosophical statement is that we're saying these errors behave such that if I average over infinitely many of them, they're not going to appreciably change what the true y value is, right? They don't have any information that's inside the model-- that's really what it's saying. I may still get a sample where epsilon i is 0.1, or 0.2, or negative. But on average, I'm just making a statement about a population: I don't care about this value.
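A minimal sketch of this forward model on invented numbers-- sample the noise, produce the y's, and see the "averages away" claim numerically:

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 100000, 0.5
    theta_star = np.array([1.0, 2.0])            # the hidden true parameter
    X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, 1))])

    eps = sigma * rng.normal(size=n)             # one fresh noise sample per tuple
    y = X @ theta_star + eps                     # the generative model: y = theta*^T x + eps

    print(eps.mean())                            # close to 0: unbiased noise averages away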
It's going to be averaged away to 0 in a precise sense. Now, on the other hand, if it were not zero-mean, that would be kind of a strange thing, because we have a bias term-- it suggests you could just incorporate that standing bias into theta star. OK? I don't want to go too far down that path. But operationally, this is not an unreasonable thing to say, and I want you to think about it as a statement of information: I modeled all the features in the x's, right? The house price example-- the lot size, all that stuff-- and there's something I haven't modeled. Maybe the house looks a little bit nicer; maybe it reminds people of where they grew up as children; maybe it was a famous house or something-- that probably doesn't count as noise exactly, but there's something about it that's unmodeled, as a statement of information. The second thing is a little more subtle: the errors are independent. OK. Now, we don't always make this assumption, but it's going to allow us to do some pretty healthy mathematics, so let's talk about what would happen if it failed. What this means formally-- I'm going to write down a strong form, OK?-- is that the expected value of epsilon i times epsilon j equals the expected value of epsilon i times the expected value of epsilon j, and I mean it in a very strong sense. If you know about the various different notions of independence and uncorrelatedness, don't worry, OK? All right. This is for i not equal to j-- that's what that little piece is here; maybe I'll write it bigger: for i not equal to j. OK, what does this mean? If you remember your notion of independence, what we're saying is that they're statistically independent, and I mean it in a really strong sense: knowing the error for one tuple doesn't tell me anything about the error for another tuple. This is consistent with my earlier interpretation of errors as how much information I have. If I did know something about that error and could model it, then I'd need a more complicated model here. All right. Any questions about this so far? Awesome. OK. Now, with this setup, there's one other thing. So far I've made two assumptions, and at this stage I hope you think they're plausible ways to make progress. That's one of the things, by the way, about machine learning that I think people get uncomfortable about-- I certainly was when I first started with statistical modeling-- you're like, well, is that really true? Kind of not the right question to ask. The question is: is it a useful assumption to make progress, and what am I giving up by making it? Which is a much harder thing to assess. It's not ever exactly true. If you look at real errors, they're very infrequently Gaussian distributed, right? That's kind of terrifying-- why do we use it everywhere? Well, it still works pretty well, because we're not assuming too much about the underlying data. Now, if you know something about that data, we'll come back in the next lecture and tell you how to put more information about the error into the model. But I just want you to get a sense: these are kind of strange modeling constructs. Now, one thing that we'll care about, which is a function of this, is how noisy it is-- we need a measure of noise. And so a natural thing to assume-- and we can relax this assumption-- is that there's a kind of uniform background of noise, so every error has the same variance. This is a standard constant-variance assumption.
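A quick numeric check of both assumptions on simulated noise-- independence makes the product expectation factor, and the variance is the same for every tuple:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.5
    eps = sigma * rng.normal(size=(100000, 2))   # independent draws, same variance
    print((eps[:, 0] * eps[:, 1]).mean())        # ~ 0 = E[eps_i] * E[eps_j] (independence)
    print(eps.var(axis=0))                       # ~ sigma^2 = 0.25 in both coordinates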
The sigma squared-- so there's noise, we don't know anything about it, we know it's unbiased, and we know it has about the same magnitude everywhere: it's not wildly different on some piece of our data versus another. And again, it's a statement of-- if you want to be really philosophical-- an ignorance prior. We don't know anything, so how would we know that this part of our data has more noise than that part? It's just an assumption to make progress. Please. Why does [INAUDIBLE] equal the expected value of that square? Oh, yeah. So this is the variance here. The variance, formally, is the expected value of epsilon minus mu, squared-- that's what you normally have for a variance. But I've elided the mu because it's 0, so that term drops. So it is actually the variance; I just wrote it as the expected square. It is epsilon minus mu, and the mu is just 0. Wonderful question. Thank you so much for that. Please. [INAUDIBLE] So the error here is a sample from a distribution, so it's actually a concrete value underneath the covers. The way to picture it is-- I don't know, I hate invoking deities, but imagine there's god, and she's got her table, right? She's got the values of all the x's, and she has her theta star, and she produces a y. Then, for whatever reason, she adds a little error to it. That error is a specific scalar that changed y from 0.2 to 0.4. And I'm just saying that's the piece that we want to model. And the reason we want to model that is because of what we're coming to-- so does that make sense? The types check: it's a scalar, not a function there? The reason we want to get at that is we're going to solve this problem up to noise, and we're going to worry a lot in the theory-- we won't do it too much in this lecture, but we worry a lot in the theory-- about whether we can solve it up to the noise floor. If you're making decisions that depend on how noisy it is, and your data has a sigma squared of 1 and you're trying to make decisions where your values are like 0.1 apart, intuitively, something should be wrong there-- you're reading into the noise. So we'll worry a lot about how our procedures scale with the noise. You can think about it as saying there's some average noise level, and the data is noisy or not; if it were 0, then our data is perfectly clean. That's the way to think about it for now. And if my explanations about scaling and other stuff seem obtuse and weird, please don't worry about it. We'll come back to our picture of the Gaussian, and it will hopefully be a little bit clearer in a second. Other questions? All right. Now, here's the remarkable thing-- here's how the Gaussian comes up. It comes up for a really interesting reason, which is: if I tell you that I want a distribution that is unbiased, whose variance sigma squared I know, and I assume nothing else about it-- formally, if I take the maximum-entropy distribution with those constraints; the Bayesians used to make a lot of noise about this-- then that distribution is uniquely the Gaussian. You don't have to know that in some fundamental sense, but it leads you to this conclusion. That is, a distribution that has these two properties, where you're not assuming things about its third and fourth moments-- like if you wrote a 3 or a 4 there, I don't have to assume what they are; they're just given. So let's see, if I can skip to it. So it turns out this is the unique distribution-- unique in some sense that doesn't really matter too much; that's more philosophical. And I've been a little bit too imprecise for you to really appreciate that, but that's the takeaway.
OK? So this is our friend, the Gaussian. Let's go through the notation first. This epsilon i here — getting to the great question earlier — what it says is epsilon i is drawn: that's what the tilde means, drawn from, distributed as, N of mu, sigma squared. And I'm going to draw that in a second. This is a normal distribution: this is the mean of that normal distribution, and this is the variance. OK? And here's a picture of it. This is the mean right here, and this is how the distribution looks. The way epsilon is picked is: I sample with probability proportional to the height of this curve — maybe I pick a value here — and that's how I get epsilon i. And I'm repeatedly drawing from this underlying distribution. OK? That's the mental model I have of epsilon. Now, is that actually what's going on in the world? I don't know. Don't know god. Don't know how that works. But it's our model of how the world goes. Sound good? All right. Now, a couple of things as we go through. This distribution is actually fairly peaked. Maybe you've seen the central limit theorem in earlier classes at some point — the idea that if I take a bunch of things and add them together, they kind of converge to this distribution. I won't make that statement precise, but there's a reason this thing comes up: if you have a lot of little additive errors, then when you aggregate them, they end up looking Gaussian. OK? That's too much philosophy for why it shows up. If none of that matters to you, don't worry — it shows up, and we're going to use it. All right. Just to make sure you understand this function and what it looks like: here's the mean value, and if you look at the sigma markings — sigma being the square root of that sigma squared — you see that within one sigma of the mean you have about 68% of the mass. By the way, you can download the notes, which have these things, and this picture is from Wikipedia, so you can also just look there. Please. So technically, are we always going to assume population statistics in this class? We will do a lot with population statistics. We will not do much with sampling except at a couple of key points, and the difference between the two will be immaterial when we do the actual solves. But it's a great question. And if that doesn't make sense to you, don't worry. Awesome. Is mu 0 there? Yeah. So in our setting before, when we had epsilon i, that's exactly right — I should have written this. For us, great call, mu should be 0 for our epsilon i. In general this is the notation; for us, mu equals 0. Great point. Awesome. All right. Another thing: this is the density function we're looking at. Now, when you look at it, you start to see why least squares may come up — there's this quadratic-looking thing in the exponent, and there's a 1/2 and a sigma squared and so on. And this out front is the normalizing constant: it just makes sure that if I integrate the entire area under this curve, I get 1 — that makes it a PDF. That's all that thing is. You'll see it a bunch of times in various guises. OK? But this is the function.
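As a quick empirical check of the sampling story and the one-sigma mass, here is a minimal Python sketch. Everything in it is made up for illustration — the "true" theta, the sigma, and the inputs are hypothetical, not anything from the course's data:

```python
import numpy as np

rng = np.random.default_rng(0)

theta_true = np.array([2.0, -1.0])   # "god's table": hypothetical true parameters
sigma = 0.5                          # assumed (known) noise scale

n = 10_000
X = rng.normal(size=(n, 2))                      # made-up inputs x_i
eps = rng.normal(loc=0.0, scale=sigma, size=n)   # eps_i ~ N(0, sigma^2), mu = 0
y = X @ theta_true + eps                         # y_i = theta^T x_i + eps_i

# About 68% of the noise mass lies within one sigma of the mean.
print(np.mean(np.abs(eps) < sigma))              # ~0.68
```

Binning eps into a histogram reproduces the bell curve on the slide.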
Now, let me unpack this notation for you, because you may not have seen it before — this is not conditional, and I'll hammer on this a couple of times. This is the probability density of z, and the semicolon is not a conditional. Formally, it means these are the parameters of the distribution — you don't condition on the parameters. This may be a little pedantic, but we will stick to it in this class. You have mu and sigma squared; those are things you plug in, right? So why does this matter? When we reason about the normal distribution, we're going to have these parameters, the mean and the sigma squared. We just plug them in, and then it gives us a distribution. And that's going to come back in one second when we get to what's called a likelihood function. So these are our parameters — let me write that down here. Is the notation clear? If you're familiar with conditioning, this is not conditioning. You can still condition — you write that with a bar, and we'll do that in one or two steps. You should be familiar with conditional probability for this class; not the most advanced versions of it, but basically how it works. OK. Awesome. All right. So now, let's write something that is conditional. What is the probability of yi given xi? And as we said, I'll write theta here, which is our underlying parameter. Well, it's going to equal what? That's the normalizing bit out front, and then exp of (yi minus theta transpose xi) squared over 2 sigma squared. OK. So far so good. So this is the probability distribution. What does it say? It says: given that I saw xi — I saw the features — what is the probability distribution over yi? What value should I expect to come out, given that I have this theta model, which is our parameter? You put sigma squared in there, too. Now, we'll write this in a more compact form: yi given xi — this is the bar, so this is now the conditional probability. It says, given that I saw xi, under the model with parameter theta, this is the probability distribution over yi, and I'll often write that this conditional distribution is N of theta transpose xi, sigma squared. So this mouthful is the same as that mouthful. Does that make sense? This is just notation at this point. And hopefully, the fact that it's a generative model kind of adds up. Go ahead. OK. Will [INAUDIBLE] the distribution of the [INAUDIBLE] or the distribution of y? Yeah, great question. So here, I'm basically asserting by fiat that, because the only random variable is epsilon i — the difference between these two characters right here is epsilon i, right? That was by fiat in the model when I wrote it earlier. Sorry to scroll; I really wish it didn't look so nauseating, but it does. So because of this model, I can substitute in here: this value is nothing more than epsilon i in a different guise, which tells me all these pieces. But it has to be conditioned on xi, because I saw that variable when I added it in, right? So that's how I get a distribution over yi. Wonderful questions. Are there more? Yeah — oh, someone asked on the thing whether this up here is a product of the two, and the answer is yes: these are just multiplied by one another. There's no hidden operation there. Yeah.
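Written as code, the conditional density just described — a sketch, where the function and argument names are mine, not official notation:

```python
import numpy as np

def p_y_given_x(y_i, x_i, theta, sigma):
    # Density of y_i given x_i, with parameters (after the semicolon) theta, sigma:
    # a Gaussian centered at theta^T x_i with variance sigma^2.
    mu = theta @ x_i
    z = (y_i - mu) / sigma
    return np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))
```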
Sorry — the graph paper makes sure that I write in a line, otherwise I end up writing all crazy, but I can understand it's not awesome for rendering. Other questions? OK. So why did we do all this? Maybe I just like torturing you with notation. The truth is I really don't like notation, but we're going to use this in several different ways. And so hopefully, right now, you can piece it together and say: OK, I can kind of see a model underneath here. And what we're going to do is try to justify the optimization that we did for least squares by picking the most likely parameter. So let me explain what we mean. Before I do that, notice one fact that I've hidden from you a little bit: picking theta picks a distribution. Let me make sure that claim is clear before I move on. What do I mean? Once you tell me theta, and the data is fixed, then all the distributions over the yi's are fixed. Does that make sense? So in some sense, by picking a theta — once sigma squared is fixed — I'm picking a distribution over what the yi's should be. And that's going to be interesting, because it means that as you pick a different theta, I can compare how well it lines up with my data. So intuitively: if all my data lies on a line, and I pick the theta that exactly fits the line, that should be much more likely than if I pick a different theta where my predictions are scattered all over the place and really far away — out in the thin parts of the Gaussian. Let's come down and write that a little more precisely, but that's the intuition of what's going on here — so ask me questions about it. For this, we need a notion which will be very much used in this class: the notion of likelihoods. This allows us to pick among many distributions. OK? At first, that sounds pretty fancy — how are we going to pick among these distributions? It's a huge, unmeasurable class, if you know what that is, all this nasty stuff. But we just have to pick, in our situation, among the different thetas that could fit our data. And we're going to pick the one that is most likely. So let's write that down right now. What is the likelihood of theta? It's going to be the probability of all the y's given all the x's, with input theta. This just says how likely the data is under theta. And clearly, as I vary theta, I'm going to get different scores here for how probabilistically likely all the y's are. Let's break it apart — if it doesn't make perfect sense what that statement means, then when I start to write it out mechanically, hopefully you'll see how it decomposes, and then please ask me a question. Now, I'm going to write something which at first may look a bit unmotivated: I can break this big joint probability down into a product of many smaller pieces. Why is this the case? Why can I take the big thing and turn it into a product of the small things? What am I using? Independence. Exactly. I'm using independence — the strong form of independence, which I kept bringing up. That's what lets me write this big joint probability over all the examples as a product over each of them. And sometimes you'll hear this referred to as the iid assumption: independent and identically distributed. All right. Cool.
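A sketch of that product, under the iid assumption (this reuses the per-example Gaussian density from before; note the raw product underflows for large n, which is one practical reason for the log in a moment):

```python
import numpy as np

def likelihood(theta, X, y, sigma):
    # iid assumption: the joint density of all the y's given all the x's
    # factors into a product of per-example densities p(y_i | x_i; theta).
    mu = X @ theta
    dens = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return np.prod(dens)   # numerically underflows for large n
```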
Please. [INAUDIBLE] Yeah — so alongside theta, there should also be a 0 and a sigma squared in there. I'm being a little glib about what happens with those. This is a wonderful question; thank you for letting me say this. I could imagine a model where, because of the way I specified it, it's implicit that mu is always 0, but I haven't told you what sigma squared is. For right now, imagine that sigma squared is fixed — I told it to you ahead of time, so I don't have to plug it in here. I could also fit it, right? I could look at my data and ask, among all the thetas and all the noise levels, which combination is most likely. That's actually a slightly different model. But here, I'm imagining that sigma squared is fixed, though it should go under this rubric. And you're like, why are you being so sloppy about that? The reason is that later we're going to be much sloppier, because we're going to introduce notation that lumps together all the parameters in the problem. But wonderful question — you're exactly on target. Oh, please. [INAUDIBLE] No, no. So here we have this probability, but because it's conditioned on x, we've removed all the dependence on the data — everyone gets to see all the data. And now all that's left unspecified in the model is the epsilon i's. If they were all 0, you'd be able to get the yi's exactly. The only randomness that's left is epsilon i. That's what we're doing, and that's where we kind of cheated earlier when we said this guy really is epsilon i as well. Wonderful questions. You folks are really on top of this. Any other questions? Please. Yeah. Can you [INAUDIBLE]? No, no — there's really not much more to say here. We went through this model where we said the yi's are of this form. Now that we've conditioned on xi, we know this part, and we're plugging in theta — you're giving me a particular theta to evaluate. And now epsilon i is a random variable; it has yet to be determined, so there's a distribution over it. The distribution of epsilon i is given by this equation, because this quantity is exactly equal to epsilon i. So this says: I don't know what epsilon is, but it has a distribution that looks like this. And "if I sampled it" — I want to be clear, that's a weird statement. It means that if I picked enough epsilon i's, I'd expect their mean to be here — whatever mu was, in this case 0 — and I'd expect the histogram, if I binned how many fell into each bucket, to eventually converge to this distribution. That's exactly what I mean. Awesome questions. These are great. All right — oh man, I even had it written down nicely; sorry, we'll go back to my messy version. So here we've gone from this piece to this piece. And then here, because these are the epsilon i's, for which we've assumed independence in the strongest way, I can move to a product. We will do this throughout the course. A lot of machine learning is based on iid because it's an OK assumption. Does that mean the errors are not correlated? No, no — of course they're correlated in the real world, but we're not modeling it. That's all it means. It's not true; it's just a good model. OK, great. Now, I substitute in one more thing: equals the product, i from 1 to n, of 1 over sigma root 2 pi, times exp of — oh sorry, I may have forgotten the minus sign — (yi minus theta transpose xi) squared over 2 sigma squared. Why did I do this? All right. I just wrote this whole thing out.
Now, the reason is we don't use this directly — working with this big product seems like a nightmare. So what we do instead is use a simple transformation of it, which makes it nice and additive: the log likelihood. All right. But first, I want to make sure of one thing. Let me go back: there should be a minus here. I messed that up. The exponent has to have a negative sign on the square, otherwise it spirals off to infinity. I'm sorry — it's in my lizard brain. Clear enough? Otherwise it's the wrong shape. Yeah, please? Two questions. One: is there a negative in the original formula also? Yes — it's here. What about the one above it? Oh, yes, I made the same mistake there. Not in the notes. Secondly, what does exp mean? Oh, the exponential function. Sorry about that — exp of x may be more familiar to you as e to the x, e to the power of x. And x is everything in the bracket? Everything in the bracket, yes. But I thought the original was fine, because effectively we're computing the real minus the predicted. No — it's squared. This character is squared, so the sign on the whole term matters. Oh, yeah. Great point. It was my mistake. Is that clear? I want to make sure that's clear. It's a small detail, and it is in the notes. All right. OK. So far so good. Wonderful. So we have this function — with all of my bugs, which we're catching on the fly, which is awesome — and we're going to introduce a new function: the log of our old function. You say, why are you doing that? Well, what does log do for exp functions? It brings the contents of the exponential down, which is nice. And it also separates out things that are products: we have a big product, we take a log, it turns into a big sum. That sum looks a lot more like what we were expecting intuitively from least squares. Let me write it here: it's the sum, i equals 1 to n — because we turned this product into a sum — of 1 over sigma root 2 pi, minus (yi minus theta transpose xi) squared over 2 sigma squared. I just took the log of the exponential. Please. [INAUDIBLE] log of 1 over sigma—? Oh, log — log of 1 over sigma root 2 pi. Excellent point; let me fix this. These should be logs. You can see where there are typos in the things I don't care about — that term's going to disappear in a second, but awesome find. Yeah. OK, cool. So what do I care about here? The thing I was just about to say is: this first term doesn't depend in any way on theta, right? Remember when we talked about minimizing the loss function — if I add a constant, it doesn't matter. And this is a constant: it involves sigma, but it doesn't depend on theta or on my data. So I can just toss it away. In contrast, this other term very much does depend on my data and on theta. Is that clear? Please. Do you have a summation [INAUDIBLE]? So think about it like this — I'm sorry, awesome questions. Yeah, wonderful. So now, what does that mean? If I want to find the most likely theta, that corresponds to doing what? I claim it corresponds to maximizing, over theta, l of theta. Why is that the case? Well, log is a monotone transformation, right? The original thing I wanted was to maximize the probability; log is monotone, so maximizing the log of the likelihood is the same as maximizing the likelihood itself. Then this constant term — we talked about how it doesn't really matter — so I can drop it.
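Here is the log-likelihood from the board as a sketch, split into the constant term (the one that gets tossed) and the theta-dependent squared-error term:

```python
import numpy as np

def log_likelihood(theta, X, y, sigma):
    n = len(y)
    resid = y - X @ theta
    const = -n * np.log(sigma * np.sqrt(2.0 * np.pi))   # no theta: can be dropped
    return const - (resid @ resid) / (2.0 * sigma ** 2)
```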
And then I have a minus on that term, so maximizing it is the same as minimizing, over theta, 1/2 times the sum, i from 1 to n, of (yi minus theta transpose xi) squared. And then you say, well, what about that sigma squared — what happened to it? Well, as we talked about last time, it doesn't matter if you scale the loss by a constant: it's still the same minimizer, and we don't care about the value. And what is this character? That's least squares. We called this thing J of theta in the last lecture. OK. So what was important here? I walked through this fairly slowly, and the reason I wanted to walk through it like this — and you should run through it yourself — is that we're going to run this same playbook again and again: we're going to say what the error model is for these linear models, then reduce fitting them to this likelihood computation. We'll almost always use the log, because it turns it into an additive problem — and remember, stochastic gradient descent likes to work on additive problems. It's the nice form we like to deal with, and then we solve the underlying equation. So there'll be this mapping: you give me a distribution, and out comes a loss function. After this lecture and the next, that will be nearly automatic for a pretty wide class of models. And how do we solve them? It turns out we're going to solve them all the same way. Awesome. OK. So is that clear — the probabilistic interpretation of least squares, of fitting a line? Please, just go ahead. [INAUDIBLE] Yeah, great question. So if you don't know sigma squared — I'm not going to show you how to fit that right now. What we did here is what I'd call linear regression with known variance. If you also don't know the sigma squared, you have to learn that too; it's a parameter, and it doesn't come out quite as nicely as the least-squares formulation. You have to do a little extra work to estimate sigma, but you can do it, and I think it may be a homework problem, so I'm not going to tell you too much more about it. It shouldn't be complicated. But great question. For now, we're assuming sigma squared is given; you do not need to make that assumption in general. Please. Oh, sorry — I'm all set. Either way. All right. All good. So at this point, we've gone through that interpretation. Let me make sure there's nothing else. Fantastic.
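Before moving on, a quick numerical check of the equivalence, using the kind of synthetic setup sketched earlier: the theta that maximizes the Gaussian log-likelihood is exactly the ordinary least-squares solution, and sigma never enters it. The data here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=200)

# Least-squares solution via the normal equations: argmin_theta ||y - X theta||^2.
theta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# The same theta maximizes the log-likelihood for ANY fixed sigma, since sigma
# only rescales the squared-error term and shifts it by a constant.
print(theta_ls)   # close to [2.0, -1.0]
```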
All right. Let's talk about classification — a little primer. This is where we are: we're going to talk about how classification works, why regression isn't necessarily the thing you'd want to do in this scenario, and then we're going to run the same playbook, I assert, to solve the model — that is, estimate the theta underneath the covers. All right. So what is classification? What are we given? Not surprisingly, xi's and yi's — no change so far — for i equal 1 to n. But yi — we're going to work on binary classification — lives in {0, 1}. The values are 0 and 1. You could instead use minus 1 and 1; I actually prefer that, because it makes some of the math a little nicer, but it's not what we're doing in the course. You could also have general categorical values — discrete encodings of the variables. OK? Now, in the usual terminology — this is also why I like the minus 1 — we often call one of these the negative class. This is just convention; there's no intrinsic meaning to these things. In our model, this is the negative class and this is the positive class. The positive class could be "we found the tumor — there is a tumor in this image," versus the negative class, "there's no tumor." Or: this is a cat, this is not a cat. We're doing binary right now. You can do multiclass, which we'll come to later — cat, dog, pig, horse. Right now, just two. Great. So you look at this data and you're like, OK, I plotted it. You told me the labels are 0 and 1, so we can use linear algebra and vector techniques and all the rest. So here are some 0's and 1's, and you say, why don't I just fit a line? Maybe the line kind of goes through here, something like this, and it's fine, right? And indeed, for a lot of problems, if you run linear regression and just ask "is the prediction closer to 1 or to 0," you can get out a classifier. But it feels a little weird, especially because there's no reason your data should be nicely clustered. What if there were a blue point all the way over here, or over there, or way over there? What happens to your line? Because it's fitting those residuals, it's going to skew more and more toward the bulk of the data, and whatever decision boundary you put there gets into stranger and stranger situations. Now, that's just motivation for why you want to treat something that's natively categorical as categorical. So let's go through the function here — maybe you've seen this in a stats course already. This is logistic regression. We're going to do one trick beyond linear regression: our hypothesis h theta of x is going to live in (0, 1) — that's the graph here, which we'll get to in one second. And h theta of x is going to be written as g of theta transpose x, which will equal 1 over (1 plus e to the minus theta transpose x). OK? Now, this function g of z equals 1 over (1 plus e to the minus z) is called a link function. Sometimes it's also called an inverse link function; the literature goes back and forth, it doesn't really matter. So you say, why did you do this? Well, our model is still going to be linear in our features, but we're going to feed it through this nonlinearity. And that nonlinearity makes sure the output saturates when the input gets too big and saturates when it gets too small. So it's not going to have the bad behavior we saw with our old data — if we go back up, it's going to have a function that looks more like this S-shaped curve. Does that make sense, at least at a high level? OK, that's the intuition. And this function has a special name: the sigmoid. If you use modern deep learning packages, you'll see sigmoids floating around — that's what they are, this function that smooths the threshold over. Now, you may ask, why don't I use a different link function? You could. There are lots of different link functions. This is by far the most popular, for a variety of reasons; one is that you can turn its outputs into what people call probabilistic estimates, which we'll get to a little later.
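A minimal sketch of the hypothesis just defined — still linear in the features, then squashed into (0, 1):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^{-z}): saturates near 0 and near 1 at the extremes.
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # Logistic hypothesis: the dot product as before, fed through the sigmoid.
    return sigmoid(theta @ x)
```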
Please. What do you do when you have [INAUDIBLE]? Yeah — great question: how do you do multiclass? Let's first get through how we actually do the binary class. A standard way to do multiclass, if it bothers you, is what's called one-versus-all: you ask, am I in class 1 or any other class, and so on for each class. There are more sophisticated schemes — there's a wonderful paper from 2024 that argues those more sophisticated schemes don't always pan out, written in a very aggressive style which I find interesting and entertaining, by a guy from the Media Lab. OK, cool. So at first the sigmoid looks weirdly motivated, but there is some motivation: it has this nice property of being smooth, and it looks close to a threshold function. On smoothness — we could also imagine using a step function: when the input is negative, return 0; when it's positive, return 1. That seems like a natural thing. In deep learning, this is sometimes called the sign function. The problem is that the derivatives would give you no information — the function is flat almost everywhere. The sigmoid is smooth, so it gives you a nice smooth transition, and that works better with modern optimization. That's one thing we want out of it. Please. Can you explain what h of x and g of z are in this setup? Recall this is the same notation we had earlier: h sub theta is the hypothesis — it's how we do prediction. The way prediction works in the logistic regression model is: you give me theta, some parameters, which chooses your model; you give me some x, which, recall, is your data point; and I produce a number between 0 and 1. The way I produce it is I take the dot product, as before, and then run it through this function. I'll come to how we interpret those scores in a second, but you can think of scores closer to 1 as "I'm confident it's in the class" and closer to 0 as "I'm confident it's not." And what I was saying is: this function looks like it was picked out of a hat, and it really wasn't. It has a couple of properties — it's smooth, and it transitions nicely between 0 and 1 — and I was trying to explain why those properties are important. That's how h theta links to this image. Does that make sense? Yeah. [INAUDIBLE] the g of theta transpose x isn't actually equal to the thing to the right of it in the parentheses? It is. If you look, g is this function here, but z is a scalar, so we're just substituting theta transpose x in for z. I wrote it one way so I'd have more room for the numerator, and here I wrote it as 1 over the denominator because that's the more standard way. They're equivalent. Great questions — please. What's the difference between a sigmoid and a link function? A link function is a general class of g's you could apply — some kind of nonlinearity. One you may have seen, if you've ever played with a deep learning package, is called ReLU, the rectified linear unit, which looks like this. So there are other functions out there — probits, and logits, and all kinds of things. We're going to use this one. But I want you to be aware of the term, because I think you have to try one other link function on a homework.
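To make the smoothness point concrete, a sketch comparing the hard threshold to the sigmoid's derivative (the closed form g'(z) = g(z)(1 − g(z)) is standard calculus, not anything course-specific):

```python
import numpy as np

def step(z):
    # Hard threshold: its derivative is 0 wherever it exists,
    # so a gradient method gets no signal from it.
    return (z > 0).astype(float)

def sigmoid_grad(z):
    # Sigmoid derivative g'(z) = g(z) * (1 - g(z)): nonzero everywhere,
    # which is what gives optimization a smooth transition to follow.
    g = 1.0 / (1.0 + np.exp(-z))
    return g * (1.0 - g)
```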
The phrase "link function" is used in the literature, and it's very mysterious the first time you hear it. Yeah, very good questions. And sometimes it's also called an inverse link function — that's a separate issue. Cool. Awesome. All right. So how do we interpret those scores? Now, this is the twist that gets us into probabilistic modeling, which we're going to generalize. We say the probability that y equals 1, according to the model, is equal to h theta of x. And just to complete it: what's the probability y equals 0? It's 1 minus h theta of x, because the probabilities sum to 1 and we only have two classes. Now, this is actually a testable statement. If you took a bunch of data, ran it through, binned the predicted probabilities — all the predictions between 0.5 and 0.6, between 0.6 and 0.7 — and counted, in every bucket, how many were actually positive, you could check it. You can see whether the model is what's called calibrated, and that's a very useful property: it means the scores are meaningful. It's more of a diagnostic than a right-or-wrong. In modern machine learning that's become less central, but you will hear people talk about the probabilities or the scores that come out of these models and using those scores for something — and this is what they mean. They'll sometimes use the log-odds of this, which is called the logit. OK. But that's how we interpret what the model tells us. That's why it's important that this link function is between 0 and 1. It also doesn't damage optimization — that's not obvious, but it doesn't — which is the other major thing. OK. So let's use that information to write our likelihood function. The probability of y — I'll emphasize that it's a vector this time — given x (I don't really like that notation, but it's OK) equals... well, how did we get here? This is again the independence assumption rearing its ugly — or not — head: we're able to go from the entire data set to a product over all the terms. Nothing surprising there. And then we'll write each term in one form which hopefully makes a little bit of sense: h theta of xi, raised to the power yi, times (1 minus h theta of xi), raised to the power (1 minus yi). This seems like a cheat — it's a weird way to do it — but it'll become nice in a second. What I'm writing here is the probability the model assigns to the label that actually occurred. Think through the cases, y as 0 or y as 1. When yi is 1, the second exponent is 0, so that factor goes to 1 and only the first term matters. When the true label is 0, the first factor becomes 1 and only the second term matters. Does that make sense? So far so good? It's encoding both cases simultaneously. I get to see yi, so really only one factor is ever active; it just makes my arithmetic a little cleaner below. So that encoding makes sense? Cool. Great. All right. So let's take the log of L of theta — we're doing exactly the same thing we did before. It's the sum, i from 1 to n, of yi log h theta of xi, plus (1 minus yi) log(1 minus h theta of xi). OK. So far so good. But now notice: this is in exactly the form I need for SGD to run. That's pretty wild. It's just a sum over the examples. I can write gradient descent or anything else I want in terms of the thetas, and I'm all good. These are functions of theta underneath the covers.
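The Bernoulli log-likelihood from the board, as a sketch — note it has no numerical safeguards against log(0), which a production version would need:

```python
import numpy as np

def log_likelihood_logistic(theta, X, y):
    # l(theta) = sum_i [ y_i log h_i + (1 - y_i) log(1 - h_i) ],
    # with h_i = g(theta^T x_i) the sigmoid of the linear score.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```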
Now, one other thing, which I won't derive, but which you can verify easily from this. Just to be clear, it's the same recipe — let me write down what I mean by same recipe. We have theta t plus 1 equals theta t minus alpha times the gradient of l at theta t — oh, sorry, actually, right now we're going to do gradient ascent, so it's plus, because this is still maximizing a probability; we haven't pulled out a negative term. Sorry about that. But one interesting thing pops out. You should verify this — we won't do it in class, but see if you can: the partial derivative of l(theta) with respect to theta j equals the sum, i from 1 to n, of (yi minus h theta of xi) times xij. This is pretty miraculous if you look at it. What it says is: if I take the derivative of this underlying function, it comes out in exactly the same form we had when we were doing linear regression — it's your prediction error times xi. Now, the prediction itself is different, right? Before, the prediction was just theta dot xi; now it's this h function. But these models are very, very similar: compute how much error I make, then take a gradient step with respect to the data that reduces that error. That's the sense in which these models really are all alike — we're twisting pieces at the edges of how we model things. And after you pop all this out, you can use exactly the same rule. This rule is extremely general, and that's surprising: "take my predictor's error times the input" covers a large class of models. In fact, that's exactly what we're going to generalize in the next lecture, to make sure we understand the breadth of it. Any questions on this part? Please. [INAUDIBLE] Oh, right — great question. Remember the reason we got the minus sign before? Sorry to go back. Look: the minus sign was our nemesis last time. In the Gaussian case, when you go through it, a minus sign pops out of the exponent, and it turned our max into a min — we ended up minimizing a loss. Here, I haven't pulled a minus sign out of anything, so nothing turned into a minimization: we're maximizing the log likelihood directly, and that's why it's gradient ascent rather than descent. You can take my word for it, or just go through the calculation here. Please. What was the subscript on the [INAUDIBLE]? Oh — xij is the jth component of xi. The j's here are the same: I'm taking the derivative with respect to the jth component of theta, the same way as in linear regression, where we calculated the derivative with respect to each component independently. The j-th component of the i-th sample — exactly right. Yeah, I don't have anything to add. Yeah. [INAUDIBLE] Oh, great question. yi is a label — and here, I said this in an extremely confusing way for some reason. yi is fixed: I know yi, I get to see it. So it's like a switch statement: when yi equals 0, the first term is 0 and that factor is just 1, so the other term is the one that's live; and when yi is 1 — which I also get to see — the other statement goes away, and only the first term is live.
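Putting the "same recipe" together, a sketch of the ascent loop with the gradient just stated (the step size, averaging, and iteration count here are my choices, not anything prescribed in lecture):

```python
import numpy as np

def fit_logistic(X, y, alpha=0.1, steps=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))   # h_theta(x_i) for every i
        grad = X.T @ (y - p)                     # sum_i (y_i - h_theta(x_i)) x_i
        theta += alpha * grad / len(y)           # ascent: we are maximizing l(theta)
    return theta
```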
Does that make sense? It switches between the two cases based on what yi is — if I could draw faster with colors; that's a terrible color — it's just a compact encoding of both cases. That's why it's a little awkward. Yeah, great question. Please. [INAUDIBLE] So we compute it. The thing here is this derivative of the log — I'm asserting it; I haven't shown you. The way you compute it is you take the derivative, push it inside, note that yi doesn't depend on theta but this term does, compute the derivative of this character internally, and then it simplifies to what I wrote down there. But you compute it — that's up to you to do. Once the model is in this form, you just use the rules of calculus. Nothing fancy. Yeah, please. There are a few problems like this. Obviously, it's all [INAUDIBLE]. Yeah, exactly. This notation I will use reflexively without thinking: this little l — let me change colors — is the log likelihood, and this big L is the likelihood. This one is on the probabilities; this one is on the logs of them. Great questions. Cool. Please. [INAUDIBLE] It's the log likelihood — the little l. So we'll almost always do our gradient methods on the log. One rule of the course — or not a rule, but a habit — is that you typically use the log likelihoods, for a variety of reasons, one being that they're nice for optimization. That's what this link is meant to show you. Very cool. Awesome. I'll pause a second. All right. So at this phase, we've seen another model. What we're going to see next lecture is a generalization of this with a little bit more math. And the thing is — maybe it's a terrible metaphor, the frog in boiling water — but you're getting more complexity and not noticing it, right? We started with lines; I know how to fit a line. Then some probability distributions came in. Then we had predictors that, instead of just giving you a regression value, were actually giving you a probability, and we interpreted those through log likelihoods. Next lecture, we're going to make the probabilities more complex, and that's going to allow us to generalize. Before we do that, I want to show you one other thing: Newton's method, another solution method, so that we can compare and contrast with stochastic gradient descent — and give you a chance to ask questions, since we were a little rushed at the end of last lecture because I screwed up. OK. Sound good? All right. So let's talk about Newton's method. We're now talking about optimization — forget the modeling side for a moment. Newton's method is the following. We're given some f from Rd to R — it's got to have a scalar output at this point. And what we want to do is find where f of x equals 0. This is, in general, a hard and intractable problem: sometimes it will work, sometimes it won't. If it's a really nasty function, who knows; if it's continuous, great. So why does this have anything to do with what we care about? Just as an aside, remember your thinking here: if you want to minimize, say, l of theta, and it's convex or has a nice shape, that's the same as solving l prime of theta equals 0 — sorry if that's too small. So if I want to minimize something, it's the same as finding the roots of its derivative, right?
Assuming it's convex — a bowl shape — so they're clearly related. All right. So how does this thing actually work? Probably you've seen this method at some point, maybe in a calculus class, and it's a good method. But it has trouble with machine learning, and I want to talk about why. OK. So here we have theta 0, our guess, and we evaluate f at theta 0. Remember, the derivative — because this is one-dimensional — is the direction of maximal increase; it tells us how the function is rising at that point. Now, what we're going to do is follow the tangent line down to where it crosses the axis, and that crossing is our next guess, theta 1. You should kind of convince yourself — it's not true everywhere, but almost always — that if you picture a function in your head that crosses 0, this is a pretty good way to home in on the zeros. In fact, this method is insanely fast for a large class of functions: it's what's called quadratically convergent, which I'll emphasize again. It means you roughly double the number of digits of precision on every step. It's wildly fast in terms of the number of steps it takes. OK. So call this distance delta; theta 1 is going to be equal to theta 0 minus delta. What is delta — how far do we step? Then we have an algorithm, right? And we repeat it, just to be clear: we go back up, compute another tangent, and so on, zooming in — that gives theta 2. So we have to solve this key step. Well, if we look at it, it's just a triangle: f of theta 0 equals f prime of theta 0 times delta. Rise equals slope times run; that's all I'm doing. That means delta equals f prime of theta 0 inverse — which I'm writing in a deliberately obfuscated way — times f of theta 0. So this gives us the rule: theta t plus 1 equals theta t minus f of theta t over f prime of theta t. This is our root-finding algorithm. And as I said, this thing converges crazy, crazy fast: if your error is 0.1, then the next iteration it's 0.01, then 0.0001 — the error squares each step. This is what quadratic convergence means. That's insane — you don't have that many digits on your device, so it gets to machine precision very, very quickly.
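A minimal sketch of the one-dimensional rule, on a made-up example where the root is known (sqrt(2)), so the quadratic convergence is visible:

```python
def newton_1d(f, f_prime, theta, iters=6):
    for _ in range(iters):
        theta = theta - f(theta) / f_prime(theta)  # follow the tangent to its root
    return theta

# Root of f(x) = x^2 - 2: the error roughly squares on every step,
# so a handful of iterations reaches machine precision.
print(newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # ~1.41421356
```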
Now, this algorithm looks great, and in one dimension it's quite good. But there's a problem when we scale up to higher dimensions, and the problem is right here. The way you write the higher-dimensional version of this rule — in the typical way we'd use it for minimization — is theta t plus 1 equals theta t minus H inverse times the gradient. So what have I done here? Let me unpack it; it's a little obtuse. When we generalize to vectors and use this for minimization, theta is our friend in R to the d plus 1, and l of theta plays the role of the f we were using above — more precisely, its gradient plays the role of f, which is why I've written it as a gradient. And this H thing, if you remember your calculus, is the Hessian: H sub ij equals the second partial derivative of l of theta with respect to theta i and theta j — the matrix of all the mixed second partial derivatives. Do you remember this from your calculus class? If not, don't worry; I'm sure a brief refresher will be fine. How big is that thing? It's (d plus 1) by (d plus 1). The gradient is small — it's in R to the d plus 1 — and theta is small, but the Hessian is not small. A couple of things are great about this algorithm, though. First, notice there's nothing extra there: no step size, no alpha. The thing just runs. This is a great algorithm. In machine learning antiquity — 2003 and '04, 2006 — people used this algorithm because it carried over from the statisticians; this is how statisticians would solve logistic regression. So if you go into R — I think up until very recently, maybe even still — and you say "solve logistic regression," it will use this algorithm under the covers. And the reason is, it will get super, super accurate — it fills up all your digits — and it's very, very efficient in terms of how many steps it takes. But each one of those steps, for a machine learning problem, could blow out your memory. If you imagine a machine learning model with a billion parameters, a billion squared is a lot. It's huge; it will blow out your system. And so people have ways of relaxing this over time — they try to bring in cheap second-order information — but it hasn't historically been worth it. OK. So does this algorithm make sense? You recall this algorithm? Happy to answer questions about it. Cool. All right. So let's do a rough comparison — and if you want, please ask questions about anything relevant. I think I have a chart for this. OK. Let's look back at the methods we've seen, because I want to put them in context. We saw the SGD algorithm — pure SGD, where every iteration takes one data point. I want to compare the methods on: how much they cost per iteration — that is, every time I take a step and change the model, how much compute do I do — and how many steps they take to reach a given error. This is the convergence trade-off. So SGD has a pretty bad estimate of the underlying gradient, but each step is super cheap relative to the size of the model, so you take many of them, and you make up for it in some situations. So let's see: the compute per step here is proportional to d; it does not depend on the size of your data set. There are situations where you can train these models without even seeing all of your data — you only sample a small part. Actually, there are models people pay money for where the builders had huge, huge collections of data and only ran on the first 30% of it, and released the model there: "hey, we sampled from it, it was fine, it was fast enough." If you ran even a single step of batch gradient descent, you would have to look at all the data points, which would potentially be much, much slower — it takes time at least n times d. I'll put the theta notation here, although I don't really mean it formally, OK? And then there's Newton's method. Newton's method also looks at all the data points, and it's extremely expensive. We won't talk about exactly how expensive — you can get it down somewhat from the naive cost, and I've written papers with other people, and lots of people have tried to improve this method — but that d squared is going to kill you, OK?
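To make the per-step cost concrete, here's a sketch of Newton's update for the logistic log-likelihood above — one standard instantiation, not necessarily exactly what any particular package does. Forming the d-by-d Hessian is the part that blows up memory for big models:

```python
import numpy as np

def newton_logistic(X, y, iters=10):
    # Newton ascent on the logistic log-likelihood l(theta). Note: no step size.
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))
        grad = X.T @ (y - p)                  # gradient of l(theta)
        H = -(X.T * (p * (1.0 - p))) @ X      # Hessian: -X^T diag(p(1-p)) X, d x d
        theta -= np.linalg.solve(H, grad)     # theta <- theta - H^{-1} grad
    return theta
```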
You can try to get around it, but you have to compute the interactions — remember these second partial derivatives? They're the interactions between every pair of variables you have. That's a quadratic amount of information, and that's where the d squared is, OK? Now, Newton's method is super fast in step count, and I'll be a little glib here, because I'm not going to state the true precise running time. If you want to get to epsilon error, you take on the order of log of 1 over epsilon steps — potentially even a little less, but that's fine. That's super fast. If you have an epsilon of 10 to the minus 16, you take a handful of steps and you're done. That's wild. SGD, after a couple of hundred steps, is likely still spitting out fairly random values. In contrast, at the other extreme, SGD takes on the order of epsilon to the minus 2 steps. These are very vague notions — they only hold under some situations — but I just want to give you an engineer's intuition for how well these work. And the point is that there's a clear trade-off between how expensive each step is and how many steps you have to take. If you had a computing device that made it absolutely instantaneous to look at all n data points simultaneously, then maybe batch gradient descent makes sense, because you would just take good steps really, really fast. If you had an oracle that could compute Hessians really, really quickly — which is what people tried to build for a while — then you would prefer Newton's method. So it's a trade-off between the size of each step's computation and the number of steps. Now, we as machine learners increasingly operate — whether it's a good idea or not; I happen to like it — with huge, huge models. Trillions of parameters are the new thing people care about. It's wild. Computing a trillion squared — if you thought a billion squared was big, a trillion squared is bigger by about a factor of a million. It's huge. So we can't run Newton on those kinds of models. And we tend to train on data sets that are much, much larger over time — what "much larger" means changes with every generation of hardware, every two to four years. We train on the web, on all the images. We still can't train on all the video; there's more video put out there than we can possibly train on. We would like to be able to do that. But eventually, hardware will catch up, and hopefully the same dumb algorithms will work. That's what we're praying for right now; we have no justification for that statement. OK? Now, the one thing that I elided last time: there's a little character that squeezes in between these called mini-batch. We talked about this very briefly. What mini-batch does is, instead of selecting one point, it randomly selects B points. Its gradient estimate is somewhat better than SGD's, but not in a way that changes the theoretical curve. The point is that in modern machine learning, you can process a sample of B things in parallel in the same wall-clock time that you can do one. And that, candidly, is what has pushed us to use these mini-batch methods. There's a little bit of error reduction in the noise — you get a better estimate of the gradient — but really, it's because it's free for the compute device. The way a GPU, or any of these batch-parallel systems, works: you put in B points, and it can do them all in parallel. So that's the thing that's lurking under the covers.
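A sketch of the mini-batch loop just described. Here grad_fn is a hypothetical callback returning the summed gradient of the log-likelihood on a batch, and for simplicity the batch size B is assumed to divide n evenly — both are my assumptions, not part of the lecture:

```python
import numpy as np

def minibatch_ascent(X, y, grad_fn, alpha=0.1, B=32, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        # Shuffle, then take B examples per step; on batch-parallel hardware
        # each batch costs roughly the same wall-clock time as one example.
        for idx in rng.permutation(n).reshape(-1, B):
            theta += alpha * grad_fn(theta, X[idx], y[idx]) / B
    return theta
```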
After the lecture, people asked me: why would you prefer that? You're still taking the same number of steps with mini-batch versus SGD. And it's because of this parallelism underneath the covers. We've built big parallel machines, as humanity, right? Your phone has an ungodly number of teraflops in it — supercomputer-level teraflops from some number of years ago. And we'll continue down that road so that you can get your photos tagged. I mean, I don't know — that's how it works. Anyway, so that's why we do mini-batch. So far so good? Any other questions? All right. The last thing I'll put on the chart is that in classical stats, these were all things people cared about. In classical stats, d was really small and n was moderate. If you talk to your friends in the social sciences — maybe they're not doing the same thing now, but a while ago, when they would solve these models, d would be 100, right? And they really cared about the responses down to very, very fine levels. You'd have to run SGD for a really long time to get that level of accuracy. What machine learning is about, in a really fundamental way, is taking these bigger models and solving them approximately. And weirdly enough, the results end up pretty robust, which is something horrifying — we don't understand it. Please. What does the omega symbol mean? Oh, yeah. It means "at least" — it's big-O-style notation. It means that, asymptotically, it grows at least this fast. I don't want to get too far into it — wonderful question. Please. What about how these algorithms [INAUDIBLE]? Awesome question. So another axis you could compare on is: how well do they handle noise in the data? And SGD turns out to be remarkably robust; there are some versions of this you can prove. There's also folklore around this, which we'll talk a little about. Because SGD is noisy, there's some belief that it doesn't get stuck in local minima as frequently as some of the other algorithms do. If you imagine this picture right here, and imagine that the function went back up — then using all the second-order information races you down to the closest local minimum, which is potentially not what you wanted the entire time. So there's some folklore theory that says SGD is a little bit better there. Now, I say folklore because we can only nail that down in some cases: there are theorems of the form "if your data looks like this, then this happens," or, for certain things I'll show you in a couple of weeks — solving certain matrix equations — you can prove that it happens. So that's another axis. The other axis, which you might think about if you're an optimizer, is: how numerically stable is the underlying algorithm? And here's the thing that's pretty wild about machine learning: the trend has been not toward more numerically stable methods. If you care about how a computer works — you have doubles inside, double-precision floating-point numbers. Now, if you saw NVIDIA's last announcement, they're going down to FP8, which means instead of 64 bits for a number, they're using only 8 bits. There are people right now training with integers. Those methods we really only know how to run with SGD, because with the other methods, you can't get enough meaningful information through that little precision.
So there's another argument about how statistically robust these methods are and how numerically stable they are. They're not very numerically stable; there are a lot of tricks, and we're pretty primitive there compared to the optimizers of the world. But yeah, wonderful questions. Awesome. Fantastic. Any other questions? OK, great. So we're going to end a little bit early today — I know that in general you have to stay till 4:45, but today we got through it. Next time, we're going to talk about exponential family models. These are going to be models that have a more complex link function and allow us to model more of the world that's around us, and interesting noise. See you. Have fun. I'll stick around for a couple of questions. |
Stanford_CS229_Machine_Learning_I_Spring_2022 | Stanford_CS229_Machine_Learning_I_Kernels_I_2022_I_Lecture_7.txt | Hey, everyone. I guess let's get started. Please take a seat. So today we're going to talk about kernel methods. I'm going to take a sec to define what they are. In some sense, this is a general technique for dealing with nonlinearity in your data. We're going to see other general techniques later, like neural networks, but this is one of the main techniques for dealing with nonlinearity from before deep learning took off. And we will introduce this technique mostly in the setting of supervised learning, in the kind of discriminative-algorithm type of setting, but these kinds of techniques can also apply to unsupervised learning or other settings, with similar ideas. OK, so let me start with an example. So suppose you have some data — I'm thinking about supervised learning, just a simple regression setting, where you have some x and you have some y, and the data looks something like this. I'm just drawing something; I'm making up some data. And in the first two weeks, I think Chris talked about fitting a linear regression, right? So that means you'd fit some linear model on top of this. But you can see that I designed this data such that a linear model is probably not correct. Apparently, in this data, you may want to use something like a cubic polynomial. And if you have done the homework — I think there's one more question, which actually also talks about how you deal with this kind of situation, where your data is not linear in x, so you shouldn't assume a model linear in x. The homework is still due this Wednesday, right? Yeah. So it's one of the last homework questions, but you don't have to know exactly the homework question — I'm basically going to review it to some extent. So how do you deal with this kind of case, right? A simple way to do it is the following. You say: I'm going to fit a cubic polynomial to this data. So basically, I'm going to assume my model is something like h theta of x — a function of x parameterized by theta: theta 3 times x cubed, plus theta 2 times x squared, plus theta 1 times x, plus theta 0. So here, my x is a real number, just for the sake of simplicity. So I have this nonlinear model. And sorry — any questions? OK. So you want to fit this, right? How do you deal with this? One thing to realize is that while this model is nonlinear in x, it is actually linear in theta — a linear function of theta, a nonlinear function of x. And when you really do linear regression, it doesn't matter whether the model is linear in x or not. What really matters is whether it's linear in your parameters, because you are optimizing over the parameter space. So in other words, more explicitly, what you can do is turn this into a standard linear regression problem by a simple change. You can define this function, phi — maybe I should have called it something else, but let's just call it phi — which maps x to this vector. So phi is a function that maps R to R4. And then you can rewrite this model that's nonlinear in x as something that looks like a linear model — almost exactly the same as the linear model we have talked about, right? So this is a vector.
Theta is something like theta 0, theta 1, theta 2, theta 3, times 1, x, x squared, x cubed, which you can write as theta transpose times phi of x. So in some sense, after you do this simple rewriting, right, now h theta of x is linear in theta, is still linear in theta. And also, it's linear in the coordinates of phi of x. So this is how you make everything linear. And then you can, in some sense, invoke the linear regression algorithm that you have used. What does that mean? That means that you basically can just view this as your new input. So before, the input was x. And now, the input becomes higher dimensional. Before, the input was one-dimensional x, right? And now, the input becomes phi of x, which is a four-dimensional vector. And that's it. So in other words, suppose before we have a data set, which is x1, y1, up to xn, yn, where x is a scalar-- the input is scalar-- and y i is always a scalar, I'd say. And now, in some sense, you just turn this into-- you can turn this into a new data set. And this new data set is going to be-- the input will be phi of x i. And the label or the output is the same. Of course, your input dimensionality becomes 4 now instead of 1. But we are able to deal with high-dimensional inputs for linear regression. So basically, you can just now view this as our new inputs. Or sometimes people call it features. I guess, probably, we have talked about this. Features is a word that sometimes has multiple or similar meanings in machine learning. So in many cases, feature just means inputs. But I'll clarify in a later set of situations. Sometimes features means slightly different things. But sometimes you can just say this is a new input. This is a new output. The new output will be the same. And you just work on linear regression on a new data set. And that's how you learn a cubic function for this data set-- by turning it back and reducing it to a linear regression problem. And I think this is a piece of the homework-- I think it's Q4, but maybe we changed the order of the questions a little bit. Right, any questions so far? How do you exactly get to know that the data might actually be better if you apply that phi function? Do we just experiment on all of the data to see? How do I know this is better than what? Yeah, how do we know that it might be better to apply this transformation to the data? Than no transformation? Yep. OK, so yeah, so I guess it's just-- of course, eventually, you have to validate your method and see how it works. But the intuition-- so here, I'm not saying it's better. I'm just saying that if you believe that you should use a cubic function, this is how you implement it, right? So whether you believe that you should use a cubic function instead of a linear function, that depends on the properties of the data. Maybe you should experiment with it. You should probably look at the data to see whether it is a linear function. There are other ways to decide which model we're going to use. I think, actually, two or three lectures later, we are going to talk about how you pick different models. OK. Yeah. OK, so yeah, so exactly. So this is a way to implement a cubic function at least for one dimension, OK? And what you can do is the following. So how do you proceed, right? So you basically just repeat what we have learned about linear regression-- you just write down the algorithm on this data set, right, and implement a linear regression algorithm on this data set. I'm going to repeat the procedures. But this is basically just exactly-- you just invoke what we have learned from the first two weeks.
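To make the reduction concrete, here is a minimal NumPy sketch of what the lecture describes: build phi(x_i) for each point and hand the transformed data to an ordinary least-squares solver. The function name phi and the synthetic data are illustrative assumptions, not from the lecture.

```python
import numpy as np

def phi(x):
    """Map a scalar x to the cubic feature vector [1, x, x^2, x^3]."""
    return np.array([1.0, x, x**2, x**3])

# Made-up 1-D data with a cubic trend plus noise, just for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=50)
y = 0.5 * x**3 - x + rng.normal(scale=0.3, size=50)

# Stack phi(x_i) as rows and solve ordinary least squares on the new inputs.
Phi = np.stack([phi(xi) for xi in x])            # shape (n, 4)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(theta)  # roughly [0, -1, 0, 0.5] up to noise
```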
But I'm going to repeat it because this part, I'm going to use it. We use it again to demonstrate something else, to demonstrate the kernel method. So how do we redo this? So if you do linear regression on this new data set-- so guys, let's say we use gradient descent, OK? So let's use gradient descent. And so the loss function is something like this, right? You compare your label with-- you know, suppose you compare a label with the model, which is applied to phi of x i, and square it. Recall that if you do the standard linear regression, what will happen is that for the standard case, then this will be x i. And this will be x i in the most standard case. And what I'm doing here is just replacing that x i by phi of x i. And then, if you work with it, and you said, what's the formula, the formula would be something like-- we are going to have a loop, right? And each time, you are updating your theta by theta plus some learning rate times the gradient. And the gradient, if you compute it, it will be something like y i minus theta transpose phi of x i times phi of x i. And for the standard case then, these two things will just be both x i. If you look up the lecture notes, they will be just both x i. And now they become this transformed version. I guess the [? question, ?] this notation, like this is supposed to be that you update your theta by theta plus this. Does the notation make sense? Yeah, OK. Sounds good. So in some sense, it's just a shorthand. If you really want to be very-- if you read it in a mathematical paper, maybe they will call it theta t plus 1 is equal to theta t plus this. [INAUDIBLE] x i if you put all of that. Is it the same? Oh, I'm saying that in the standard case, this will be x i, in the first two weeks. Now I just replaced it, yeah. All right, OK, so. And there are some small differences, which is that in the standard case, the dimension of theta is the same as the dimension of x, right? So suppose you say x is of dimension Rd-- maybe I should use black color just to be more consistent, so-- and phi of x is a function that maps Rd to some other dimension. It doesn't have to be exactly the same dimension. But before, we mapped from one dimension to four dimensions. But you can map from any dimension to another dimension. So let's call this dimension p. So p is the dimension of the so-called new features, the new inputs. And that means that your theta also has to be in the same space as phi because your theta will be a linear combination of the transformed inputs. So theta is also living in this space. So basically, we are updating in this p-dimensional space instead of the d-dimensional space. And let me also just briefly talk about the terminologies. So I think this is often called a feature map, so. And this phi of x-- maybe I think I should call this something like this. Maybe the right notation should be this. So phi, as a function, it maps from Rd to Rp. And phi of x, I think people often call this, at least in this context, call these features. And if you look at these kinds of papers or research, in this context, often, people call x the attributes just to distinguish it from the features. The terminology doesn't really matter that much. In most cases, you can infer the terms if you really know what they mean, so. But I'm just giving you some kind of context so that if you read a paper which says that x is the attributes and phi of x the features, then you know what they mean.
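The gradient-descent update described here, written in code, is just the standard batch update with x i replaced by phi(x i). A minimal sketch, assuming Phi is the (n, p) matrix with phi(x_i) as rows; the function name and learning-rate value are my own illustrative choices.

```python
import numpy as np

def fit_by_gradient_descent(Phi, y, lr=1e-3, iters=5000):
    """Batch gradient descent for least squares on the transformed inputs."""
    n, p = Phi.shape
    theta = np.zeros(p)                        # theta lives in R^p, same space as phi(x)
    for _ in range(iters):
        residual = y - Phi @ theta             # y_i - theta^T phi(x_i), for all i at once
        theta = theta + lr * (Phi.T @ residual)  # theta += lr * sum_i residual_i * phi(x_i)
    return theta
```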
But sometimes other papers would call it differently. Maybe some people would call x raw features and call these features. There are multiple ways to name them, which is a little unfortunate. OK, anyway, but feature map-- I think people always call this a feature map. Maybe sometimes we call it a feature extractor. But they all mean the same thing. And regarding this word features, I guess maybe let's just-- so my extrapolation from reading all of these papers, I realized that features, even though sometimes they mean different things, often-- at least from my statistical-learning inference of this word-- this word seems to mean, it always means, something that you use a linear function on top of. So basically, if there is a linear thing on top of something, then that something will be called features. So basically, it often refers to those variables in which your model is linear, yeah. OK, so are we done, right? So this sounds great, right? So everything is so clear and simple. Nothing seems to be complicated. You just replace x i by phi of x i. But we are not done. So why? Because in some cases, this is great. For example, in the example of the homework question, if you look at it, we basically ask you to implement exactly something like this. I think we ask you to implement a different algorithm. We didn't ask you to implement the gradient descent. Instead, we ask you to implement the-- you use the exact solver. You use the inverse of some matrix, some gradients kind of things, but the same thing, where you just apply our existing algorithm on this new data set. But this is not done because of the efficiency issue. But I think I saw a question. Yeah? So why is it [INAUDIBLE]? So why is 1/2 here? Yeah. I think this is so that the gradient doesn't have a 2. It doesn't really matter. But if you have 1/2 in front, the gradient has no 2. It's just convention, yeah. I think if you read the first two weeks, also, we have a 1/2. Typically, there is a 1/2. OK, so-- all right. OK, so why we are not done. The reason is that sometimes there's an efficiency issue with this kind of approach. And the reason is that this feature map can be very high-dimensional. So p can be very large in some cases. For example, suppose in this case, p is 4, which sounds pretty fine. But what if your x is high-dimensional? So here, x is one-dimensional. And p, the features, are four-dimensional. But what if you say I'm going to have x1 up to xd, right? I have d columns in my raw data. And now if I want to create something like this, that has all the cubic monomials of the coordinates, then what you have to do is that you're going to have something like phi of x needs to be something like a gigantic vector, where maybe you have 1 here. You have x1, x2-- all the degree 1 monomials of the coordinates. And then you have the degree 2 things, x1 x2. Maybe we should start with x1 squared-- x1 times x1, x1 x2, up to x1 xd, and then x2 x1, so on and so forth. Maybe x2 x1 can be omitted because there is a repetition. But it doesn't really change the point. So you're going to need to have a very long vector that lists all the possible combinations, the products of these coordinates. So you're going to have maybe x1 cubed up to, eventually, xd cubed, something like this, so. And only when you have this, then you can claim all the degree 3 polynomials can be written as a linear function of these features, right? And then what is p? What's the dimensionality of these new features?
So if you count it-- so I'm going to do some rough counting. So this one, there's 1. And for the degree 1 things, you have d of them, right? And for the degree 2 things, I guess-- so you're going to have degree 2 things maybe. So there are d squared of them. Let's say we ignore the repetitions. The repetitions only change a constant factor, which doesn't really change the point. So you have d squared. And then for the degree 3 things, you're going to start from x1 cubed and go up to xd cubed. And with all the combinations, you're going to have d cubed of these terms. I guess this is a little bit-- OK. Right, so then this means that even though-- of course-- the good thing is that every degree 3 polynomial can be written as theta transpose phi of x. But the bad news is that phi of x is in this Rp, where p is something like d cubed. Technically, in this case, if I do this, p is equal to 1 plus d plus d squared plus d cubed, and the dominating term is d cubed. And this is a very high-dimensional vector. So if you think about this-- for example, let's do some back-of-the-envelope calculation. Suppose d is 10 to the third-- say, 1,000-- then p will be a billion, so. And why this is a problem, this is a problem because if you run this algorithm, then how many iterations, how much computation do you have to pay here? So if we look at this, I guess if you count how much computation we have to pay, this product requires O of p computation-- O of p because you are taking the inner product of two p-dimensional vectors. You have to pay p numerical operations, p multiplications. And you also have to sum. The sum also has p operations. So it's O of p operations for this inner product. And then it becomes a scalar. And you take the difference between this and this. That's fine. And then you multiply by this, which also-- this is a scalar times this, which still takes O of p operations. So evaluating this whole thing takes O of p operations. That's still kind of OK. But you have to have-- take the sum. And that's a multiplicative factor because you have to do it repeatedly. So in total, evaluating the whole thing takes O of np multiplications or additions. So you have to take O of n times p time. So basically, the runtime depends a lot on p. It's linear in p. And if p is something like 10 to the 9, then your runtime will be 10 to the 9 times the number of samples, which is often prohibitive. So this is very, very slow empirically if you really just implement this, when d is large. Any question so far? OK, so the main goal of this lecture is to find out a way to speed up this algorithm. That's the kernel trick. What's the kernel trick about? It's about how to implement this nonlinear model using this idea but implementing it in a way such that you can be computationally efficient. So the final algorithm will be equivalent to this algorithm but will just be faster and actually much, much faster. So we're not going to manipulate [INAUDIBLE] to be able to focus on that [INAUDIBLE]. And phi is going to stay the same? We are going to try to implement this algorithm in some other ways. But you'll see phi will-- I'm not sure whether I would say yes to-- we will do something with phi, yeah. Right, so the whole point of the rest of the lecture is to have a faster algorithm and maybe just a side philosophical remark. I think machine learning is really a lot about-- machine learning is about computational efficiency, even though these days, sometimes you can use GPUs, so.
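The back-of-the-envelope count above is easy to reproduce; a tiny sketch of the blow-up (the loop and its print format are my own):

```python
# Rough count of the cubic feature dimension p = 1 + d + d^2 + d^3.
for d in (10, 100, 1000):
    p = 1 + d + d**2 + d**3
    print(f"d = {d:5d}  ->  p = {p:,}")
# d = 1000 gives p on the order of 10^9, so each O(n*p) gradient step is prohibitive.
```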
But I think at the heart, at least a good fraction of machine learning is about computational efficiency because many of these kinds of statistical questions, in some sense, you can say are studied well, like in statistics. And at least in machine learning, I think, at least comparatively, we focus a little bit more on the computational part. But of course, computation is only one part, right? So you also have to care about the statistical perspective. But I'm just saying that computation is important. It's not merely a separate thing that you can resort to an oracle for. So in many things, you have to really think about computation because otherwise, you cannot implement the algorithm. You cannot run your algorithm on a large dataset. Then you wouldn't see good results. And this is one way to speed up things. And we are going to see other ways. OK, so how do we speed this up? So here is a key observation to speed this up. So the observation is that the theta can always be represented as-- you may find this a little bit surprising at first sight. But I'm going to explain. So this vector theta can always be represented as a linear combination of the features for some scalars. The beta i are the scalars for the linear combination, for some beta 1 up to beta n. And each of these betas is in R. And I guess there's a condition here-- so if theta at time 0 is 0. So I guess I'm going to change, just for the sake of simplicity. I'm just going to decide my initialization for this algorithm is that theta is equal to 0, is initialized to 0. So then I'm not going to have this condition. So I'm going to claim-- my algorithm-- I'm going to say that my algorithm just starts with theta equal to 0. The initialization probably doesn't matter that much because either way, you're going to update a lot of times, so. So if you start with 0, then in this algorithm, no matter at what time, theta can always be represented as a linear combination of our features. And why this is useful, why will this be useful? You will see, roughly speaking, the reason why this is useful is because theta is a p-dimensional vector, right? And I'm in a regime where p is probably-- I'm in a regime where p is larger than n. So let's say p is really, really big, extremely big, right, even bigger than n. So theta is a p-dimensional vector, which is extremely big. But now you represent this vector by the scalars. And you have n scalars. So in some sense, you reduce the degrees of freedom in your representation at least, right? So before, we had to represent p numbers. But now you only have to store n numbers if this claim is true. So in some sense, you will see that-- I will tell you more about the details. In some sense, you'll see that the way we speed this up is that we never store theta explicitly. We always store the representation of theta, where the beta i act as my surrogate for theta. And that's enough for many of my computations. You create a [INAUDIBLE] vector, [INAUDIBLE] isn't that a scalar? Isn't the sum there a scalar? This is a scalar. But this is a vector, right? This is a p-dimensional vector. So this is a scalar, so. But you don't have to store this because these are already there in some sense. These are not even changing, right? So only the beta will be changing, where beta is a surrogate for theta in some sense. So you only have to store how beta changes. And that implicitly tells you how theta will change over time. We'll see exactly how this works.
Right, so-- but maybe before going to that, let me just show you why this is true, why this is true. The reason is actually relatively simple if you look at this. So the reason is that, pretty much, in short, in a nutshell, the reason is that every time you update, you always update theta by a scalar. This whole thing is a scalar. This parenthesis is a scalar, whatever scalar it is. You always update by a scalar times this vector, phi of x. So basically, every time you update, your update is of this kind of form. It's a linear combination of phi of x i. So that's why you keep having this form. If you want to prove the statement more formally, I think you have to do an induction. So let me also try to do that. So by induction-- so first of all, you check iteration 0. So at iteration 0, indeed, our theta is equal to 0. That's my choice of initialization. And this is indeed equal to a linear combination of the feature vectors because I just have to choose my beta i to be exactly 0, right? That's easy. And maybe I can also, just to build up intuition, let's also look at iteration 1. This step isn't necessary if you really care about the formal proof. This is just for intuition. At iteration 1, what you do, you update theta to be equal to the old theta, which here is 0, plus this gradient, which is alpha times the sum over i from 1 to n of y i minus theta transpose phi of x i, times phi of x i. And this thing is a scalar. Actually, it's equal to y i because theta is 0, right? This theta was 0 in the previous iteration. So this will be just y i times phi of x i. Theta was 0. And you plug in 0 here. You get this. So this is indeed a linear combination of the feature vectors. So this is a vector. This is a scalar. And so alpha times y i will be my beta i in this one. So this thing plus this thing together will be my-- maybe I'll just write alpha here. If I write alpha here, then this whole thing will be my beta i at iteration 1, OK? And for the future steps, I wouldn't be able to explicitly write beta i that carefully. But I'm going to use an induction, right? So suppose at iteration t, I already know that theta is equal to something like the sum of beta i phi of x i. This is my inductive hypothesis. And then at the next iteration, you can see that theta is updated to be equal to theta plus alpha-- OK, I'm just going to copy this formula again. Actually, I just want to prove the induction. I don't even have to plug in what theta is because if I just care about the induction-- so I know this is a linear combination of phi of x i. And this is a linear combination of phi of x i. Then the sum of them will be a linear combination of phi of x i. But I'm going to do a little bit more detailed steps because the steps will be useful for me as well. So I'm just going to plug in everything. So I'm going to plug in-- I'm going to replace theta by my inductive hypothesis. And then I'm going to replace this theta by my inductive hypothesis as well. So I'm going to have alpha times-- OK, what is this? So theta is equal to-- let me see. One moment. There's something-- OK, I guess let me-- maybe let me just keep this, so. And this is equal to just beta i plus alpha times y i minus theta transpose phi of x i. I didn't do anything really complex. It's just a super simple manipulation. I know this whole thing is a scalar. Whatever scalar it is, this will be my new beta i in the next step. Any questions? [INAUDIBLE] random initialization of theta? Yeah, this doesn't work for random initialization.
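The induction can also be checked numerically: run the gradient-descent update in theta-space while tracking the corresponding scalars in beta-space, and confirm theta stays equal to the beta-weighted sum of features. A sketch under made-up dimensions and data (all names and constants mine):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(8, 20))      # n=8 samples, p=20 features (made-up sizes)
y = rng.normal(size=8)
lr = 1e-2

theta = np.zeros(20)                # initialized to 0, as the claim requires
beta = np.zeros(8)                  # surrogate coefficients, one per sample
for _ in range(100):
    scal = lr * (y - Phi @ theta)   # the scalar in front of each phi(x_i)
    theta = theta + Phi.T @ scal    # the update in theta-space
    beta = beta + scal              # the same update tracked in beta-space

print(np.allclose(theta, Phi.T @ beta))  # True: theta == sum_i beta_i phi(x_i)
```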
But you don't need to use-- actually, initialization at 0 is actually probably the best. Maybe there is some reason for this. But maybe let's discuss that offline, yeah. So initialization, it does matter a little bit. But let's say we just take the initialization to be 0. OK, so now let's proceed with my plan. My plan was that I'm going to replace-- so far, I just wanted to prove the claim, right? The claim is that I can always represent theta by beta. And now I'm going to just only mention beta. So basically, my plan is I'm just only going to maintain beta, but not theta. So that means that I'm going to start from p parameters. Before, it was p parameters. And now it becomes n parameters. And if p is very large, then this means some saving. So that means that I have to understand how the beta is changing, right? So I have told you that beta exists, right, so that theta is equal to this. But I want to have an update of beta that only-- so here, right now, beta depends on theta. So if I want to compute beta, the new beta, I have to know the old theta, which kind of defeats the purpose, right? So if you have to know what's the existing theta, then you have to compute it, right? So then you waste p time. So what I'm going to do is I'm going to find out how beta depends on beta itself without going through theta. So that's what I'm going to do. So where should I? Maybe-- yeah, I'll write on the left-hand side. I guess, maybe I'll just write somewhere here so that there's some locality. Maybe I can erase this. So this is the update for beta i. So how do I update beta i? So beta i is supposed to be equal to beta i plus alpha times this, right? Times this. So this is my rule, my starting point. I know that beta i is updated by alpha times y i minus theta transpose phi of x i. But this view is not great because it does have theta here. So I have to maintain theta. I'm going to get rid of theta by plugging in the definition of theta in terms of beta. So this is equal to beta i plus alpha times y i minus-- what's the definition of theta? So if you-- sorry, what's the representation of theta? So theta is equal to this. This is the relationship between theta and beta. This is a j because I already have an i here. And this transpose, times phi of x i, and close the parenthesis. OK, and then I'm going to continue. So we reorganize this a little bit. So I'm going to get beta i plus alpha times y i minus-- I'm going to put this inside-- the sum over j from 1 to n of beta j, phi of xj times phi of x i. This means inner product, right? So a transpose b, I guess this is just a's inner product with b. That's what I'm using, so. And this is i. And this is j. OK? So what have I achieved? I got rid of theta. So now it's an update rule from beta to beta itself, right? So if you know the old beta, you can compute the new beta. That's what this is saying. But did I really save any time? Not yet, right, because if you want to compute the new beta from the old beta, you still have to do this inner product. This inner product between two p-dimensional vectors still takes p time. And you still have to sum over n indices, so it's still O of np time. So I haven't really saved much just because the phi still shows up here. However, here is the magic. The magic is that somehow you can compute-- so this is still O of np time. And this is, so far, not really much saving. But there are two things we can notice to make it faster. So one thing is that this inner product can be preprocessed.
So you don't have to compute it every time because this is just something that doesn't really change over time in your algorithm. By over time, I mean in the-- as your algorithm is running, this quantity is not changing at all for every i, j, right? The beta will be changing. But this wouldn't be changing. And another more important thing is that-- so this means that you can do it once and you don't have to compute it every time. So at least you only have to do it once in the very beginning. And another thing is that, oftentimes, this inner product actually can be computed faster than you thought, right? The trivial way to implement this is that you just compute this vector, this vector, and you take the inner product. But actually, you can do some math to speed this up in many cases. So this can be computed without even evaluating phi explicitly. If you have to evaluate each of these vectors, you are going to waste p time. But sometimes you can do some math to not even evaluate the phi. That's what I'm going to show next. Any questions? Basically, we are representing something in the space of Rp in terms of the linear combinations of vectors [INAUDIBLE]. So actually, we have a smaller number of vectors than the dimension of the space itself. So does it make it a low-rank approximation of the initial result [INAUDIBLE]? Or is that precise? It's still precise because so far, you see that I have never done any approximation, right? But your question definitely makes sense. It's a very legitimate question. So why can you somehow magically save any degrees of freedom, right? I think the reason is that-- this is maybe a little bit vague at this level. But the reason is that even though you had these p degrees of freedom in theta, when your data is not as large as p, you cannot use all of them. You are not fundamentally using all the dimensionalities or all the degrees of freedom in the theta in some sense. If you have n samples, the maximum degrees of freedom you can have is [INAUDIBLE]. And that's why you can do this even without any approximation, right? So here, it's really just a reimplementation of the same algorithm. We didn't lose any-- it's not like you're doing any approximation. That's the point. [INAUDIBLE] can we randomly initialize the beta coefficients? When we initialize the beta? Yeah. I think no. I think you have to initialize with beta equal to 0. All of this only works with 0 initialization. So theta has to be initialized to be 0. And beta also has to be initialized to be 0 just because that's the correspondence at the beginning. So there is a number of examples which is [much] bigger than the [INAUDIBLE] p [INAUDIBLE]? If [INAUDIBLE], so then this wouldn't help you. It probably would hurt you, yeah. We will discuss a little bit at the end when this will be useful. I think this will be useful only when p is really, really big. Yeah, but sometimes p is really big like in the case [INAUDIBLE]. But even if beta [INAUDIBLE], beta still [INAUDIBLE]. Yes, yes. OK, so if your question is, what happens when my beta is random, I think, in most of the cases-- maybe all cases, I don't know. In most cases, I think you probably would get the same answer eventually when you run this algorithm in the beta space. But the correspondence is only shown when beta is 0. So there's another reason why-- OK, so maybe here is the exact answer to this. So suppose you only care about what happens when beta is initialized to be random. Then I think it still can work.
And it can actually still give the same solution in most cases as when beta is initialized to be 0. But that's a different reason. There's a different reason for this to be true, because you're doing some convex optimization-- eventually, it always converges to the same solution. Probably that's what you mean as well. But this correspondence between theta and beta, on this level, it only works when beta is 0. Does this make sense? OK, any other questions? For the first thing you said, about how phi of x i times phi of xj can be preprocessed. We still have to calculate-- we're preprocessing it for every single combination, right? How is that speeding anything up? Right, so that means you only have to do n squared pairs, right? So that's still a lot of time. I agree. But the difference is that if you do it on the fly, you have to do this for all i, j pairs for every iteration. So I'm saying that, comparatively, this is faster. I'm not saying, absolutely, it's very fast. So suppose you compute this inner product every time-- then you still have to do it for every i and j because here, you have a j here. You have a sum over j. And also, you have to update for every i. So you basically have to do all of these pairs at every iteration. So I'm saying that at least you can save that, at least you don't have to do it for every iteration. So the number of iterations that we want to happen will be higher than the number-- how many iterations will we end up having? The number of iterations, I think, that's a little bit tricky. But let's say you have t iterations, right? So before, we had t times this number. And now you just pay that once. That's the only thing I'm saying. OK. The biggest saving is probably coming from here, which I'm going to show right now. Each of these inner products is actually cheaper, much cheaper than we typically think. OK, so why is that the case? So of course, this is not a universal statement. It's not like for every phi, you can do this. But for many of the phis that we design, that make sense, that intuitively make sense, you can speed it up. And actually, later on, this becomes a principle. You only design phis such that this is fast. You don't care about any other phis. But let me say-- let me not get into that. Let me just talk about this particular one. So for this phi, I'm going to show you why this can be fast. And the reason is very simple. You can just do some math to make your formula easier. So for the phi that I defined here, so phi of x times phi of z-- this, what is this? I'm abstracting a little bit. I have x and z, just the two things, right? You can think of this as x i, this as xj. I'm just using more abstract notation. So this is just the inner product of two vectors. One vector is 1, x1 up to xd, x1 squared, so on and so forth. And the other one is a column vector, which is 1, z1, z2 up to zd, and z1 squared, so on and so forth, something like this. And you just take the entry-wise product and take the sum, right? So this times this will be 1. This bunch of things times this bunch of things will be the sum of x i z i, i from 1 to d, OK? So now let's do the degree 2 parts, the degree 2 monomials. So what are these quantities right here, the degree 2 things? So they are all of the form x i xj, right? And here you have this bunch of z's. They're all of the form z i zj, right? And you take the corresponding product and take the sum. So basically, what happens is you loop over all the possible choices of i and j. And you take x i xj times z i zj.
And then you do it for the degree 3 part, which is the same. So you loop over. So all the degree 3 parts, they're of the form x i xj xk. And on the other side, you have the form z i zj zk. And you take the sum over all possible combinations of i, j, k. So i, j, k all run from 1 to d, OK? So now let's simplify this. And so you can simplify this by-- so this one, we don't simplify. This is already pretty simple. Notation-wise, you can simplify it to x inner product z. But this doesn't really change anything. It's still the same computation. But then for the second term, what we can do is that we can-- we have a double sum. We can factorize the double sum. So you can write this as-- you first take the sum over i. You look at all the terms that depend on i. That's x i and z i. And then you also have the part that depends on j, so xj, zj. So in some sense, I'm just using a-- abstractly speaking, I'm using the fact that if you have the sum of ui times wj, where i is from 1 to d, j is from 1 to d, then this is equal to the sum of ui times the sum of wj, right? So that's just the form that I'm using. And ui corresponds to x i z i. And wj corresponds to xj zj. That's how I use this abstract formula, so. OK, and then I can do the same thing for the third one, which will be equal to-- again, you factorize based on i, j, k. You collect all the terms that depend on i, which is x i z i. And you collect all the terms that depend on j, which is xj zj. And you collect all the terms that depend on k, which is xk zk. So what's good about this? What's good about this is you can see this one and this one are actually the same thing. You are just changing the way you index the terms. But anyway, you are taking a sum over all the terms. And it doesn't matter what indices you use. And the same for all of this-- all of these are all the same. All of these are equal to x transpose z, the inner product of x with z. So essentially, what you got is just the x inner product with z, plus the x inner product with z, squared, plus the x inner product with z, cubed. And why this is helping you in computation-- the reason is that now it takes O of d time to compute x inner product with z, right? That's O of d time, not p, right? This is just your raw feature, your input dimension. And then after you get this, you can take the power, the second power. This one is just a scalar after you get x inner product z. You just take a scalar square. You get this. And you take a scalar cube. You get this, right? So the whole runtime is really just the time for doing this inner product, plus a few more operations plus some constant, because you just take the powers. And you take the sum. So the total runtime is-- the total time is also O of d. I'm ignoring the constant. I'm assuming d is big. Any questions? So I guess if we choose a phi smartly and appropriately, then we might not even need to compute the whole-- we might not even need to apply phi to the entire data set. Right, so here you don't need it, right? So here you don't have to explicitly compute phi because-- OK, so exactly. That's a good question. So basically, so what happens right now-- so basically, you compute this quantity. But you don't have to know what phi is. You just compute this quantity using this formula. And you compute all of these in advance. And then you run this algorithm. So let me write down this more formally. So formally, what you do is you-- let me also introduce some notation to kind of abstractify this because it would also be useful, especially if you read other papers and related works.
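The algebra above collapses to a few lines of code. A sketch of the O(d) evaluation for this cubic-monomial feature map (the function name is mine; the leading 1 comes from the constant entry of phi, which the board summary folds in):

```python
import numpy as np

def cubic_kernel(x, z):
    """Inner product of the cubic-monomial features, without ever forming phi."""
    s = float(np.dot(x, z))        # one O(d) inner product on the raw inputs
    return 1.0 + s + s**2 + s**3   # then just scalar powers and a sum
```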
So there is a notation. People call this-- we define the so-called kernel function to be precisely this quantity we care about-- the inner product between features. This is called the kernel function. So this kernel function, as the definition shows, is something that takes in two vectors and outputs a single scalar. And the algorithm is just that-- so if you use the kernel method, basically, we have shown you the steps. But now I'm going to group everything together. So basically, the final algorithm is the following-- and maybe here. So what you do is you say you first compute-- you pre-process all of the inner products. So you compute k of x i, xj. Recall that this is defined to be the inner product of these two things. So you can compute this for all i and j from 1 to n, OK? And then you say you have a loop. So I guess you start with, say, beta 0. And so beta is a vector in Rn. So beta is the collection of beta 1 up to beta n. And so you start with beta 0. And then you do a loop. And this loop will just be something like, for every i from 1 to n, I'm going to use this update rule for beta. So what does that mean? That means that I'm going to update beta i to be beta i plus alpha times y i minus the sum over j of beta j times this inner product, this inner product, which is something I've computed and I've denoted by k of x i and xj. Right, so that's my algorithm. And now we can take another kind of accounting to see what's the runtime for this particular polynomial feature. So the runtime is-- let's see. OK, so I guess you probably can also get the runtime. So to compute this, we are using a formula like this to compute the part that we're using in this formula. So for every pair, you need O of d time. So this is O of d for each i, j, so O of n squared d in total. Right, and here our runtime would be-- let me see. I guess I don't have it in my notes somehow. I don't know why. Anyway, but I can do it on the fly. So you already know this number. So basically, we need to pay O of n time to compute this sum. And you also have to update this for each of the i. So for every i, you have to pay n times. So in total, this is O of n squared time, n squared time per iteration. So the catch is that there's no p involved anymore whatsoever. So you just only have n and d. And if your n is small enough, then in some cases-- if n squared, for example, is less than np, then you are winning, right? Recall that before, when we ran this, every iteration took O of n times p operations. And now every iteration takes O of n squared operations. So if n is less than p, then you are winning. And in many cases, n is much smaller than p. It's because p could be d to the third, right? That's just very big, yeah, so. At least I'm not saying this is universal, by the way. Actually, I'm only discussing when this kernel method is better, especially-- as you can see, it's better when p is very big and n is small, right? But in those cases, you do save a lot of time. [INAUDIBLE] to do stochastic where you can fully update one beta at a time [INAUDIBLE]? You do update one beta at each time? So I think that would be called coordinate gradient descent instead of stochastic gradient descent, right, because each of the beta i should be considered as a coordinate of your parameter. But that is just terminology. I think if you update each of them one by one, you probably have to use a smaller alpha. But in theory, it should still work if you use a small enough alpha.
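A minimal sketch of the full kernelized training procedure just described, assuming some kernel function such as the cubic one above. The names are mine, and the beta update is vectorized so all coordinates update simultaneously, which matches the batch-gradient derivation:

```python
import numpy as np

def kernel_regression_fit(X, y, kernel, lr=1e-3, iters=1000):
    """Kernelized gradient descent: maintain beta in R^n, never form theta or phi."""
    n = X.shape[0]
    # Pre-process: the n x n matrix of pairwise kernel values, computed once.
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)                  # initialized to 0, matching theta = 0
    for _ in range(iters):
        # beta_i <- beta_i + lr * (y_i - sum_j beta_j K(x_i, x_j)), for all i:
        beta = beta + lr * (y - K @ beta)   # O(n^2) per iteration, no p anywhere
    return beta
```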
But I don't think you would gain much by doing-- maybe you can gain a little bit. But it wouldn't be a fundamental difference. Maybe it would be a little bit faster. Any other questions? So you said that the final runtime was O of n squared. Oh, per iteration. How many iterations do I need? That's a little bit hard to decide because it depends on the problem and the-- but so that's why we only compare per iteration so far. Maybe this is just demonstrating a lack of knowledge of runtime on my part. If each calculation of the kernel function has that runtime, the total is O of n squared d. So is the total runtime going to be O of n squared plus O of n squared d times-- Oh, I think I see what you mean. So if you really care about the total, the final, final runtime, I think you would call it something like n squared d plus O of n squared times T, where T is the number of iterations. But what is T? The reason why I don't discuss it like that is just because T is a little bit tricky. It's a little bit more problem-dependent. So that's why. And yeah, but what's a good guess for T? It really depends on the problem. Sometimes it's pretty small. Sometimes it could be a bit larger. So suppose you ignore the preprocessing time. Then you can compare-- roughly speaking, can compare the per-iteration runtime. So maybe for simplicity-- so there's no way to compare everything exactly unless you have more specifications of the problem. But roughly speaking, I think the idea is that here there's no p showing up. So it has to be better. In some sense, it's pretty easy to be better when-- so for example, suppose p is really, really large. So whatever you do here, it probably is going to be better than something that has p in it. So right. So I think-- and this observation that you only have to compute the inner products of the features, it sometimes has more profound implications. The thing is that when we realize that this inner product is the only thing we care about, then you start to wonder whether you really have to think about what phi is. So maybe you just have to start with the kernel. And then, as for the phi-- so basically, here is a change of mindset that researchers have, in some sense, done for this kernel method. So we started with the phi. And we defined the kernel. But you can also start with the kernel. So you say I'm going to define a function like this. And then, as long as there exists a phi, then you are done because you don't have to know what phi is, right? So you only have to know the kernel is an inner product of something. But do you really have to care about what phi is? Anyway, it's not implemented. It's not used at all in your algorithm, right? So roughly speaking, I think one way to do this is you just define k and you just forget about phi. Question? So we have n data points. And we have [INAUDIBLE] mapped into p-dimensional space. This p is [INAUDIBLE]. So in this case, aren't we overfitting? So yeah. Yeah, yeah, I think that's a great question. You don't necessarily have to overfit. But this is-- the short answer is that you don't necessarily have to overfit because-- OK, I guess I'm using language that we haven't discussed in this course. The norm of your solution may not be that big even though the number of parameters is a lot. But you can have a small enough solution. And that still can help you generalize.
But this is something that I don't think we are going to discuss in very much detail in this course, even though we're going to touch on it a little bit, but not much. OK, so basically, let me continue with this change of mindset in some sense. So what people do is that you just forget about phi. And you just only work with the kernel because in all the algorithms, you don't even have to know what phi is. So basically, one thing at least that is tempting to do is that you can just define a k. And then you just run this algorithm because after you define a k, you can run this algorithm, right? But of course, you cannot just use an arbitrary k because if you use an arbitrary k, then your algorithm doesn't have an interpretation. You don't know what you are really doing. It's just an algorithm, right? So what you really want is that you want a k that satisfies this for some phi. But you don't have to know what exactly phi is. So that means that you have to understand in what cases you can define a k such that the algorithm still makes sense. So basically, people call k a valid kernel if there exists a function phi such that k of x comma z is equal to this inner product. So basically, you can just design any function k as long as you guarantee the existence of phi. As long as you guarantee that this is a valid kernel, then you can try to run that algorithm. And you know your algorithm is really just doing a linear model on top of the feature phi. But you don't have to know what phi is. [INAUDIBLE] don't we need phi because once we get the final data, we apply phi to it, that phi version of the data? And then we get the [INAUDIBLE]. That's a fantastic question. That's great, so. Yeah, I think I missed a small part. [CHUCKLES] I forgot. My bad. But yeah, this is exactly what-- let me talk about this. So at test time, you also don't need the phi. That's the thing. So the test time-- by test time, I mean, if you give me a new data point x, so given a new data point x, the question is how to compute this thing. And the question was that it sounds like you have to compute phi and the theta from the beta. And then you compute this. But actually, you don't have to because you can just plug in the formula, so theta transpose phi of x. You use the representation as much as possible and use math. So you try to replace theta by this linear combination, beta i times phi of x i transpose phi of x. And now you regroup. You find that this is still about the inner product of two things. So this becomes the sum of beta i times the kernel function applied on x i and x. So if your kernel function can be evaluated very fast, then you don't have to know what phi is. That's a good question. Thanks for reminding me. Right, so if you know k, you can do the training. You can do the test. And the only thing you have to guarantee is that this k is a valid kernel so that you are doing something sensible. By doing something sensible, it means that if you know it's a valid kernel, you know that you are doing linear regression with the phi as the feature. So in some sense, designing a good phi, a good feature, is almost the same as designing a good k because, anyway, you don't know whether phi is good or not, right? So probably, you should just go directly to designing k. And as long as k is a valid kernel, you just run it and see whether it's good or not. Questions? [INAUDIBLE], so I guess x i is the-- what is x i? Are the [INAUDIBLE] samples training samples or testing samples?
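The test-time computation just described is equally short in code — a sketch, assuming the beta returned by the training loop above (the function name is mine):

```python
def kernel_regression_predict(X_train, beta, kernel, x_new):
    """h(x) = theta^T phi(x) = sum_i beta_i * K(x_i, x); phi never appears."""
    return sum(b * kernel(xi, x_new) for b, xi in zip(beta, X_train))
```

Note that, as the Q&A here points out, the training examples themselves must be kept around at test time — beta alone is not enough.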
These are the training samples. Yes, that's a great question. So the x i's are still the training samples. And x is a test example. So it's interesting that at test time, you still have to look at the training examples to do this test. It's not like you just have to remember the-- [INAUDIBLE] Right, right. So like in a typical case, you just have to store theta. And then you can forget about the training set. But now you cannot forget the training set. Right, so and I guess I have 15 minutes. So I'll briefly discuss how you know this is a valid kernel because this is-- if you want to use this equivalence and just forget about phi, you have to have some way to tell whether your kernel is valid in some sense, so. And there are some mathematical characterizations of when the kernel is valid. Let me write down the theorem. Let's see. Which board did I-- which one should I erase? Maybe this one. So there's a necessary and sufficient condition. So a necessary condition-- so if k is valid, this implies that-- OK, maybe let me first define some notation. So in the literature, you can define the following. Suppose you have x1 up to xn. These are n data points. You can define the so-called-- which uses the same thing, the same notation. I don't know why people do this, but. So now, with abuse of notation, you define a matrix K and call it the kernel matrix. And this matrix is an n-by-n matrix, where K ij is equal to-- this k is the kernel function applied on the pairs of data. I know this is all a bit confusing. I don't know why people keep doing this. On the left-hand side, this is a matrix. You define a matrix based on the function, the kernel function. So for every kernel function, you can use that to define a matrix. And this matrix is basically the evaluation of this kernel function at particular data points. I think, probably, we should just call this K another thing, whatever you call it, basically. It's just that there are two different things. But people seem to use the same notation for them. And what you know is that-- so maybe I'll just talk about the full condition given that we don't have a lot of time. So a necessary and sufficient condition is that k is a valid kernel function-- so that means I'm talking about a function, not a matrix-- and this is equivalent to: for every x1 up to xn, if you choose any n examples, the kernel matrix K-- this is a matrix defined like this-- is PSD, is positive semidefinite. One side of this claim is pretty easy to show because going from the valid kernel to this, I think it's really pretty much just a simple calculation. So you just plug in the definition that k is the inner product of two feature functions-- feature vectors. And then you can pretty much verify this matrix is PSD. The reverse direction is a little bit tricky. And also, my statement of the theorem, I think I'm missing some regularity conditions if you really care about the exact math. But up to a small, minor regularity condition-- you have to say this k is bounded or continuous, those kinds of things. Right, so basically, after you have the theorem, then the workflow is that-- so in some sense, the workflow is kind of like you first design a k, a kernel function, right? This is our kernel function k. And then you verify k is valid. And how do you verify? Maybe you can use the theorem above. But that doesn't really mean that it's easy because you still have to use this theorem in some way, try to prove that the kernel matrix is PSD.
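A small sketch of the necessary-condition side of this theorem in code: on any finite sample, the kernel matrix must be symmetric PSD. Passing this check on one sample does not prove validity, but failing it disproves it. Names and the numerical tolerance are my own choices:

```python
import numpy as np

def kernel_matrix_is_psd(kernel, X, tol=1e-8):
    """Check that the kernel matrix on the points X is symmetric and PSD."""
    n = X.shape[0]
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    if not np.allclose(K, K.T):
        return False
    return np.linalg.eigvalsh(K).min() >= -tol  # all eigenvalues numerically >= 0
```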
But there are some ways to verify this. Or you verify either by using the theorem or by constructing the explicit feature map that makes k valid. So you do something like this. And then you just run your algorithm. You run this algorithm. That's what we defined somewhere, I guess, here, with k. But here we are using-- so how do we get this algorithm? We're using the regression loss. We're using the square loss and a linear model, right? But you can also use other starting points. For example, suppose you start with logistic regression with the feature. And then you can do the same kind of operations as we have done and then arrive at a different algorithm in the kernel space. So in some sense, this is called the kernel trick, so basically-- a kernel trick. Or sometimes we call this kernelized. It means that you turn an algorithm-- so you turn an algorithm about phi of x into an algorithm about k, this kernel function. So you may start with logistic regression with phi of x as the feature. I didn't tell you how to do it, but you can do the same type of derivations as we have done today to get another algorithm that only uses k. And the algorithm would look something like this, but not exactly the same. It would be different. And if you can do this, then you say-- but not all the algorithms can be done in this way. Not all algorithms are amenable to this so-called kernel trick. So some algorithms are possible. Some algorithms are not possible. I think in the homework, we have this perceptron algorithm for homework 2, which can be turned into kernel-- can be kernelized. I guess I'll write kernelized. Or you can apply the kernel trick. In logistic regression, I think you can apply the kernel trick. But some of the other algorithms-- if you use L1 regularization, if you have heard of it. I know we haven't talked about L1 regularization. But some of the other algorithms cannot be kernelized just because your algorithm-- and also, maybe one thing-- in what cases can you kernelize? So the key is that everything can be written-- the only way you use the features is through the inner product between two data points, the features at two data points. If your algorithm somehow can be turned into a form such that the only thing you care about is this, then you can kernelize it, because you can just replace this by the kernel. But if your algorithm cannot be written in a way such that the entire algorithm only cares about this inner product, then you wouldn't be able to kernelize it. Right. OK, and then let me also briefly say a few words about some of the other kernels, right? So I only showed you one kernel. Some of the other kernels include-- there are actually not that many. Sometimes there are not that many general ones. But sometimes, for particular applications, you can design particular kernels that look like they are useful for you. So some of the kernels are something like, for example, we have defined this k of x, z, which is something like-- for example, this is something that is very similar to what we have. So k of x, z is equal to x inner product with z, plus c, to the power k. So for any choice of c greater than or equal to 0 and integer k, this is a kernel. And actually, you can write this as something like-- for k is 2-- I guess, I actually know-- so for k is 2, actually, I even know what the features are. The feature is something like this-- c, square root 2c x1, square root 2c x2, so on and so forth, and x1 squared up to xd squared. You don't have to know exactly what this is. I don't even remember.
But for this case, for k is 2, you can write explicitly what the features are. And for other cases, you can also write explicitly what they are. But you don't have to care that much about it. You only have to know the existence of it. And then when you run it, you just use this kernel. And another kernel is called the Gaussian kernel, which is something like the exponential of minus the norm of x minus z, squared, over 2 sigma squared. Here this is the 2-norm. And this one, you can also write as an inner product between phi of x and phi of z. But this phi will be much more complicated. Actually, you can construct-- sorry. You can construct an explicit phi such that this is true. But this construction will be very complicated. And also, another thing is that this has to-- so phi has to be infinite-dimensional. So sometimes phi has to be an infinite-dimensional vector. I know we didn't talk about what exactly an infinite-dimensional vector really means. But that's kind of the idea. So you have to really use a lot of dimensions to express this kernel. But you don't really have to know what exactly the phis are because when you run the algorithm, you just don't care about this, right? So in some sense, this is the way that we can deal with infinite-dimensional features. So if your features-- it's not a problem. As long as the inner product between two features can be computed efficiently, then the dimensionality of the features just doesn't matter. It can be very large. Or it could be infinite. It doesn't really matter. And another thing is that sometimes people think of these kernels as a similarity metric, where you can think of this function as a way to measure the similarity of x and z. Of course, that's just, in some sense, interpretation or intuition. So when x and z are similar, I think at least in this case, when x and z are similar, you will get a larger number. So that's why you can think of it as a similarity metric, so. All right, so maybe a final comment is, how do you think about this kernel trick in the modern era-- by modern, I mean in the last five or ten years. So you can see that the runtime here, it always depends on n squared. Actually, it's always n squared. So you save a lot in terms of the p dependency. You even get rid of the p dependency completely. But what you lose is that you get n squared instead of n. Recall that before, when you have the vanilla way to implement it, you get n times p. The p is there, but the power of n is only 1, right? But now the power on n is 2, which means that if your n is very large, then this square is actually a bad thing. n squared is much worse than n. So you lose a lot in terms of the dependency on n. And this is probably one of the reasons why the kernel method is not used a lot these days. That's one reason. I don't think it's the fundamental reason. I don't think it's the most important reason. But this is one of the reasons. So the reason is that the data in your data set-- at least in some applications, n could be a million. And when you have a million squared, that's 10 to the 12-- that's a trillion, right? So that's just prohibitive. So that's the problem. But of course, there are also other ways to speed this up a little bit if you really care about the runtime. So the second reason why the kernel method is not used as often as before is that all of this requires design-- you have to design your k or phi. So whatever you do, it's really a so-called handcrafted feature, right? You define the function phi yourself. You can choose them.
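The two standard kernels just named, written as plain functions — a sketch, with parameter names and defaults of my own choosing (and with c taken nonnegative so the polynomial kernel stays valid):

```python
import numpy as np

def poly_kernel(x, z, c=1.0, k=2):
    """Polynomial kernel (x·z + c)^k, valid for c >= 0 and integer k >= 1."""
    return (np.dot(x, z) + c) ** k

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel; the distance here is the 2-norm."""
    diff = x - z
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))
```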
But it's by human design. Or you design k. So at least one way to interpret what the neural network will do is that-- I know we didn't discuss neural networks. But I assume some of you know a little bit. But generally, in high level, at least one way to interpret why neural network is better is that you can say neural network is actually, in some sense, learning this function phi. So as you will see, I think when you have neural networks, in some sense, you can view this, the model, as something like theta transpose phi of x. But this phi of x is parameterized by-- maybe let's call them some parameter w. So you can view the neural network like this. So it's still a linear function on top of some feature. But this feature itself is learnable based on a data. So you have a learnable feature instead of a handcrafted feature. And that's one of the many intuitions why neural networks can do better. But we do see these very often. So these days, we have a lot of this kind of feature extractors that are learned by data and more and more of this. And if you use them, then you just-- it's much, much better than just a random-- much, much better than the kind of polynomial features we designed in this lecture. OK, I guess that's all I want to say for today. Thanks, |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_9_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, welcome back. So last time, I tried to ease us into a switch in thinking. Instead of trying to explicitly solve the Hamilton-Jacobi equation, I wanted to try thinking about a different class of algorithms which we called policy search, where you explicitly parameterize your control system with some parameters and then just search in the space of parameters for a good solution. So the idea was, let's go ahead and define some class of control systems by some parameter vector. And then our optimal control problem, which we called sort of minimizing J pi of x0, t, we could think of as explicitly minimizing over that vector alpha-- something that I'll shorthand, writing J of pi alpha as just J alpha of x0, t. And if you're willing to restrict your thinking to a single initial condition, then you can really just think of it as, I've got some old function J of alpha, and I want to minimize it with respect to alpha. That puts you squarely in the land of you can call fmin. In MATLAB, you can do sort of any old optimization on it. So today, I said the lecture is called Trajectory Optimization. I tried to point out that thinking about this policy search, that could be a general thing. That could encompass feedback policies. For instance, this could be parameterized by, say, u equals negative Kx-- if K is filled out with alphas, that's OK. And I also said it could be open-loop trajectories, right? In general, I could just ignore x, and it could just be-- let me see how I wrote it last time. It could just be, let's say, that u at time t is just alpha of n, where n equals some floor of t over dt. I'm not sure that's the best way to write it, but that's, I think, a clean way to write it. So what does that mean? That means that my control policy over time is just a set of-- it's a zero-order hold trajectory, where each of these bins is dt long. And this one is alpha 1, this one is alpha 2, and so on. So if I'm willing to make some sort of simple tape of trajectories that are parameterized by these alphas-- and naturally, you can do a cleaner job of doing this with splines or whatever. But let's think about the simple representation. Then solving this min alpha J alpha is equivalent to trying to find the open-loop trajectory that I'd like to follow which minimizes J, for instance, for a particular initial condition. So that class, this sort of open-loop family of control policies, is special enough that there's a lot of methods that are highly tuned for that open-loop trajectory optimization. So I want to talk about a few of them today. They're very powerful. They tend to scale to fairly high-dimensional systems. And I actually think they can be used as a part of a process to design good feedback controllers. But that's a longer story. Let me just tell you today how to solve open-loop trajectories. In the trajectory optimization world, there is roughly-- well, there's lots of ideas. And so many ideas, actually, that it's going to slip into Tuesday, I decided. But let me tell you about the first two of them today. I want to talk about first shooting methods and then direct collocation methods.
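To make that zero-order-hold parameterization concrete, here is a minimal Python sketch-- none of this is the course code; dt and the tape values are arbitrary:

```python
import numpy as np

def u_of_t(alpha, t, dt):
    # Zero-order-hold tape: u(t) = alpha[floor(t / dt)], so alpha[0]
    # is the action held on [0, dt), alpha[1] on [dt, 2*dt), and so on.
    n = min(int(np.floor(t / dt)), len(alpha) - 1)  # clamp at the last bin
    return alpha[n]

alpha = np.array([0.0, 1.0, -0.5, 0.2])   # a 4-bin open-loop tape
print(u_of_t(alpha, 0.25, dt=0.1))        # falls in the third bin: -0.5
```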
And we have lots of half-baked examples that were coded in the middle of the night, to hopefully bring the message through. OK, so what's a shooting method? You might be able to guess. Shooting methods are-- how many people know? How many people know what shooting methods are? Excellent, OK. How would you characterize a shooting method? What would you-- AUDIENCE: Can we integrate the states forward from the beginning time and the costates from the end time and hope they match up in the middle or something? RUSS TEDRAKE: Perfect, well, yeah. I mean, in general, actually, it's even simpler than that. In general, shooting methods are often the title for solving boundary value problems, which is what you just said. The name comes from boundary value problems. We can use them more generally, even if it's just an initial value problem. But the basic idea is exactly what you said. Let's just simulate the system with the parameters we have, and then work to change our parameters-- shoot at a little bit different place. So let's say we have a plot of t versus x for some very simple system. And I run my controller from initial conditions here, and I get some trajectory with alpha equals 1, let's say, or even alpha equals some long vector. Maybe it's 1, 2, 43, 6. I get that. And let's say my goal is to get my final conditions to be here. Then what I'm going to do is, I'm going to change alpha and run it again, shooting successively until I get to my desired final value. If I change alpha, then maybe my controller gets me here. And if I change it again, maybe it'll get me all the way up to the goal. I see you. AUDIENCE: What's the update? RUSS TEDRAKE: Yeah, OK, I'll tell you the update. But the big idea is, I'm going to try to solve a problem, for instance a boundary value problem, by starting with some initial conditions, simulating, and just changing the parameters. So you can imagine, if the thing you're trying to solve is not-- I mean, a boundary value problem is obviously one thing we can do. But maybe you also have a cost that you're trying to optimize over that. Then the basic idea still holds. So I told you the first way to start thinking about how to do that last time. We can evaluate J of alpha pretty easily. Let me stick with my superscript alpha notation. We can evaluate that with just forward simulation, right?
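In code, that evaluation is nothing more than a rollout. A sketch, assuming a double-integrator plant and a quadratic running cost-- both stand-ins; the lecture's demos use the pendulum and cartpole:

```python
import numpy as np

def J_of_alpha(alpha, x0, dt, Q, R, xg):
    # Shooting evaluation of J(alpha): play the tape forward from x0,
    # accumulating the running cost along the simulated trajectory.
    x = np.array(x0, dtype=float)
    J = 0.0
    for u in alpha:
        err = x - xg
        J += dt * (err @ Q @ err + R * u * u)
        x = x + dt * np.array([x[1], u])   # Euler step of q-double-dot = u
    return J

alpha = 0.1 * np.random.randn(40)          # a random initial tape
print(J_of_alpha(alpha, [-1.0, 0.0], 0.05,
                 np.diag([10.0, 1.0]), 1.0, np.zeros(2)))
```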
And if you want to know how to change alpha, then it helps to know the gradients-- partial J, partial alpha, evaluated at the current alpha. I told you one way to do that last time, with an adjoint method-- which I still can't resist calling back prop through time, because that's-- I learned it first as a neural networks guy. And there's a second way to do it, which in some cases is no less efficient, but is certainly easier to derive and easier to code. So I want to do that, to make sure we have the intuition about it too. It's also useful. Again, this is the neural network name for it-- there's probably a good name from more standard optimization theory. But in neural networks, people call it real-time recurrent learning, which is RTRL. And this is BPTT. So how do we compute the gradient of J with respect to alpha? The adjoint method-- I told you that if you simulate the system forward in time, you get the sort of forward equation. You figure out J. If you then simulate the adjoint equation backwards in time-- I called it y; y dot is some function going backwards. You can interpret y as being the sensitivity of changing-- so let me just write down the form of it quickly. So the forward equation, remember, was x dot equals f of x going forward. Did I actually write down the right equation for you going backwards? And then negative y dot is Fx times y minus Gx transpose-- where those are the big gradient matrices I wrote down last time-- going backwards. And then the update gives you that partial J, partial alpha is just a simple integral now. So this backwards equation, which happens to be the adjoint equation we saw in Pontryagin's minimum principle-- y has an interpretation as the sensitivity of the cost to changing x. y at some time t-- y at time 3, say. It's the same size, the same dimension as x, this y variable. It's a column vector just like x. It has the interpretation that it's the sensitivity of J to changing x at that time. So you compute forward, then you compute back the gradient of J with respect to x of t. And then, knowing that, you can simply compute the full gradient with respect to the alpha. And that took a little bit of derivation through the Lagrange multipliers. RTRL is actually even simpler. Is it OK if that wall disappears for a minute? We wrote that last time, right? I'm just going to go to my third wall here. This time, I'm only going to simulate forward in time, which is why it's called real-time recurrent learning. Because some people don't like having to simulate forward to capital T and then simulate all the way back to make an update. That's sort of-- you have to go all the way to the end of time in order to make your update at time 2. It's more appealing if you can make your update at time 2 by just thinking about time 0 to 2. So you can do it in a forward pass only. The name came from, maybe, that at one point in time, someone thought maybe this is what the brain is doing. Because people thought the brain can't be doing these backward passes efficiently. So maybe real-time recurrent learning is what the brain is doing. But I don't think people really think that anymore. There was one paper that thought that, which is a nice paper, but-- So let me just show you RTRL, because it turns out to be maybe even the simpler thing. So I have J of alpha, starting from x0 at time 0, as the integral from 0 to T. Oops, I did that again. Let me come up with a new working variable. The previous one, y, was useful coming backwards. If we're going to go forward, let me define a matrix, P, where the ij-th element is partial x i, partial alpha j. If I have that and I want to take gradients with respect to this, then I can do it pretty easily, actually. If I want to do partial J, partial alpha, I can go inside the integral. It's just dt. I'm just using my chain rule here. I get partial G, partial x; partial x, partial alpha-- these are x at some time t-- plus partial G, partial u at time t; partial pi, partial alpha. And in general, if I have a feedback term in my policy, I've also got to worry about partial G, partial u; partial pi, partial x; partial x, partial alpha. It's just a chain rule derivative of this. Do you agree? It turns out, using the matrices we had used before, I can write that very simply as the integral from 0 to T of Gx-- the big derivative with respect to x-- times P, this matrix here, plus the big derivative of G with respect to alpha. If we know partial x of t, partial alpha, then it's easy. So how do we get partial x of t, partial alpha? Well, that's easy too. Let's look at the forward equation here, x dot equals f of x, u, where u is, in this case, pi alpha of x of t. So let's look at how this changes with alpha. Let's take the derivative with respect to alpha.
I'll write it even more cleanly: partial f, partial x, plus partial f, partial u, partial pi, partial x, all times P-- which is partial x, partial alpha. Let me write it as-- everybody agree with that chain rule? No? OK, ask me. Yeah. AUDIENCE: So there you have G with respect to x and then P [INAUDIBLE] And then we have G of alpha, which would be the second variable. What happened in the third one? RUSS TEDRAKE: G of x is actually partial G, partial x, plus partial G, partial u, partial pi, partial x. AUDIENCE: Oh, I see. RUSS TEDRAKE: That's the big gradient with respect to x. AUDIENCE: And that I on the bottom is J. Is that correct? [INAUDIBLE] RUSS TEDRAKE: This is now the matrix P. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Oh, thank you. That should be a J on the bottom. Good, thank you. Please do catch me on those things. Yeah, thank you. So then, are we happy with this? So this is pretty simple too. Now I get an equation forward in the gradients: P dot is my big F of x-- which is the direct and the indirect gradient with respect to x-- times P, plus the gradient of f with respect to alpha. And that's it. I'm almost waiting for more to do, right? So if you want to go forward in time, then if you're willing to keep around this extra term, partial x, partial alpha, then as you move forward in time, you can build up your gradient, partial J, partial alpha. And by the time you get to the end of time, you don't have to go backwards again. You know what the total derivative is. So why would you use this versus the other method? Does everybody agree with that? I don't see big smiles, but this is satisfying and simple-- smile. The cost of this is carrying around this matrix. So potentially, that could be big. If you have a lot of parameters-- let's say I have 100 parameters in my control system, or 10,000 parameters in my control system-- then you're actually integrating forward a matrix equation that could be pretty big. It's something that's the dimension of x by the number of parameters. So that's really the only problem with it. The back prop through time is only carrying around this y, which is the size of x. But to do this sort of nice forward-backward update, you'd better be able to remember the trajectory that x has taken over time. So for very long trajectories, it might not be much more efficient to do this. It's just a trade-off. Some problems are actually quite nicely done in RTRL. Some are nicely done with back prop through time. The reason back prop through time is so beautiful and clean is that-- so remember, your goal here is to compute a vector: partial J, where J is a scalar, with respect to a vector of parameters. Here you have to carry around a matrix, partial x partial alpha, forward to do that. The back prop through time takes advantage of the fact that at the end of time, everything collapses to a scalar again. And that's why it only has to sort of carry backwards-- if you're willing to go all the way to the end of time, you can remember only the effect on that scalar value, J, with respect to the x's. So that's why you're allowed to carry around less of this. But it involves going forward and backwards. Is that intuitive? Yeah? What can I say more?
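Here's what that forward-only gradient pass looks like in code-- a minimal RTRL sketch for the open-loop double-integrator tape, under the same illustrative assumptions as the rollout above (Euler integration, quadratic running cost); this is not the course's implementation:

```python
import numpy as np

def rtrl_gradient(alpha, x0, dt, Q, R, xg):
    # Carry P = dx/dalpha (an n_x-by-n_alpha matrix) forward alongside
    # the state, accumulating dJ/dalpha as you go -- no backward pass.
    n, m = 2, len(alpha)
    x = np.array(x0, dtype=float)
    P = np.zeros((n, m))                       # dx(0)/dalpha = 0
    dJ = np.zeros(m)
    Fx = np.array([[0.0, 1.0], [0.0, 0.0]])    # df/dx for q-double-dot = u
    Fu = np.array([0.0, 1.0])                  # df/du
    for k, u in enumerate(alpha):
        err = x - xg
        dJ += dt * (2.0 * Q @ err) @ P         # dG/dx, indirect through P
        dJ[k] += dt * 2.0 * R * u              # dG/du, direct in alpha_k
        Fa = np.zeros((n, m))
        Fa[:, k] = Fu                          # df/dalpha_k = df/du this step
        P = P + dt * (Fx @ P + Fa)             # the P-dot equation, Euler
        x = x + dt * np.array([x[1], u])
    return dJ
```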
There are various other reasons why you might prefer one or the other. So for instance, let's say you have a final boundary condition-- say you want to solve this constrained problem: minimize this function subject to the constraint that x of capital T is my goal. You can write that constraint into either of them. I find it more natural to write it into this RTRL form, just because you know explicitly partial x, partial alpha. So if you want to compute, at the end, your constraint derivatives-- saying, what should I have done? How should I have changed alpha in order to enforce that constraint?-- then it's actually a simple function of P. So maybe I'm not giving you a silver bullet by telling you one way or the other. But depending on exactly the way you want to use them, sometimes one is more useful than the other. So in the simulations I'm about to show you, I'm going to actually use RTRL, just because it's trivial to code. But the big idea here is really just that I'm doing a shooting method. I'm simulating this thing forward. And I'm trying to compute what's the change in my scalar cost with respect to a change in my parameters, so that I can then update my parameters. And last time, we talked about multiple ways to do that. You might do that with a simple gradient descent algorithm. You could do gradient descent. You could say that alpha at the n plus 1 step is just alpha n minus some learning rate, eta, times partial J, partial alpha. And I argued last time that you can do better than that with sequential quadratic programming-- to the point where SQP methods are so robust that you just find something like SNOPT, you download it, and you use it. And I'll try to convince you, as we run some of these things, that apart from installing a package, which takes a few minutes, once you do it, you'll never go back to this. I'll tell you one reason right off the bat: choosing the learning rate is a pain in the butt. You never have to do it again. So? Let's see some examples. Let me do the pendulum with SNOPT. Using SNOPT is just-- all you do is you give it a function which can compute J and the derivatives of J. SNOPT stands for sparse nonlinear optimizer-- or optimization. And in a lot of problems, many elements of this vector, partial J partial alpha, are 0. We're going to see that in our direct collocation methods-- a lot of those gradients are 0. In the one I just told you, this is typically not the case-- typically all of those elements of that vector are non-zero, which means I'm not actually getting an advantage by using the sparse solver. NPSOL is the non-sparse version. But I don't think it's much worse, if any worse-- just for those of you that are trying to figure out which package you want to convince your PI to buy. I think SNOPT is normally pretty good. And there's a student version that we can have you download that'll do small problems. So let's do the pendulum. My cost function here is just an LQR cost. I have J as the integral from 0 to capital T of x minus the x goal, transpose, Q, x minus the x goal, plus u transpose R u-- the whole thing, dt, right? I had to be careful about wrapping. And I actually also have a final value cost, where Q is 10, 1; Qf is 100 times that or something; and R is 100. So it's going to reward the thing for getting to the top. Let's see what it does. I'm going to plot the trajectory for every step of the optimization. So it's not quite doing simple gradient descent. It's now going to do a sequential quadratic programming update, where it estimates the quadratic bowl and tries to jump right to the minimum of the bowl. So it's going to be a little bit more jumpy, as you see it, but it's fast. It's finishing the last optimization and then, boop, right to the top.
AUDIENCE: I have a question. So how is this trajectory parameterized? Is it just open-loop? RUSS TEDRAKE: It's exactly this, the floor of t over dt. My sloppy notation was because it's MATLAB notation. AUDIENCE: So when they say [INAUDIBLE]. RUSS TEDRAKE: That's the open-loop policy, which had u at time t was just-- I do a floor, and I go to the n-th index in alpha. So alpha 1 is my control action from 0 to dt. AUDIENCE: So how many parameters in this case? [INAUDIBLE] RUSS TEDRAKE: Good. So I did two seconds covered by 40 bins. I bet if I do 20 bins, it's OK. It may be a little less faithful to the-- less smooth. But it works out. So please realize that like 60% of that time was just drawing those plots to the screen, easily, right? Especially when it's reshaping the screen and stuff like that. That's wasted time. So hey, we could do the same thing on the cartpole. AUDIENCE: Can we do this without [INAUDIBLE]? RUSS TEDRAKE: Good question. Yes. So yes, it's quite simple. In direct collocation, it's really simple. So actually, my code does it for direct collocation and doesn't do it for the gradient. But it's actually quite fine. You just have to have T as one of your parameters tucked into alpha and be able to compute partial J, partial T, which is not very hard, actually. You just have to figure out how this function changes when you take a derivative with respect to T. And it's just-- it's like an x dot times the quantity J at T. It's not too bad. If you can take that gradient, you can optimize with respect to it. So what about for the cartpole? Well-- oh, I forgot, I took off the zooming. So I gave it a fixed axis. It's going to make a liar out of me. This is the slowest I've seen. There it is. Let me do that again. It's certainly more impressive than that. It's starting from random initial tapes, by the way. And every time I've run it for the cartpole, it comes up with the same solution. Come on. There we go. All right, not quite as impressive as it was in the lab, but that's still pretty good. If I turn off drawing, I bet it's a lot faster. So I turned off drawing-- there you go. I think my computer is slower on battery power too. That's probably-- I'm disappointed. Oh well, it's still pretty fast. AUDIENCE: So I was just wondering, if you simulated longer, would it stay at the top, or would it fall? RUSS TEDRAKE: So I actually put a final value constraint in on that. So it actually gets to 0, 0. So because I'm simulating, it probably would stay up, right? But I think the natural thing to do would be to drop an LQR controller in at the top, for instance. And in fact, we're going to talk on Tuesday about how to LQR-stabilize that whole trajectory. Because for the most part, I think open-loop is just the first piece of the-- just is one of the tools. Good. So what happens if we did-- you can do the acrobot too. I'll do the acrobot in a second here. But I also have the sort of simple gradient descent version in here-- if I just did my alpha update, alpha minus eta times dJ d alpha, in there. Let's see how that does. Was that faster? Let's do it. Let me try that again. That's not running. I didn't save it. I knew that was too good to be true. OK, my etas are too big. I probably have to change my learning rate. Thereby confirming my complaint that-- OK, so it works. But never do it again; you don't need to. Just download SNOPT. It'll get there eventually. You can see the errors going down. And if I set my learning rate properly, it'll go down pretty fast.
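The update he's running is the one-liner below-- a sketch only, where eta and the iteration count are exactly the knobs that have to be hand-tuned, which is the complaint:

```python
import numpy as np

def gradient_descent(alpha0, grad_fn, eta=1e-3, iters=1000):
    # alpha_{n+1} = alpha_n - eta * dJ/dalpha. Too-large etas diverge
    # (the "etas are too big" failure above); too-small ones crawl.
    alpha = np.array(alpha0, dtype=float)
    for _ in range(iters):
        alpha = alpha - eta * grad_fn(alpha)
    return alpha

# e.g., paired with the RTRL sketch from earlier:
# alpha = gradient_descent(0.1 * np.random.randn(40),
#                          lambda a: rtrl_gradient(a, [-1.0, 0.0], 0.05,
#                                                  np.diag([10.0, 1.0]), 1.0,
#                                                  np.zeros(2)))
```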
But not fast enough for me to be patient. And then, here's the SNOPT version again. Now the pendulum is very fast. Good. Let me do direct collocation before we get too mystified by the simulations. There's another idea. So shooting methods are certainly subject to local minima. And I've got an example that I'll show you in a few minutes that I hope will make it clear why they can be subject to local minima. Something that people do sometimes is multiple shooting methods. Sometimes there's even numerical sensitivity, sort of integrating this thing for such a long time. So a lot of times, people will define some breakpoint in the middle of there, some artificial breakpoint in the middle of their trajectory, and say, I'm going to optimize-- first try to get me to this point. And then I'll simulate from some other point to here, and then use a constraint in the optimization to try to make that residual go to 0, if that makes sense. I'm just not going to talk about multiple shooting. But just know that there's a version that people use often, which are the multiple shooting methods. To some extent, direct collocation is maybe the extreme of that. I told you there's a lot of good reasons to use SNOPT, or some SQP. So first of all, why use SNOPT? No learning rate tweaking-- that's a big one for me. It's often faster convergence, because you are doing big steps. You can sometimes jump over small local minima. But there's a big one in there that's not on the list yet. What's the big one I'm going to say? What's perhaps the best reason, I think, to use SNOPT? AUDIENCE: Fewer constraints? RUSS TEDRAKE: Good. It's easy to add constraints. Because of the way these sequential programs are solved, with these interior point methods and things like this, they're very efficient at handling constraints. So in my pendulum swing-up on the simple gradient descent, I didn't actually have a final constraint on getting to the top; in the SNOPT version, it's just trivial to add that. I can put bounds on my actions very easily. I could say, do gradient descent, but never let u at time t be bigger than 5. I can even put constraints on the trajectory if I wanted to. So because of the power of nonlinear optimization to handle constraints, people came up with a different way to hand the optimal control problem to an SQP method that exploits those constraint-solving abilities a little bit more explicitly. And that's the direct collocation methods. AUDIENCE: Can I ask a couple of questions? When we were talking about this previous case that you showed, isn't just providing the [INAUDIBLE] function, providing the Q's and R's, sufficient to actually get [INAUDIBLE]? So if we add a constraint on top of it, it would be [INAUDIBLE] on top and sort of like a 2 with respect to our method? Because the goal we have essentially is to maximize or minimize the [INAUDIBLE] over the trajectory. We can put some content-- like, it would be more information. Like, for example, I wanted to reach this state or that state for sure. We [INAUDIBLE]. RUSS TEDRAKE: So I think that's a very RL way to think about it-- not cheating. I mean, cheating is good. If you can hand more information to your algorithm, do it. Don't worry about cheating. But no, I agree. So the question was, is it fair to give it a final value constraint, or am I comparing apples and oranges, roughly, right? Like if I say one is not using the final value constraint and the other one is.
So in my opinion, the goal is actually to get there at time T. The optimal control problem I'd like to solve is minimize something-- even minimizing just the integral of u transpose R u, dt-- subject to x of capital T is my goal. That might be my favorite way to write it down. And then the question is just, what methods can I use to solve that? So the opposite view of the world here, maybe, is that because a lot of the methods don't handle these constraints explicitly, I'm stuck writing down a cost function, which is x transpose Q x plus u transpose R u, even if that's not explicitly what I want. Or maybe I should say the closest analogy is if I have a final value cost, and maybe I make Qf really big. The only question is what you really want to do. In most cases, I really want to do that. So I'm quite happy to use solvers which could do either case. I think a more powerful solver is one that can handle either case. AUDIENCE: The other question is, if we want to solve something which takes a lot of time-- like, T is relatively big-- it seems that this open-loop policy thing that you're following has one parameter per time step. RUSS TEDRAKE: It could have a lot of parameters. Good. AUDIENCE: But is it possible to just describe the whole policy in a very limited space, with very few parameters, and solve for that? RUSS TEDRAKE: So I tried to be careful to write down the equations. So the question was-- can people hear the question or not? The question was roughly that, if we're worried about a problem with a very long horizon time, it seems that I might have to have a very large list of parameters to make a tape that's that long. And so aren't the algorithms rather inefficient there? Couldn't I do better by writing down maybe a feedback policy? That's often the case-- feedback policies can be more compact. So I tried to write down all these equations as if there was some dependence on x in your policy too. So the equations will be the same if you do that version. The only thing that I don't handle nicely in the things I'm throwing up on the board is that I'm always simulating from the same x0. So I'm really explicitly optimizing, even if I optimize a feedback controller from a single initial condition, its performance from a single initial condition. So you can quite easily make a different cost function-- let's say I want J to be the sum of my performance over initial conditions k, through 100 of them. And this would be, let's say, the sum of J of xk, 0, or something like this. Maybe I could start it from 100 of my favorite initial conditions. And that would try to optimize a feedback policy perhaps better. But the only thing that's not nicely addressed, I think, is choosing your initial conditions in a nice way. DP-- dynamic programming-- handles that beautifully. And these things are much more local methods. So they have to be in there. But I absolutely agree. Oftentimes, the open-loop tapes are not a particularly sparse way to parameterize a policy. Good.
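That favorite formulation-- minimum effort subject to a hard final-value constraint-- hands directly to an SQP solver. A sketch using scipy's SLSQP as a freely available stand-in for SNOPT, with the toy double integrator again standing in for the real plants:

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.05, 40
x0, xg = np.array([-1.0, 0.0]), np.array([0.0, 0.0])

def final_state(alpha):
    # shooting: simulate the tape forward and return x(T)
    x = x0.copy()
    for u in alpha:
        x = x + dt * np.array([x[1], u])
    return x

cost = lambda a: dt * np.sum(a ** 2)                 # integral of u' R u, R = 1
goal = {"type": "eq", "fun": lambda a: final_state(a) - xg}
res = minimize(cost, np.zeros(N), constraints=[goal], method="SLSQP")
print(final_state(res.x))                            # x(T) lands on the goal
```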
So the direct collocation methods, like I said, more explicitly-- they're even more in the sense of open-loop policies versus feedback policies, actually. But they also more explicitly add these constraints. So here's the idea. Let's make my alpha the vector that I'm trying to optimize over-- a list of, can I call it u0, u1, u2-- is that another reasonable way to describe what I've already done?-- up to u capital N. I've got a list of control actions. But now I'm also going to augment my parameter vector with the state vector. So I'm just going to make an even bigger parameter vector. One of the reasons to do that is, I can now evaluate J of alpha, which is this integral from 0 to T of G of x-- maybe I even approximate this with a discretization, right? So it could have a dt in there or not; it doesn't matter to me. If I have u and x and all these things directly in my parameter vector, I don't actually need to simulate in order to evaluate that. I can just evaluate it immediately. I have x and I have u. The only problem is, how do I pick alpha so that x and u are consistent? It had better be the case that the integration of x0, with u0 applied, gets me to x1. So instead of having that sort of implicit in my equations, let's make it an explicit constraint. So let's do this subject to the constraint that x of N plus 1-- actually, lots of constraints, so it's a list of constraints. x1 had better be f of x0, u0 times dt, let's say. It had better be equal to 0, and so on, then x2-- right? So if I'm willing to add representation here, I can actually evaluate my cost function without explicitly simulating. That's cool. I can take gradients very quickly, because now it's just explicitly the gradients, partial G, partial x. Well, I know x, right? AUDIENCE: Wouldn't you also want x1 minus f of x0, u0, dt, minus x0? [INAUDIBLE] RUSS TEDRAKE: Yes, thank you. So plus x0-- thank you. Good, thank you. I had some weird mix of discrete time and continuous time floating around there. So if you parameterize it like this, you can very efficiently calculate the gradients. For instance, you can easily calculate the gradient with respect to time in this parameterization. And you just add a lot of constraints to your solver. And you're asking your solver to solve for a lot more points. It turns out that these solvers handle constraints very efficiently. In fact, it's oftentimes more efficient to add constraints to the system, because it reduces the search space. So this is actually quite fast. The only criticism of it-- well, there's another nice thing about it first; I'll show you what I mean by this. You can sort of initialize this in ways that hop out of other local minima. So let's say I just choose my initial conditions like in the previous simulations-- I always just chose u0 to uN to be some small random number. So the pendulum in the first case would just shake here a little bit, and then it quickly changed until it swung up to the top. I can pick x perhaps more intelligently. It won't have satisfied the constraints in the initial case. But I can choose an x. For the pendulum, let's say my initial guess at x would be a direct trajectory that goes straight up to the top. If I start searching now for alphas that minimize this cost and satisfy those constraints, it just puts me in a different sort of area of the search space. And it might actually help me find the swing-up policies. It's a very sort of heuristic thing to say. But it makes a big difference in practice, I find. So I think most people today actually use these direct collocation methods for trajectory optimization.
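A minimal sketch of that formulation-- decision variables are the tape and the states, and the dynamics become defect constraints-- again with SLSQP standing in for SNOPT and the double integrator standing in for the real plants:

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 20
x0, xg = np.array([-1.0, 0.0]), np.array([0.0, 0.0])

def unpack(z):
    # z = [u_0 .. u_{N-1}, x_1 .. x_N (flattened)]: actions AND states
    return z[:N], z[N:].reshape(N, 2)

def cost(z):
    u, x = unpack(z)
    # evaluated directly -- no simulation needed, since x lives in z
    return dt * (np.sum(x ** 2) + np.sum(u ** 2))

def defects(z):
    # dynamics as explicit equality constraints:
    # x_{k+1} - x_k - f(x_k, u_k) * dt = 0 (Euler, double integrator)
    u, x = unpack(z)
    xs = np.vstack([x0, x])
    f = np.stack([xs[:-1, 1], u], axis=1)
    return (xs[1:] - xs[:-1] - dt * f).ravel()

cons = [{"type": "eq", "fun": defects},
        {"type": "eq", "fun": lambda z: unpack(z)[1][-1] - xg}]
z0 = np.concatenate([np.zeros(N),
                     np.linspace(x0, xg, N).ravel()])  # straight-line state seed
res = minimize(cost, z0, constraints=cons, method="SLSQP")
```

Note the straight-line seed on the state variables: that's exactly the initialization trick he uses in the path-planning demo later-- it drops the search into a basin of your choosing.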
Yeah, Rick? AUDIENCE: [INAUDIBLE] change those parameters, the vector would change. RUSS TEDRAKE: So you're worried that this will change-- the constraint matrix would change? AUDIENCE: Well, aside of the alpha [INAUDIBLE] RUSS TEDRAKE: So I don't do it that way in my code. I say that this is valid from 0 to T over N, let's say. And this one is used from T over N to 2T over N, and so on. So it just stretches out. So the dt is not constant. I use that same action for longer if my T stretches out. And that keeps the parameter vector constant. But I think if you were to purchase a DIRCOL-- this is often shorthanded as DIRCOL, direct collocation, DIRCOL. And there was a DIRCOL package that you could get in FORTRAN 10 years ago or something that I think a lot of people used. And I think those do things like-- it stretches out time, and then if dt gets ridiculous, it adds some more points. And then a more polished software package would do these things like adding parameters and then reinitializing-- reseeding the optimization. Mine doesn't. Should I show you how it works? Is that the best thing to do here? It's a pretty simple idea. The part that I can't really express to you efficiently here is why this is something that the solvers can do very well. I can tell you that it's about how SQPs do interior point methods, but I don't want to get into that. So I think if you just sort of take it on faith that they're good at handling constraints, I think it's reasonable to think that maybe sending in an over-defined trajectory and allowing it to sort out the constraints is a reasonable thing to try. And in practice, let me tell you that it's pretty fast. So can I do the pendulum DIRCOL now? So that one was fast before. Now you notice the time horizon here-- 3.06? Yeah, so that's what it wanted to be doing. It liked 3.06 better than two. So maybe my other ones would work better if I put 3 in there. Can we do the-- someone called for the acrobot before-- DIRCOL on the acrobot. Oh, I should turn off plotting. Sorry. Let's see what happens. It's also got this final value constraint. That's why it quickly got to the goal, to satisfy that constraint. And now it's making sure that all the dynamics constraints along these trajectories are satisfied. And then it's just optimizing within that constraint manifold. I didn't really mean to-- I'm afraid to hit Control-C. It might crash. That's the one thing about SNOPT being a MEX package calling FORTRAN-- stop. Now my GUI is gone and my code is gone. Great. OK, good. And it got to the top. No, that's the pendulum, sorry. That's cheating. So turning off printing, just run DIRCOL for a second here. AUDIENCE: How would these methods be extended to the stochastic case? RUSS TEDRAKE: I think they're heavily tuned to the deterministic case. I won't make a blanket statement saying they can't be extended. But even the feedback-- I really think direct collocation in particular feels very specialized for open-loop trajectory optimization. This is slower than I remember. It gets there. Yeah, John. AUDIENCE: Is there a reason why you need to use [INAUDIBLE] vector and not feedback [INAUDIBLE]. You could evaluate the u given the x and the [INAUDIBLE]. RUSS TEDRAKE: I think you could do that. I think the key thing is parameterizing the policy as well as x. But I think you're right. If you did some handful of w's or alphas there-- sorry-- then I think you could probably do that too, yeah. I mean, implementing this constraint is the only real criticism that people have of collocation methods. Almost everybody uses them, it seems. The only criticism they have is that they have sort of this fixed-step integration in here. For the constraint to be satisfied-- it's hard to do sort of an ODE solver in that step, because you need to be able to take the gradients of your constraint. So they tend to be fixed-step integration routines, roughly.
And so the accuracy-- if people point to a problem with collocation methods, they uniformly say that they're not as accurate as the shooting methods, because you don't actually numerically simulate your system carefully. You've picked some time discretization of the system, and you get it right for that. But I don't think that's a big deal, actually. And if they're fast and they get out of local minima, then worst case, solve it this way and then do a little shooting method at the end to finish the optimization, if you like. Run it one more time. John and I were talking before class that there's really no reason why you couldn't compute the gradients of an ODE update inside here. Well, how would you do that? You'd do it exactly like we did the shooting method, right? To compute the gradients of your constraint, you could actually do the adjoint method or the RTRL method. And then that puts you somewhere in the land between direct collocation and multiple shooting. I swear that my laptop must be operating on half a brain right now, because this was much faster in lab. AUDIENCE: Do you have a power cord? RUSS TEDRAKE: I'm not connected to power. AUDIENCE: If you go to the battery, I guess it switches to energy saver. RUSS TEDRAKE: Yeah, you're right. I could probably turn it off right now. So that was a slightly different trajectory. That's just graphics, though. I'd be exceptionally embarrassed if I plugged it in or changed the power settings and it was still slow. So let me just say that it was faster in lab, and we'll leave it like that. So let me just make the point of-- do people understand how a problem like this-- we sort of saw a demonstration, if you could remember in your head what the acrobot did the first time and what the acrobot did the second time. They both got to the goal pretty well. They took slightly different trajectories, at least to my eye. So one of the complaints about any of these trajectory optimization methods is, they're only local methods. We're only going to find a local optimum in my optimization. For me-- I think about these problems a lot-- it's still not completely intuitive what a local minimum in these settings might be. There are some places where it could be pretty intuitive. So for the pendulum, you can imagine, if I found one policy that pumped up with one pump, you could imagine it might be hard to sort of get over the cost landscape to one that did two pumps to get up. That could be a case where a local minimum makes sense. I tried to come up with a little bit more obvious of an example here, relating to the original grid world stuff we did. So now, this isn't so much a dynamics problem. But I thought maybe a path-planning problem would make the point. So here's a random geometric landscape with Gaussian bumps that you try to avoid, or you incur cost. I'm actually plotting the cost landscape as a function of x. So you see there's a small hill, which is trying to take me down to the goal in red. Can I turn the lights all the way down for a minute here? It'll be dramatic this way. So there's a goal here in red. And let's say the initial conditions are over there in green. And your task is to take the system, which is x dot equals u-- where x is the xy position of this, and u is the velocity in x, the velocity in y-- so just a trivial dynamical system, but you're trying to find a path that gets you to the goal with minimal cost.
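A landscape like the one he's describing is easy to mock up, and it shows the basins directly-- everything in this sketch (bump placement, weights, path length) is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
centers = rng.uniform(-2.0, 2.0, size=(6, 2))     # Gaussian "mountains"

def path_cost(z, start, goal, N):
    # z holds N intermediate xy waypoints; pay for sitting on bumps
    # and for long, jagged paths.
    pts = np.vstack([start, z.reshape(N, 2), goal])
    bump = sum(np.sum(np.exp(-4.0 * np.sum((p - centers) ** 2, axis=1)))
               for p in pts)
    length = np.sum((pts[1:] - pts[:-1]) ** 2)
    return bump + 10.0 * length

start, goal, N = np.array([-2.0, 0.0]), np.array([2.0, 0.0]), 30
z0 = np.linspace(start, goal, N).ravel()          # straight-line seed
res = minimize(path_cost, z0, args=(start, goal, N))
# re-seeding z0 above versus below a mountain typically lands in a
# different local minimum -- the path-around-the-hill effect below
```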
So I did this example just because some people care about this kind of example, but also because I think it's sort of critically obvious how you can have local minima. If I get a trajectory that's on one side of the mountain, and maybe the globally optimal trajectory is on the other side of the mountain, it might be hard for me to get across the mountain to that trajectory. So let's just see that happen. So, do direct collocation here-- so what did I implement? OK, so I forgot to type-- so I just did direct collocation. And really quickly, it found this. So this is not the optimization software's fault. What if I do this? So it found some nice path through the foothills here to the goal. Yeah, please. AUDIENCE: Can you try to specify some of those-- since you have x parameterized, can you try to make it go through a particular set of bumps by specifying that? RUSS TEDRAKE: Exactly. So the reason I chose to do direct collocation for this is, my initial guess u was just some random vector, but my x was actually a direct line from start to the goal. AUDIENCE: So if you had drawn it as a line between those first two and then going around to [INAUDIBLE] RUSS TEDRAKE: So the one I thought to do was-- let's just do, just because it was easy to type, an initial x which just goes directly this way, and then see what happens. I think that's what I had here. So I change my x tape here-- it's now linspace, so it interpolates from x0 straight to the goal. But then the other one is just x1 straight across-- reading that code is not what you want to do in class here. But if I set the initial x tape to be that and I run it again-- it's doing its solving. It's properly doing it in some window. Oh, I turned off animation, didn't I? Oops, that was a failure. But it found a different path. But it actually told me, warning, exited. So I have to check page 19 of [INAUDIBLE] the paper to figure out what the heck exit 41 means. But it basically couldn't satisfy the constraints. So I bet if I just run it again with the random initial conditions, it'll be OK. But the point is exactly this. I still found one that just probably didn't satisfy some of the constraints at some small part of that trajectory here. It went the other way around the mountain. So there are local minima in these problems. In problems like this, it's completely obvious why there are local minima. In the acrobot and things like that, you will find that there are local minima. If you start with different random parameterizations, you'll find slightly different swing-up trajectories. So are local minima a killer? A lot of people say, well, that means these methods stink-- they're subject to local minima. I don't care if I'm in a local minimum, for the most part. I mean, if I was really going to have to walk around a mountain instead of walking through the mountains, then maybe I'd care. But if I'm doing an acrobot and it swings up like this instead of swings up like this, for the most part, I don't care. So although people talk about it a lot, I find in most of the problems I care about, local minima aren't that big of a deal. They exist. Sometimes they can upset your numerics. But as long as you get to the goal, I'm happy. Now, there are a couple other ideas in these trajectory optimizations. And on Tuesday-- I guess I'm going to wait till Tuesday now-- first of all, I'm going to tell you how to stabilize these trajectories. Because that's useless as it is right now.
If I just even simulated it with a different dt, it would probably fall or not get to the goal. But it turns out, even with a pretty coarse time step, if you stabilize the thing with feedback, then it works great. So I'll show you how to do the trajectory stabilization with an LQR method on Tuesday. And that's actually going to lead to another class of trajectory optimizers, which would be an iterative LQR method. And then, depending on how much I sleep this weekend, I was thinking about doing the discrete mechanics version of the trajectory optimizers on Tuesday too. We'll see. So we'll push the walking back until Thursday, just to complete the story about these trajectory solvers. Any questions? They're pretty good. They're pretty fast, especially if you are plugged into the wall. OK, see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_3_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, so last time we talked about non-linear dynamics. We drew phase plots. We talked about basins of attraction. We talked about fixed points. And we hinted at control, and I tried to motivate control not as some nice matrix manipulation of equations, but actually by thinking about phase plots and saying, you're going to move that phase plot a little bit. You're going to reshape it in order to bend the system to your will-- but just a little bend. You're only allowed a little bending in this class. OK, so today we're going to make good on that idea, but we're going to do it on an even simpler system first, just today. We're going to do it on the double integrator, which is q double dot equals u-- because here I can do everything analytically on the board. If you want a physical interpretation of that-- which I always like-- you can think of this as a brick of unit mass on ice, where you provide as a control input a force, like this. [INAUDIBLE] force equals u, and there's no friction, and mass equals 1. What we're going to try to do with this double integrator is, roughly, we're going to try to drive it to the origin. We're going to try to drive it to zero position-- I guess that's negative x in this picture-- and with 0 velocity. It turns out there's lots of ways to do that. And the goal here is to make you think about ways to do that that involve invoking optimality, because that's going to be our computational crutch for the rest of the term. OK. I've been trying to bring the tools from the different disciplines all together. So let me start by doing just a quick pole placement analysis, for those of you that don't think about poles and linear systems that much. So if I want to write a state space form of this equation-- again, I've always tried to use q just to be my coordinates, and I'll use x to be my state vector. So a state space form of this is going to use vector x to be, in this case, q and q dot. And that dynamics there is the simplest state space form you're going to see, but a state space linear equation will have the form x dot equals Ax plus Bu. In our case, it's going to be the trivial A equals 0, 1; 0, 0, times x, plus B equals 0; 1, times u. OK, it's not going to get easier than that, but we're going to use that form, because that's going to help. OK, our goal now is to design u. We want to come up with a control action u-- which you can think of as being a force on the brick, let's say-- which drives the system to 0. So in general, our goal is to design some feedback law-- I use pi for my control policies-- which is a function of x. Let's start by doing the linear thing. Let's start with considering controllers of the form u equals negative kx, where k is a matrix. Well, actually, what is k in this case? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: 1 by 2, right? So it's going to be k1, k2, times x, which is my q, q dot-- equivalent to saying negative k1 q minus k2 q dot. So many of you will recognize this as a proportional-derivative controller form. OK, so if I take this u equals negative kx and I start thinking about what that-- if I change k, what happens to my control system?
That's easy to do in linear systems. So if I stick that gain matrix in, then what I get is a closed-loop system, x dot equals A minus Bk, times x, which is just the system 0, 1; negative k1, negative k2, times x. OK, and if you've had a class on differential equations, you know how to solve that. The solution uses the eigenvalues of the system. You can quickly take the eigenvalues of that matrix. They work out to be lambda equals negative k2, plus or minus the square root of k2 squared minus 4 k1, all over 2, with eigenvectors-- v1 is this, v2 is this. That's just the eigenvalues and eigenvectors of this matrix. So what are the conditions on the eigenvalues to make sure the system's stable? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: [INAUDIBLE] both negative. Potentially, we also care about whether the system has any oscillations or not, which manifest themselves in whether the eigenvalues are complex. This is all things you've seen in plenty of classes, but the only way it's going to be complex is if this thing under the square root goes negative, right? OK. So we want a couple of things. We want both of these to be less than 0, which we can get pretty easily. And we want k2 squared to be bigger than 4k1. If k2 squared is bigger than 4k1, then the system is actually overdamped. If it equals 4k1, it's critically damped. And it's underdamped if it's less than 4k1. For stability, we want lambda 1 and 2 to be less than 0. OK, so just to connect this to the phase plots we were talking about yesterday-- you might have seen phase plots first in this context, in the linear systems context. The reason to do this eigenvalue decomposition is that you have these beautiful graphical interpretations of the solutions to the system. Let's choose a particular case. What did I pick? So let's say k1 is 1. That means I'm going to want to think about an overdamped system. I want k2 to be at least greater than 2. So I'm going to choose k2 equals 4. If I do that, then my eigenvalues work out to be negative 4 plus or minus the square root of 16 minus 4, all over 2, which is negative 2 plus or minus the square root of 3. The square root of 3 is about 1.75. So I get negative 0.25 and negative 3.75 for my eigenvalues. OK. And the eigenvectors are going to be of just this form, 1, lambda. So what that allows me to do is make the same state space plots we were making yesterday, where now I have q and q dot. And my first eigenvector is going to be 1, negative 0.25. I'll use these as quarters. So I go minus 0.25 [INAUDIBLE] 1 here, so I get a line that looks like this. That's v1. And I get a line that goes almost 1, negative 4-- so a little bit under that here. v2 is over here. OK. And the eigenvalues on this-- if you've seen these plots before, we typically [INAUDIBLE] They're both negative, so we're going to draw an arrow like this. Initial conditions that start on this will just get smaller. Initial conditions that start on this will also get smaller, but they can actually get smaller a lot quicker. Just to say a few of the subtleties: I chose an overdamped system so I don't have repeated eigenvalues, and so I don't have oscillations, because then I can make these same plots. But for the overdamped system, this is a great way to think about things. When I don't have repeated eigenvalues, any initial condition of the system can be written as a linear combination. That means that, when the system doesn't have repeated eigenvalues, the eigenvectors span the space.
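Those numbers are easy to check numerically-- a quick sketch (numpy, nothing from the course):

```python
import numpy as np

k1, k2 = 1.0, 4.0
A_cl = np.array([[0.0, 1.0], [-k1, -k2]])    # closed-loop A - B*k
lam, V = np.linalg.eig(A_cl)
print(lam)        # ~ -0.27 and -3.73 (the board's -0.25 / -3.75 round sqrt(3))
print(V / V[0])   # each column rescales to [1, lambda]: the lines on the board

# and the modal solution coming next: x(t) = a1 e^{l1 t} v1 + a2 e^{l2 t} v2
x0 = np.array([1.0, 1.0])
a = np.linalg.solve(V, x0)                   # expand x0 in the eigenvectors
x_of_t = lambda t: (V * np.exp(lam * t)) @ a
print(x_of_t(0.0))                           # recovers x0
```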
Any initial condition can be written as a linear combination of the two eigenvectors. And the dynamics from this point are just the exponential dynamics of the two components. I don't know-- tell me if you want me to say that again. It's not really the focus, but if you understand that, you've got the whole thing here. So what it means is you could take any initial condition-- it turns out, any initial condition, when I have eigenvectors which span the space. I'm guaranteed to have that if I have unique eigenvalues. Then I can write this as a combination of these vectors. I've got one component like this and one component like this. This initial condition-- if I say this is x0-- so I can say x0 is alpha 1 times v1 plus alpha 2 times v2. We know that initial conditions that are just on the line v1 go as e to the lambda 1 t, v1, so the whole system goes alpha 1, e to the lambda 1 t, v1, plus alpha 2, e to the lambda 2 t, v2-- and remember, I've got negative eigenvalues here. That's the great thing about all these linear systems. So what that means is, if I've drawn the eigenvectors, then I know exactly the entire phase plot of the system. So we're connecting back to the pendulum. There, I went through all the different places, I thought about what the contributions were, and mapped it out. Here I don't have to do that. I know that the component of this system along this eigenvector is going to decay quickly towards 0. And it's going to decay faster than this one, which means an initial condition like this is going to go like this. I should have used one of my many other colors to do that. Looks like a blue-- so trajectories from that initial condition are going to do that. And trajectories from this initial condition are going to-- what are they going to do, if I start over here? They're going to go mostly down, and eventually-- we think it's stable, so it's going to get to the origin. I can even do my filled-in circle there. When are they going to start bending towards the origin? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Later-- they have to pass this before-- they have to pass that and get to negative velocities before they can hook back in. So the trajectories look like this, and go in. Yeah? And likewise-- so all these trajectories-- you can map out the entire phase portrait of the space pretty quickly just by understanding the eigenvalues and eigenvectors. The same thing's true over here. OK, so there's another example of a phase plot that we have. In the linear systems, it works out to be clean. You can just do these eigenvalues, eigenvectors. OK, now, control-- we're allowed to change k1 and k2. Changing k1 and k2 is going to change the phase portrait. It's going to change those vectors. I want to change them to make it do whatever I want. The first discussion is, what do we want to make it do? Maybe even before that, I should observe that, without thinking about optimality at all, it would be easy to stop here. Because if I look at this carefully, as long as I choose k2 squared greater than 4 k1, I know I'm going to not oscillate. And I can just start driving k1-- and correspondingly, k2-- up as high as I like, and make the system get to the origin as fast as I want, and it won't oscillate. Why not just drive them all the way to infinity? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: You can't-- you don't have a motor to do that. That's the first unsatisfying thing-- absolutely. Probably there's some unmodeled thing.
Even if I did have a motor that could do that, there's probably some unmodeled things that I might excite and cause bad things to happen anyways. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: You could melt the ice and it'll break. That's right. That's right. I guess I could have said wheels, and then maybe they'd melt the tires. OK. And you can see that here, actually. Remember-- what does the unactuated phase plot of the system look like? I can just draw that. If u was just uniformly 0, if k1 and k2 were 0, what would the phase plot have looked like there? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: It would have just been x dot equals Ax, where A is this guy. So it wouldn't have been as interesting. Every point would have just had a vector like this. It would have been a little bigger with bigger velocities, but it would just be a flow like that, which I hope is what you'd expect it to do, since it's an integrator. Things are just going to drift off into the ether. OK. So if I consider-- I started with this, and I'm getting out things that look like this-- I'm already, in my unitless cartoon here, it sort of already looks like I'm using a lot of torque to do what I'm doing. I'm using a lot of force. I'm really significantly changing those dynamics in order to bend this thing to come around like that. That's OK, but we can do better. So today I want to use this system, which I think is quite easy to have strong intuition for, to start designing optimal feedback controllers. So let's address the we-don't-have-infinite-torque problem first. One more comment on this-- I didn't actually call them poles-- there's a pole placement version of this too. It's exactly the same thing. If you were to draw a root locus, what would the system look like? The typical root locus would be, you're multiplying the entire feedback by some linear term. You're not scaling them in some squared law, so what you get is, for very small feedback gains, you get oscillations. As you crank it up, the poles connect in the left half plane, and then they separate. And as I keep turning up my gains, one of them creeps towards the origin. The other one goes off really far, to infinity. So just for those of you who think about poles and zeros, this is exactly the same way to say that. I didn't do a root locus because I was changing two parameters, but it all connects. OK, so now, let's say I have a hard constraint on what u I can provide. Let's just say that I have an additional constraint that's, let's say, the absolute value of u has got to be less than 1. Well, that changes a lot of things. My linear system analysis is impoverished now. If you want a graphical version of what that's doing-- my zero input looked like that. I wanted to go like this with my linear controller, but maybe it's capped at something like this. OK, so what's that going to do to your system? If I just ran the policy u is some saturation, say, on negative kx-- I took my same linear feedback controller and I just said, if it's greater than 1, it's 1; if it's less than negative 1, it's negative 1-- I think you're still OK. Trajectories are still going to get to the origin. They might take fairly long routes to the origin. You're not going to lose stability in this case because of that, but it starts to feel like, man, I should really be thinking about those hard constraints when I design my controller.
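The saturated policy is one line. A sketch of the simulation (gains, time step, and initial condition are arbitrary):

```python
import numpy as np

k1, k2, dt = 1.0, 4.0, 0.01

def step(x):
    # same PD law, but clipped to the hard limit |u| <= 1
    u = np.clip(-k1 * x[0] - k2 * x[1], -1.0, 1.0)
    return x + dt * np.array([x[1], u])   # Euler step of q-double-dot = u

x = np.array([5.0, 0.0])
for _ in range(3000):                     # 30 seconds
    x = step(x)
print(x)   # still heads into the origin -- just by a longer route
```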
Let me sync up with my notes so I don't go too far afield. OK. Let's say my goal in life is to get to the origin as fast as possible in minimum time. But I'm subject to this constraint. So that's the famous minimum time problem-- subject to that constraint. OK. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yes. What do we want? Both the position and the velocity to be 0. Turns out you need this constraint for it to be a well-posed problem. If I didn't have constraints on u, then, like I said, I would just use as much u as possible. I would get there infinitely fast, and we haven't learned a whole lot. There are other ways to penalize u or something like that, but we're going to put a hard constraint on it here. OK, now, muster all your intuition about bricks and ice and tell me-- if I've got limited force to give and I want to get to the origin as fast as possible, what should I do? AUDIENCE: Bang-bang. RUSS TEDRAKE: Bang-bang. Good. He knows the answer. What should I do? People haven't thought about it and don't know bang-bang is. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Right. Right. So if I want to get there as fast as possible, I'm going to hit the accelerator, go as fast as I can until some critical point, where I'm going to hit the brakes. And I'm going to skid stop right into the goal. There's nothing better I can do than that. We're going to prove that, but I want to see-- that's a fairly complicated thing. It's something you can guess for the double integrator. You can't guess for a walking robot, for instance. But we want to get that out of some machinery that's going to be more general than double integrators. OK, so the proposition was bang-bang control. You might hear people casually say, bang-bang control's optimal, and that is-- if you have hard limits on your actuators, it's very common that the best thing to do is to be at those limits all the time. If that's the way you've defined the problem, bang-bang control solutions are pretty ubiquitous. They don't always work that well in real robots, because actuators don't like to produce zero-- infinite force and then-- or max force and then negative max force within a single time step. Good-- OK, so I think the only subtle part about it is figuring out when I need to switch from hitting the gas to hitting the brakes. So let's see if we can figure that out first. I think a pretty good way to do it is to think about what happens if you hit the brakes. And then you want to hit the brakes and arrive directly at the goal. There's only going to be a handful of states from which, if I was going at some-- if I was some position and some velocity and I hit the brakes full now, I'm going to land exactly at the goal. Let's see if we can figure out that set of states first. Let's think about the case where q is greater than 0 first. Just pick a side. So in that case, hitting the brakes-- is that positive 1 or negative 1? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Negative 1-- no, it's positive 1. You almost got me. If q is greater than 0, it's positive. q is greater than 0-- then u is positive. I want to be pushing back in the direction I'm already coming from, so u is positive 1. All right, so now, we're going to have some math ahead of us. See if we can integrate this equation. I can do that on the board for you. q dot of t-- I better get it right. [LAUGHTER] ut-- so in this case, it was 1-- plus q dot of 0. 
I'll just make it a little bit more [INAUDIBLE] In this case, it was 1. And q of t is-- sorry-- switch orders-- q0 plus q dot 0 t plus 1/2 ut squared. OK, I want to figure out-- AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Did I screw it up? What? AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Oh, sorry. Sorry. Thank you-- good. Thank you. OK, so let's figure out, if u is 1, what trajectories are going to get me so that q-- at some t final, q of t and q dot of t are 0-- simple enough-- little manipulation. So it turns out I'm going to solve for q0 and q dot 0. So q dot 0-- looks like that's going to be negative u t. It's a little bit weird, my notation here. I'm saying that the initial conditions are moving backwards. The equations are simple enough. I hope it's OK. And q0 had better be-- it turns out to be 1/2 ut squared. So q dot 0 is negative ut. Add those together. q0 is going to be 1/2 ut squared. If I solve for t-- solve out t-- in this case, u is 1, so t is just negative q dot 0. So q of 0 is just 1/2-- let's keep that-- 1/2 q dot 0 squared. If I plot that, what I've got in my state space-- q, q dot-- is a manifold of solutions, which starts at 0. And then I said that I did this for u is positive. And it goes like this. This one's not a solution. Where did I get that out? And one of my assumptions here when I inverted t or something that I-- that solution disappears. You can't have negative time. In fact, in my notes, I did it. I solved for the other t, which would have been better. Sorry. OK, so there's a line of solutions here-- which, if I started this q-- this is actually the positive q, negative velocities. I hit the brakes. I go coasting into the stop at the origin. Turns out, if I do-- if I think about the negative q case, I get a similar line-- similar curve. [INAUDIBLE] quadratic curve over here. You know what-- let me be a little bit more careful. Let me make that one pink, because this is the now u is negative 1 case. Good. We figured out the line of solutions where, if I hit the brakes, they get to the origin. Harness your intuition again. What do I do if I'm here? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Right. This was the stopping all the way to the goal. So pretty much, from anywhere else, I want to accelerate. So what does accelerating look like when I'm here? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: It's going to put me going up. And what happens is, any time I'm below this curve, I'm going to drive myself up. I can't go backwards like that and drive myself up, hit that, and ride it in. And if I'm above the curve, what do I do? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Have to overshoot a little bit-- I can't bend down more than this, so I'm going to ride it all the way over to here, connect up to this surface, and ride it in. And it turns out, any time I'm over here, the best thing to do is to-- did I get my colors wrong? Got my colors wrong-- let me fix that. Sorry. It's confusing. This is the accelerate, and then brake. And this is the brake. Let me just recolor it for you to make it a little more clear. Sorry about that. So let's say I'm pink over here, blue over here. OK. I want to be pink. I want to decelerate just like this if I'm above it because I want to take these curves that are almost there. If I've got extra time, I'm going to accelerate to the point where I decelerate again, so down here should be blue. And then this is, again, the case where I decelerate as much as I can until I take the pink line.
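Here's a short MATLAB sketch of that picture, assuming the unit-limit double integrator q_ddot = u with the absolute value of u at most 1. It plots the two braking parabolas and simulates the bang-bang policy from a few made-up initial conditions; numerically, the switching function s = q + 1/2 qdot |qdot| is zero exactly on those two curves.

```matlab
% Minimum-time bang-bang policy for the double integrator, q_ddot = u, |u| <= 1.
% Switching function s = q + 0.5*qdot*|qdot|:
%   s > 0 -> u = -1,  s < 0 -> u = +1,  s = 0 is the coast-to-origin curve.
policy = @(x) -sign(x(1) + 0.5*x(2)*abs(x(2)) + 1e-9*x(2));  % tiny term picks the braking input on the curve
f = @(t, x) [x(2); policy(x)];

figure; hold on;
qd = linspace(-2, 2, 100);
plot(-0.5*qd.*abs(qd), qd, 'k--');          % the switching curve (both parabolas)

opts = odeset('MaxStep', 0.01);             % discontinuous RHS: small steps limit chatter
for x0 = [[2; 0], [-1; -1], [0; 2]]         % a few made-up initial conditions
    [t, x] = ode45(f, [0 10], x0, opts);
    plot(x(:,1), x(:,2));
end
xlabel('q'); ylabel('qdot'); title('bang-bang trajectories');
```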
This was the u equals negative 1, and this was the u equals positive 1. OK. Is that at all satisfying? We can now connect this back again to your phase plot pictures. We had our initial lines that looked like this. [INAUDIBLE] allowed to apply a bounded amount of torque. So the best thing I can do, if I'm right here, is I can warp this thing down to the point where I get right there. And if I'm here, I can warp it up to push me here, and then ride it down. The hard part is actually showing that that's optimal. And the reason I'm going to go through it is because it's-- it forms the basis for all the algorithms that are going to be more general. So let me show you that that's optimal. To do that, I need to introduce our first optimality ideas for the course. Are people OK with the-- that picture? AUDIENCE: [INAUDIBLE] below the line. RUSS TEDRAKE: Mm-hmm. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: So tell me where. Bottom right? OK. So this is the place where, if I decelerate, I get to the origin. If I'm here, then I have a little bit more velocity in the same position. So if I hit the brakes, I'm not going to stop in time. I don't quite decelerate fast enough to get here, because there's limited torque, so I just slip past it until my chance to come in the other way again. The only separation is that curve. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Everywhere up here, you want to be-- top left you said? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Here, you're blue. The same way, you want to accelerate as much as you can, because you want to get to the place where you have to hit the brakes. This is me. I'm at some position. I don't have enough velocity that I have to just hit my brakes, so I'm going to gun it until I'm at the velocity where I just have to hit my brakes, and then ride it in. Up here, I'm at the point where I have too much velocity. Even if I hit my brakes, I'm still going to overshoot a little bit, which means I'm going to have to-- and so you could think of it as now-- the word brakes maybe flips when you cross that curve. Maybe that's the right thing. But the action I take is only changing based on that switching surface-- which, as you know, will be a nightmare for a lot of our reinforcement learning algorithms. This is a hard cusp. So if you have a control system that has a hard non-linearity like this, which is-- I'm doing one thing here, and I'm immediately, at some discrete place, doing a different thing-- that's a very non-linear event. And it's hard to get analytically when you're doing something more complicated than a double integrator, and it can be hard to get computationally. But we'll talk about that when the time comes. OK, so how the heck do I make an optimality argument about this? I want to introduce Pontryagin's minimum principle. OK. This is going to be a load of equations real quick, and we're going to tease them out. In general, optimal control problems are going to go-- are going to be formulated by designing a cost function. That cost function is some scalar quantity that I want to minimize. I'm going to use the symbols g of xu as an instantaneous cost function. I'll use h of x to mean final cost. I'm going to show you what this means in a second. And I'm going to use J of x to be a cost-to-go. It's very important-- all of these are scalars-- not vectors at all, just a scalar quantity. So a typical optimal control problem will be formulated as something like h of x capital T plus integral from 0 to T g of x u dt, subject to x dot of t is f of xu.
Let's say those are the dynamics, and x at time 0 is some-- let me call it x0 here-- x0. This is a general cost function-- cost-to-go function form for optimal control. We're going to use it a lot. There's just a couple of things to note about it. So just to get some intuition, so g of xu-- that's things I'm penalizing throughout the trajectory. So maybe I want to do small actions in general, in which case, I could put some term in here, which penalizes me for having a u. I could put a u squared in here or something like that. Or maybe I want to just worry about being a long way from the origin. Maybe I'll put an x squared in here or something like this. h is a final cost. It's some function that only penalizes the final state of the system. So maybe I don't care what I'm doing for the first capital T seconds, but at time capital T, I want to penalize it for being away from 0-- x squared here or something like that. There's lots of different forms. The only thing that's really important to note about this, the only real restriction in the forms that you can play with, is that we do tend to assume this form, which is additive. It's integrable-- integrates some scalar cost g. So I don't look at multiplicative contributions of-- from x at time 1 and x at time 4 or something like that. I'm only looking at additive cost functions. Assuming that form-- that additive cost form will make all the derivations work, roughly. OK, so for the minimum time problem, what is that form? You could formulate it a couple of different ways. In this case, I could actually have g of xu equals 1, and have capital T defined as the time, and have h of x equal 0. That's a perfectly reasonable optimal control formulation. So it certainly fits in this general optimal control form. OK, so now we need to know how to-- we've got this guess. I'm going to leave that hard-earned picture up there. I like this one too, but-- let me just say what Pontryagin's minimum principle is first, and then we'll make sure it makes sense. So for this general form, J of x is h of x of T plus integral from 0 to T g x, u dt, subject to-- and I'm going to try to be very careful about writing these all every time. Let's assume, to begin with, capital T is fixed, just a parameter somebody chose. Let's say u is bounded to some set capital U. In our problem right now, it was negative 1 to 1, right? The minimum principle goes like this. We're going to define this new auxiliary function, the Hamiltonian, capital H. If I have found some optimal control solution-- I'll think of it in terms of-- the solution right now in terms of a trajectory, which is some sequence x star of t, u star of t. Then it must satisfy the following conditions. First of all, we know x star dot must equal f of x star, u star. That was already one of our conditions. And it has to satisfy the-- OK. There's a significantly less trivial one, which is that p dot of t has got to equal negative partial H, partial x evaluated at x star, u star, p-- which, if H is what I had up there, works out to be partial g, partial x, plus partial f, partial x transpose p. And this auxiliary variable, at the final time, has to be the gradient of the final cost-- partial h, partial x-- evaluated at x star of capital T. One last condition-- this is 1, 2, 3. u star of t had better be the argmin over u of H of x star, u, p. OK. Sorry. Cut that out. We're going to make sense of it now. OK, before we derive it-- and I'll just do a sketch of the derivation-- but before we derive it, let's just think about the implications.
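Collected in one place-- a sketch in the notation above, writing the Hamiltonian as H(x, u, p) = g(x, u) + p-transpose f(x, u)-- the necessary conditions are:

```latex
\begin{aligned}
&\text{(1) dynamics:}    && \dot{x}^*(t) = f\bigl(x^*(t),\, u^*(t)\bigr), \qquad x^*(0) = x_0,\\
&\text{(2) adjoint:}     && \dot{p}(t) = -\frac{\partial H}{\partial x}
                         = -\frac{\partial g}{\partial x} - \frac{\partial f}{\partial x}^{\!T} p,\\
&\text{(3) final condition:} && p(T) = \frac{\partial h}{\partial x}\bigl(x^*(T)\bigr),\\
&\text{(4) minimization:}&& u^*(t) = \arg\min_{u \in U}\; H\bigl(x^*(t),\, u,\, p(t)\bigr).
\end{aligned}
```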
First of all, this says the optimal control trajectory must satisfy, which means it's a necessary condition for optimality. If I found some optimal trajectory x star, u star, some trajectory x, u, I can verify that-- a necessary condition is that all these things are hold, but that's actually not a sufficient condition in general. For linear systems that are convex-- linear dynamics that are convex and the cost function, it turns out it's OK, but in general, it's not always sufficient. And it says that, if I take my x and I integrate it forward in time, solving x by integrating my dynamics forward, and then I take this other function-- this new set of variables p, which happens to have the same size as x-- we'll see that-- and integrate it effectively backwards in time, because I have final condition on p-- if I do both of those things, and I can write down u as being the argument of what's left-- h, x-- then I've satisfied a necessary condition for optimality. Let's try to make sense of that. How many people have done optimization before at all? How many people have seen Lagrange multipliers before? OK, good-- so let me say a few things but not dwell. And there's a lot of information in the notes-- as fast as I can type. All right, in general, what I'm trying to do is I'm trying to minimize some function. In this case, I'm trying to minimize J. I'm trying to minimize this J of x by finding the u [INAUDIBLE] which minimizes then. But let's make it a little simpler just to make sure we get the basic idea. Let me just say J is some function of some parameter alpha. I'm trying to minimize J-- I can even do it even simpler. Let's just say minimize over x J of x. So if I have some function of x-- J of x looks like this-- I want to find the minimum. The first condition, the necessary condition, is that, at the minimum, the derivative of that thing better be 0. So I can check by just checking if partial J, partial x equals 0, that I've got a necessary condition for a minimum. That's actually a lot of it. The second part is the Lagrange multiplier part. Let's say x is a vector now-- a two-dimensional vector. Let's say I want to do the optimization min J of x, subject to the constraint that x1 equals x2-- or let's do something slightly more interesting. x1 plus x2 is 3. Turns out, thanks to the method of Lagrange-- one of his many methods-- solving this problem is no more difficult than solving this problem-- finding necessary conditions for this problem. By just making an augmented function, you can now minimize x and lambda of J of x plus lambda times this constraint-- which, in this case, is x1 plus x2 minus 3-- has to equal 0. It turns out, if partial J, partial lambda equals 0, then that means the constraint is enforced. Partial J, partial lambda in this is x1 plus x2 minus 3. If that equals 0, which is the condition I'm looking for anyways for the minimum, then I've now-- not only have I satisfied my constraint, but the remaining minima-- the minimization of this is this constrained solution to that optimization. The Lagrange multiplier method is very, very useful. If you don't know it, look it up. It's very good. Yeah? AUDIENCE: So in the partial J, partial lambda, that J [INAUDIBLE] partial is this new J-- RUSS TEDRAKE: Oh, sorry. Thank you. Thank you. Yep. Good catch, good catch-- thank you. Partial of-- I don't know-- that whole thing-- partial lambda-- thank you-- good catch. And in the method of Lagrange multipliers, lambda has an interpretation of a constraint force. 
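As a thirty-second worked example of that method-- with a made-up J, just to see the machinery-- take J(x) = x1 squared plus x2 squared, subject to x1 + x2 = 3:

```latex
L(x,\lambda) = x_1^2 + x_2^2 + \lambda\,(x_1 + x_2 - 3),\qquad
\frac{\partial L}{\partial x_1} = 2x_1 + \lambda = 0,\quad
\frac{\partial L}{\partial x_2} = 2x_2 + \lambda = 0,\quad
\frac{\partial L}{\partial \lambda} = x_1 + x_2 - 3 = 0.
```

Solving gives x1 = x2 = 3/2 with lambda = -3; the last equation-- the partial with respect to lambda equaling 0-- is exactly the condition that enforces the constraint.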
What you're about to see is that all I'm saying in Pontryagin's minimum principle-- which is an absolute staple in optimal control-- is all I'm saying is that J of x-- which is my cost function-- my cost-to-go function-- is at a stationary point with a Lagrange multiplier which enforces this dynamic. And that Lagrange multiplier happens to be p. OK? So let's just see how that plays out. OK, so this is a sketch of the derivation of Pontryagin's minimum principle, which, I think-- I'm just going to do enough so you see where those things are and have some intuition about them-- a sketch of it based on the calculus of variations. So there's many other ways to do it-- calculus of variations, which is a scary name for a very simple thing. This is the problem we're solving-- J of x0 is h of x plus integral over g, subject to those constraints. So how do I write that in terms of a Lagrange multiplier? I'm going to do a second function, which I won't make the mistake of calling J again here. Let's call it S. Some function S is going to be h of x of T plus the integral from 0 to T of g of x of t, u of t plus some Lagrange multipliers-- p in this case-- times my constraint, which is x dot minus f of xu. I was trying to use T's everywhere. [INAUDIBLE] OK, so now I can explicitly try to find the place where S-- which is my Lagrange multiplier version of my problem, which has the explicit cost that I'm trying to minimize-- subject to the constraints that x better be-- satisfy my dynamics-- exactly the same as that two-second Lagrange multiplier introduction. Now, getting it right is a little bit funny. So S is now-- you could think of S as a functional [INAUDIBLE] a function of functions. If I take a variation, this is just-- it's going to be exactly like your basic calculus, but the calculus of variations uses these symbols. The variation is just going to be partial h, partial x times the variation in x of T plus the integral from 0 to T dt. Notice quickly that this thing inside here is just H. That's my Hamiltonian. So that thing inside there I can just-- OK, this says this is a variational analysis of S that says, if my function changes by some small amount in x tilde, this is the result of-- in changing S-- in S. Similarly, if my thing changes by a little bit in x of t, or in u of t, or p of t, or in all of them simultaneously, this tells me what the variation's going to be in S. The stationary conditions then-- if I'm at an optimum, what I care about is that-- if I change u a little bit, if I change x a little bit, if I change p a little bit, S better not change. That's my condition-- my necessary condition for optimality. If partial H, partial p is 0, then I know that changing p isn't going to change the solution, so I can look for stationary points with respect to p. Partial H, partial p better be 0. What's partial H, partial p? Well, it turns out it's just x dot minus f of xu, which is my forward dynamics. So if I've integrated my system forward in time, then this thing's going to be true, and [INAUDIBLE] steady state with respect to changes in p. OK, let's look at the changes with respect to x. So to get the contributions from x correct, we first need to worry about this x dot. We don't want to have that x dot floating around in there, so let's integrate by parts to get that out of there. We're going to look at this partial H, partial x-- the variations-- but first, integrate by parts. The integral of my p of t x dot of t dt from 0 to capital T is just going to be p of T, x of T minus p0, [? x0 ?]
minus the integral from 0 to capital T of p dot of t transpose x of t-- I forgot my transpose-- dt-- basic integration by parts. OK, if I now take my variation in partial x, having used that, then what I get is-- which gives me-- which, in this case, was partial g, partial x transpose plus partial f, partial x transpose p of t. My goal is to show you enough of the derivation that you understand what these terms are, and not so much to get completely bogged down in it. If you want a good treatment, a careful treatment, you should see Bertsekas' optimal control book. When I say careful treatment there, it's going to be 5 pages or more at least. OK, so Pontryagin's minimum principle says that, if my constraint on x dot is satisfied, and if I can just integrate this p dot backwards in time from some final conditions-- which are the same basic variation argument that drives this-- then I've found Lagrange multipliers which satisfy that constraint. And the final variation says that u had better be the minimizer of H. That puts me at a local minimum in my constrained optimization-- big pill to swallow, but this is the way we're going to show that the brick solution is optimal. People are much more enthusiastic when there's bricks. It's OK. I understand. OK, so let's turn the crank and use this tool now to see what the heck-- see if we can verify our original bang-bang policy is optimal. So for the bang-- Pontryagin bang-bang double integrator-- what's the [INAUDIBLE] look like for this thing? g of xu we said was just 1-- and p transpose times x dot minus f of xu. I'll just write it out in element form. It's so simple. It's p1 times q dot plus p2 times u. OK? If we had derived our bang-bang controller just like this, then we could actually immediately say, what's the optimal control solution? If I want to take u star as the argmin, u in negative 1 to 1, of H of x, u, p-- and what's it going to be, just looking at what I've got on the board here? This is a good time to make sure you get it. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah-- good. So these terms are-- have no impact. If p2 is positive, and I want to minimize this thing, then u better be negative-- as negative as possible. Negative as possible means negative 1. And if p2 is negative, then you should be as positive as possible to minimize that thing. So it turns out our same policy that we worked hard for in the Pontryagin, in terms of the Lagrange multipliers, works out to just be p2-- sign of p2 of t-- negative sign, sorry. So the sign function is just 1 if its argument is greater than 0, negative 1 if it's less than 0. My equations for p-- which, if I didn't use the word adjoint equations yet, I should have-- these equations for p are called the adjoint equations. My equations for p are pretty painless. So p1 dot is negative partial H, partial x1. x1 is q, so-- doesn't appear at all. That's 0. So that Lagrange multiplier isn't going to change at all. That's pretty painless. And p2 dot is negative partial H, partial x2, and that's negative p1 of t. OK, so it turns out that p1-- my Lagrange multiplier-- is just going to be some constant. And p2 of t is just going to be the integral of that constant-- c2 plus c1 times t. Try and debate how much to squeeze in the next few minutes-- you know what, let's-- let me do it tomorrow for real-- or on Thursday for real, because I don't-- because it's going to take 10 minutes to finish, but it's worth doing it right. So for homework, to yourself, see if you can work it through-- there's a small numerical sketch below to check against.
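One quick observation you can check with a few lines of MATLAB (made-up constants, just for illustration): p2 is affine in t, so u star = negative sign of p2 can switch sign at most once-- exactly the single-switch, accelerate-then-brake structure from the picture.

```matlab
% Adjoint solution for the minimum-time double integrator:
% p1(t) = c1 (constant), p2(t) = c2 + c1*t (affine).
% u*(t) = -sign(p2(t)) therefore changes sign at most once.
c1 = -1; c2 = 0.8;                     % made-up constants for illustration
t = linspace(0, 3, 300);
p2 = c2 + c1*t;
u = -sign(p2);
subplot(2,1,1); plot(t, p2); ylabel('p_2(t)');
subplot(2,1,2); stairs(t, u); ylim([-1.5 1.5]); ylabel('u^*(t)'); xlabel('t');
```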
Take the equations that we had and show that these-- those four conditions are satisfied, and I'll spend the first 10 minutes of class doing it properly on Thursday. I don't want to rush through it and have it mean nothing. Sorry. I wrote 3 here. There's a condition-- a final condition on p2-- so three conditions by this number. Awesome. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_7_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu RUSS TEDRAKE: OK. Welcome back. So last time we talked about the cart-pole and the acrobat systems. Two of the model systems that are sort of the fundamentals of a lot of the underactuated control work in robotics. We really just talked about balancing at the top, showing that some of the linear optimal control things we've already done were consistent with balancing those underactuated systems at the top. And we said if we can, maybe today, design a controller to get as close to the top, then we could turn on something like a feedback controller based on the linear quadratic regulators to catch us. So we were basically able to do linear control at the top. And today we're going to do the swing up, which is in some ways easier, and in some ways harder. It turns out very simple controllers will get the job done. But showing that they get the job done is quite another story. And really my goal for today is to give you just a little glimpse into what the world of nonlinear control looks like for the other guys, for the people who really do nonlinear control. All right. We're going to mostly hurtle over this stuff and get to the computational versions, but if you were going to be sort of a nonlinear robotics control guy, you'll see a little bit of that today and just get a sense. I hope. And I'll finish the story with PFL, the partial feedback linearization, showing you the most general form and what it's good for in the swing up phase. OK. So non-linear control comes in a lot of varieties. I'm going to make this point a couple of times, but, roughly, one way to say what nonlinear control is that end up looking at your nonlinear equations of motion for a long time, and you try to find some tricks. So you can write down some non-linear controller, and then another trick just to prove that it works. OK. So one of the tricks that is pervasive in under actuated control, and other control, is to look at the energetics of the system. The energetics of-- the total energy of the system is obviously a very important quantity. You can imagine with our acrobat, or our cart-pole, we have one motor to work with, multiple state variables, right. But the energy is just a single quantity. So you can imagine with one motor possibly regulating the energy of the system, even if we can't regulate the entire trajectory of the system. OK. So that's one of the reasons it's important. So let's start by just thinking about if we can regulate the energy of the system, what good is that going to do us. OK. And again let's start simple. Let's do the derivation on the simple pendulum, then we'll do the cart-pole and acrobat immediately after that. So let's remember the phase portrait for the simple pendulum again. Let's say with no torque, no damping. We had a phase portrait-- I'm sure you remember-- which looked like this. And the way I had the coordinate system, we had a fixed point, unstable fixed point, fixed point. OK. If we started the system a little bit away from the fixed point, then we got these closed orbits, right. For the zero torque, zero damping case, we got these nice orbits. 
They were pretty circular, close to the origin. They got a little elongated as we went out. Look like an eyeball. And then there was this very special orbit. I didn't dwell on it before, but we're going to dwell on it now for a second. A special orbit which ends up going right to that unstable fixed point. OK. What defines these orbits? Energy, someone said, right. These are lines of constant energy in the simple pendulum, right. And there is a particular line of constant-- orbit of constant energy here, which, if you have that energy, this system we know is going to go around this way. There's no other choice. If we have that energy, right, then it's going to find its way up to that fixed point. There's nothing else it can do. OK. So these orbits are called homoclinic orbits. What's that? So a homoclinic orbit is something that goes from an unstable fixed point to another unstable fixed point. OK. Actually, homoclinic means it goes to the same unstable fixed point. Heteroclinic means it goes to a different one, I guess. And in the simple pendulum with zero damping and zero actions, it's sort of easy to see that if we can design a trajectory which regulates the energy of the system, right, it's going to move. Let's say we set this as our desired energy for the system. If we can regulate the system's energy up to this by applying a feedback law-- that is, if we can get on that constant energy trajectory and stay there-- then we're going to find ourselves getting to the unstable fixed point. OK. So that's an important observation. Now, regulating the energy isn't actually enough to stay there. Well, is it? I should get to the-- if I really regulate my energy perfectly, I should get to the unstable fixed point. But if I'm just doing energy regulation, then if someone were to knock me here, I'd be OK. I just come right back. If I knock me here, I'd actually go all the way around to this one. So it doesn't actually stabilize. Regulating the energy doesn't actually stabilize this point. But it could get me there. OK. So the first thing we're going to do today is show that if I regulate the energy of my system, on the simple pendulum, to put myself on the homoclinic orbit, then I can get myself to the fixed point. I turn on an LQR controller when I get there, and I've got a swing-up and balance controller. OK. All right. How do we regulate the energy of the simple pendulum? Let's just remember for the simple pendulum our equations of motion were m l squared theta double dot plus m g l sin theta, according to the system I used, which was 0 down, equals my control input. And the energy is just 1/2 m l squared theta dot squared minus m g l cos theta, the total energy of the system. OK. So if I find myself in some theta, theta dot configuration, what should I do to increase the energy of the system? STUDENT: Push in the direction of theta dot RUSS TEDRAKE: Good. Yeah. Right. Whatever I want to do, whatever velocity I'm going at, if I push in the direction of theta dot, that's going to add kinetic energy to my system. OK. We can see that actually very explicitly, so what is the rate of change of e here? It's m l squared theta dot theta double dot-- the half goes away against the 2-- plus m g l theta dot sin theta. I put my sin there again. And theta double dot is u minus m g l sin theta, over m l squared. Can I do that? All right. So m l squared theta double dot is just u minus m g l sin theta. Substitute that in, and the m g l theta dot sin theta terms cancel. All right. So that works out nice. It's just u theta dot.
So it's exactly what we said. If you want to increase the energy of the system, right, then you better make u the same sign as theta dot. You can make it exactly theta dot. You can make it a constant. You can make it-- just as long as it's the same sign, you're going to increase the energy of the system. And similarly you can decrease the energy of the system by pushing against theta dot. And that's obvious. Really, that's just either damping the system with-- you're adding positive damping to the system or negative damping to the system, right. OK. But seeing it out of these energy derivatives is the right way to see it if you want to do it for a more complicated system. OK. So good. So I can now imagine easily with my control torque regulating the energy of the system, or increasing or decreasing it. So what do we want to do with it? Presumably we have some desired energy, e, which is the energy on that homoclinic orbit. Right. And what's that? What's the energy on the homoclinic orbit? M g l. Yeah. At the top it'll be [? cos of e ?] The only problem with this parameterization is I have negative energy. I could have done it a little bit more carefully by putting zero potential at the bottom, but right. This one-- still our desired energy is positive and it's at the top, m g l. That's the energy you'd have if theta equals pi and theta dot equals 0. OK. So what we care about is regulating the difference between the actual energy of our system and the desired energy of the system. The desired energy is constant. So this is just e dot. e tilde dot is just e dot, which is just u theta dot. So if I know how to change e, I certainly know how to change e tilde. OK. So the cool thing here, let's choose u equals negative k theta dot e tilde, with k always greater than 0. The result then is e tilde dot is going to be negative k theta dot squared e tilde-- and that k is positive. Just a little manipulation. And that gives me that, as long as this thing is negative-- it's a linear equation in e-- then e tilde is going to go to 0. OK. And what is that? That's obvious. This is exactly the case of, I'm going to use u to add damping to my system. I'm going to add some negative damping. If e is greater than e desired, then this thing is positive, and the whole thing I'm adding, I'm going to be adding damping to the system to slow down. And if e is less than e desired, then I'm going to put in sort of negative damping. I'm going to invert the sign of this and add energy to the system until e is up to e desired. And this is going to give me a nice first order response in e. So that should really do the trick. And it doesn't take much looking at this, the proof would be a little bit harder, but it doesn't take much looking at this. If I did have something like bounded torque, bounded control input-- let's say I saturated u at some control limits-- as long as my sign is correct, I could still add or remove damping, according to the sign of e tilde, and have this result that e tilde is going to go to 0. Yeah. OK. Even with bounded control inputs this is OK. So what does the resulting phase portrait look like? OK so I have here exactly what we just said. e tilde is energy at the current state minus energy at the desired state. Let's just do negative k theta dot times e tilde as my control, with a pretty low k. And I remember what I called it here. That didn't stabilize it at the top. And so any small integration errors, or whatever is going to keep me from the top, is going to make me go around.
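Here's a minimal MATLAB sketch of that controller-- parameters m = l = g = 1 and a made-up gain k-- which reproduces exactly this kind of phase portrait:

```matlab
% Energy-shaping swing-up for the simple pendulum (m = l = g = 1, theta = 0 down).
% E = 0.5*thetadot^2 - cos(theta); desired energy E_d = 1 (the homoclinic orbit).
m = 1; l = 1; g = 1; k = 0.1;
E  = @(x) 0.5*m*l^2*x(2)^2 - m*g*l*cos(x(1));
Ed = m*g*l;
u  = @(x) -k*x(2)*(E(x) - Ed);              % add or remove damping based on e-tilde
f  = @(t, x) [x(2); (u(x) - m*g*l*sin(x(1)))/(m*l^2)];

[t, x] = ode45(f, [0 40], [0.1; 0]);        % start just off the bottom
plot(x(:,1), x(:,2)); xlabel('theta'); ylabel('thetadot');
% The trajectory pumps up to the E = m*g*l orbit; with no balancing controller
% it just misses the unstable fixed point and keeps going around.
```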
But the phase portrait does exactly what we want it to do. This just started from two different initial conditions. One, that's the same one I've been using for all the phase plots. One just above the origin there. It adds energy up, goes to the homoclinic orbit, just misses the fixed point, goes around again. And then the other one started over here with too much energy, and actually slowed down to the homoclinic orbit. OK. Simple. So for the simple pendulum, we could design a nonlinear controller with pretty simple reasoning about the energy. And we know it's going to get as close to the up right. Does it also work for more complicated systems? What would it do for the cart-pole or the acrobat. It turns out it's a pretty general idea. OK. In fact, let's do the cart-pole, because the cart-pole actually has the pendulum dynamics embedded in it. It just happens to be coupled with the cart. OK. So if we, for instance, use our partial feedback linearization trick to decouple those dynamics, then we can really just keep thinking about pendula. OK. So here we go swing-up control-- non-linear swing-up control of the cart-pole system. So the dynamics were something I was willing to write on the board once and then set all parameters even gravity to one, and got some complaints but kept going. It turns out it would have been actually trivial for me to carry g around. I should have-- next time I'll do that next year. So I don't even have to write that today because we know if we apply a direct-- sorry, the collocated PFL, then the result of my little tricks with the equations is I can command a force. The collocated means I'm going to use the action to linearize the state, with that action associated. So here I'm going to linearize the dynamics of the cart. And the resulting system was x double dot equals my new control input. I could set that to be whatever I'd like. And what did it leave us with? it left us with theta double dot doing something a little funny but not too bad. ubar is sort of my desired acceleration here. OK so I've got some new, almost, pendulum dynamics here. And I've got obviously got this pendulum. And I've got this cart that I can do whatever the heck I want with. Because I've mastered the cart. So here's the idea. Let's make sure that the pendulum gets on the homoclinic orbit for the pendulum. That'll make sure it gets itself up to the top. And if we've done that, then we can just do a little bit of extra work to make sure that the car doesn't go crazy. OK. OK so you guys are actually going to-- on your problem set if you haven't already-- you're going to do the swing-up controller for the cart-pole. So I'm going to keep with my all parameters happened to be one lingo here, which forces you to handle the terms when you go to do it for the problems set. It's not bad, but it makes you think about it I think. OK. In my all parameters equal 1 land, the energy of the pendulum still it's just 1/2 m l squared. So it's just 1/2 theta dot squared minus m g l cosine theta. Just minus cosine theta now. And e desired is going to be 1. e tilde dot, that going to be theta dot theta double dot minus theta dot sine theta. theta double dot, though, is going to have a little bit different form than the simple pendulum. Because it had a few terms left over from the coupling with the cart. Right so if we stick those in we get theta dot negative u c minus s minus theta dot s STUDENT: Is it plus theta dot s or minus theta dot s? RUSS TEDRAKE: Good catch. Thank you. Thank you. 
I would have figured that out in a second when things didn't cancel. Yeah so now this cancels, leaving negative ubar cosine theta theta dot. OK. So we've got this slightly funky dynamics for the pendulum, now, because it's been augmented by the cart. But that's not too bad. So what should I do? What controller should I run to regulate this thing to the pendulum's homoclinic orbit? OK. So what do you want? You want ubar to be-- I'll give it some constant k-- if I just make it, say, k cosine theta theta dot, that'll make everything positive. There's a lot of ways to do it, but that's a pretty simple one. Yeah and I'll keep that e tilde there so that I regulate to the proper energy, not just to zero energy. And that should do it. So now e tilde dot should be negative k theta dot squared cosine squared theta e tilde, which means e tilde is going to go to 0. Which means my pendulum is going to get to its homoclinic orbit, and get to its unstable fixed point. OK. What about the cart? What's the cart going to be doing during all this? Yeah it's going to be going back and forth. Well it depends, I mean, it could actually wind up or something, depending on the trajectory. So for this system we actually have a simple-- if you know Lyapunov functions-- a simple Lyapunov function you construct from these energies. They would show that the pendulum gets to the top. If you want the pendulum and the cart to get to the origin, then we just get a little sloppy. And we say let's do-- not u equals k theta dot e tilde; we're missing a c-- k theta dot c e tilde. That's the first one. But let's add an extra term in: minus k2 times the position of the cart, minus k3 times the derivative of the cart. OK. All right. So sort of a sad fact, that actually doing this is what makes it work. It's trivial, sort of, that it makes it work. Experimentally, you'll see it works just fine. It actually breaks all the proofs. There's a proof for a set of parameters of k. So someone found, like, if k2 equals 2 and k3 equals 3, then I can design a Lyapunov function. It's not very satisfying. OK. But let's see how this works in practice. So you remember your cart-pole system now. So these controllers were described very nicely by Mark Spong in a paper in '96, so that's my spong96 controller. Basically, you can see I just check some distance between the current state and some sort of hand-designed distance metric there. I just see how close it is to the top. If it's close enough, I do LQR. If it's farther, I'm going to do this energy shaping control. The stuff that I commented out, I'm going to tell you about it later. But it turns out there's actually nice sort of analytical ways to get the basin of attraction using some semidefinite programming. We'll mention that later in the class. So you don't actually have to guess that metric, but to be fair, I guess that metric as something that is easy to guess. And that's good enough. OK. The LQR we did before, and the swing-up is just that simple. It's the pendulum energy. Not the total energy. It's just the energy of the pendulum on the system with a PD controller on the cart and the theta dot cosine theta e tilde controller on the pendulum. I put a saturation in there so it doesn't do crazy torques. Let's see what we get. Nonlinear control is easy. OK. And the phase plot's exactly what you'd expect. It looks like my gains are a little high maybe. But you know it's regulated itself pretty quickly out to that high energy orbit.
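For reference, here's a sketch of the swing-up part of that controller in MATLAB-- my reconstruction with all parameters 1 and made-up gains, not the actual course code-- simulated directly in the post-collocated-PFL coordinates (x double dot = ubar, theta double dot = negative ubar cos theta minus sin theta). The LQR catch at the top is omitted here.

```matlab
% Cart-pole swing-up: energy shaping on the pendulum plus PD on the cart,
% after collocated PFL (all parameters 1, theta = 0 hanging down).
% z = [x; theta; xdot; thetadot]; etilde = E - Ed, E = 0.5*thetadot^2 - cos(theta).
k1 = 1; k2 = 0.3; k3 = 0.5; Ed = 1;
sat  = @(v) max(min(v, 5), -5);             % saturate so it doesn't do crazy forces
ubar = @(z) sat(k1*z(4)*cos(z(2))*(0.5*z(4)^2 - cos(z(2)) - Ed) - k2*z(1) - k3*z(3));
f    = @(t, z) [z(3); z(4); ubar(z); -ubar(z)*cos(z(2)) - sin(z(2))];

[t, z] = ode45(f, [0 30], [0; 0.1; 0; 0]);
plot(z(:,2), z(:,4)); xlabel('theta'); ylabel('thetadot');
% The pendulum pumps out to the desired-energy orbit while the PD terms keep
% the cart near the origin; an LQR controller would grab it near the top.
```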
And then it just missed its chance in the first one, but by the second time the energy was close enough, turn on the balance controller, grabbed it. And you saw that it actually grabbed it off to the side and then slowly pulled itself back towards the center. Different initial conditions. Yeah, so it works. It works well. It's easy right. OK. The reason I wanted to show you this, well first of all, it's cool. It's relevant. But I sort of want to give you a little glimpse into the world of nonlinear control. What tends to happen, if you're trying to come up with clever non-linear control solutions for systems, is you tend to have to examine carefully the equations of motion of your system. And sometimes you end up with these sort of bizarre looking feedback laws that work. Because they tend to cancel out the terms that you want to cancel out. So you can prove that something is always monotonically decreasing. Something like the energy is always monotonically decreasing or something like that. That's actually a pretty typical. That's a pretty representative case of a nonlinear control design. And there's lots of cool stuff working in that realm. So typically to prove it more carefully we'd look-- we'd design a Lyapunov function. We're not going to use it right now in this class. But if you haven't seen Lyapunov functions it's a good thing to know about. Jean-Jacques Slotine teaches his nonlinear control course in mechanical engineering. He uses Lyapunov functions. But he also uses other metrics to prove stability. His favorite, I think, right now is contraction analysis. OK. It's worth listing those two because there's actually not a lot-- that's not really a very long list-- there's not actually a lot more different solution techniques that I can put up here. There's actually only sort of a handful of ways people have to prove stability in non-linear systems, continuous systems. OK. So that's sort of approach one. Approach two, which we're going to do in this course, is let's not design specific controllers for specific systems. Let's turn it into an optimization problem and use our optimal control. OK. Good. So we know how to do the swing-up control of the cart-pole. It's pretty simple based on an energy argument. What would happen if I didn't use the PFL? I used the PFL here to make things analytically cleaner. But essentially what it's doing is the same thing as the simple pendulum. As it said, if the pendulum is moving this way, I want to add energy. Then just push it in that way. If it's derivative's this way, then push it in that way. There's a cosine which modifies things. Just because of the coupling really I think. Maybe the cosine's real. When you change signs you actually-- when you cross the origin you actually have to go the opposite way. Maybe the cosine's real actually. But it's basically just doing the same thing we did-- had our intuition for in the pendulum, which is push the system the way it's already going. So it turns out you can do the same thing without PFL and it probably works. To be fair, I even spent a minute finding different parameters. I could've made a really compelling demo using the same parameters from PFL which worked horribly for the other system. But I actually found some good parameters and if I do that one, it still gets there. It's not going to be quite as pretty. That was actually probably the worst I've seen. Thank you. OK but it'll still work. The PFL, though, made the motion of that card pretty beautiful, if you care about that. 
And it made the math easy enough that we can really derive these things. OK. So PFL really turns out to be an important tool in these systems. Yeah, I just said basically, let's put in this controller. Forget about the-- actually I did the whole controller, I think. I can look. And instead of putting it into ubar, which was my synthetic command through my PFL, let's just make that the force on the cart. Let's use this basic feedback law as the force directly on the cart. And forget about linearizing the cart dynamics. Yeah, because the intuition is right. It's just pushed that way to add energy. And if I add energy. I should get up near the top. And if it does work. And the PFL just makes it cleaner. OK. What about the acrobat? I don't spend a lot of time because it's pretty similar. What would you do for the acrobat? If you want to get the acrobat to balance, just add energy. And get itself towards the top. It's funny. Zach, in our group, he took the class last year. He's building our cool acrobat now. He's the one you saw him hold up the poster. But for his first reaction, which I think he did on the real hardware, was just do a stabilizing controller at the bottom. And then flip the gains. So it just went kind of crazy. And I think that sort of worked, which is-- doesn't say a lot about how hard the problem is. But there's a more elegant way to do it. OK. How would you do it? Given this sort of line of thinking. We've got PFL at our disposal. We want to add energy. What would you do? Yeah? No, but you're close. Well so collocated would get-- it controls the elbow, because the motors at the elbow. Yeah. So if you use collocated and you could still solve-- add energy into the first link so that it swings up like a pendulum. That's the same way we sort of add an extra term to keep the cart. Near zero we add an extra term to keep the second joint close to zero. That's exactly what it is. OK. So I'll just write it quickly because it's that simple. We're going to use collocated PFL to control theta two and energy shaping to drive theta 1 to up right. So it turns out-- how do you add energy in the acrobat? if you want to add energy to the theta 1, whatever direction that one's moving, make the theta 2 move in that direction too. OK. So u bar, the command to your PFL controller should be q 1 dot e tilde, k greater than 0 It works out to be that simple. And we're going to, in general, we'll add another something to make sure that theta 2 doesn't deviate too far from where it went, from straight out. Interestingly, the energy that people use-- that Spong uses in his paper is not the energy sort of the simple pendulum created by link one. It's actually the total energy of the system and that still works. STUDENT: You couldn't use the total energy of the cart-pole to try and invert that pendulum? RUSS TEDRAKE: My guess is it would still work, but the proof is based on-- on not that. So I should say there is no proof, for the acrobat, that it will get to the top, that I know of. Spong's paper in '96, beautiful paper, says, we conjecture that you can show that this thing is semiglobal stable. The other thing I didn't say is that this thing actually doesn't work if you started at 0 0. Because if theta dot zero, you're not going to actually do anything. So it's not globally stable. It's semiglobally, so we need to knock a little bit. Have Zach pull with the string across through. But they conjecture that it's semiglobal, despite the fact that the energy-- that there might be multiple solutions. 
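Since the actual torque computation is the same bookkeeping for any of these systems, here's a generic MATLAB sketch of collocated PFL (saved as collocated_pfl.m)-- not specific to this robot-- using the blocked equations of motion from last lecture, H11 q1ddot + H12 q2ddot + phi1 = 0 and H21 q1ddot + H22 q2ddot + phi2 = tau:

```matlab
function tau = collocated_pfl(H, phi, m, ubar)
% Collocated PFL: torque tau that produces q2_ddot = ubar on the actuated joints.
% H   : full mass matrix, blocked with the m passive joints first
% phi : Coriolis/gravity terms, stacked the same way
H11 = H(1:m, 1:m);     H12 = H(1:m, m+1:end);
H21 = H(m+1:end, 1:m); H22 = H(m+1:end, m+1:end);
phi1 = phi(1:m);       phi2 = phi(m+1:end);
q1dd = -H11 \ (H12*ubar + phi1);      % passive accelerations implied by ubar
tau  = H21*q1dd + H22*ubar + phi2;    % actuated-row equation solved for tau
end
```

For the swing-up just described, you'd evaluate H and phi from your model at every timestep and pass in something like ubar = k1*q1dot*etilde - k2*q2 - k3*q2dot, with hypothetical gains.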
The picture of the homoclinic orbit, I'll admit, is not as clean for the acrobat. But I think the intuition is that if you're doing some work to keep theta 2 near zero, then it's looking-- it's moving a lot like a pendulum, and you're thinking about the homoclinic orbit of that big pendulum. And it gets up there. I think these terms prevent it from sort of being in lots of different configurations to get to the top. But presumably the proof of semiglobal, semiglobality, I guess, would be dependent on k1 and k2 or k2 and k3. Good. People don't seem to be jumping out of your chairs excited about that, but I hope you like it and get it. It's an important class of derivations. All right, so in that trend. So we used PFL, now, to do-- to basically make energy shaping arguments simple. And to actually even simplify the analytics of those arguments. But actually PFL can do a lot of things. Disclaimer, PFL is bad. Don't use it if you care about system dynamics, but it'll do a lot of things for you. So let's just see how much it can do. I want to show you the slightly more general form, which, surprisingly, is not as common in the literature. It's just one general form, I guess, but this is using the notion of a task space. OK. So last time, last time we talked about using partial feedback linearization to control all of your actuated joints perfectly, or to control all of your unactuated joints perfectly. But naturally you'd like to see a form where you could control one unactuated joint, one actuated joint. Maybe you don't actually even care about controlling particular actuators. Maybe what you care about is, let's say, controlling the end effector of your machine. So let's say I'm a 120-degree-of-freedom robot with a few unactuated joints. I'm not actually bolted to the ground. Maybe my shoulder doesn't work today. And I want to control the endpoint of my arm, right. We should be able to do that, I think. And, intuitively, as long as my task space is coupled to my motors, I can have sort of 10 actuators on this robot and try to control the position of a joint over there, as long as they're coupled with this inertial coupling idea. And I have sort of enough motors-- as many motors as I have degrees of freedom of the thing I'm trying to control-- you'd like to think that would work. So for the cart-pole, let's say, I'd like to think I could regulate-- I can't regulate the endpoint of the pole perfectly, because that would be x and y. It would be the two variables and I've only got one motor, but maybe I should be able to regulate the y position of the endpoint. On the acrobat, I should be able to regulate the y position of the end effector. Or, in little dog, the robot I showed you, we do regulate the center of mass of the robot using all the internal joints on the machine. So let's see the slightly more general form. And then put it to use. OK. I'm still going to make the assumption that we have joints with zero torque and joints with torque. So I can use this form. I'm sure you can generalize it further. But excuse me. OK, where I'm using q1 here to represent my passive joints-- and let's say that there are m of them-- and q2 to represent my active joints, my actuated joints. Let's say there are l of them. And let's say q is n-dimensional, with n equal to m plus l. So that's my equations of motion of the system I'm working on, factored into block matrix form based on where the actuators are. So let's try to control some subset of the actuated and unactuated joints. And in general we'll do that by defining some output function.
Some function of just the full vector q-- but not of q dot, for now. And let's say y lives in p dimensional space. Just running out of letters here. OK. Let me tell you the answer first and then we'll make sure it works. So let's define j1 to be partial f partial q1, j2 to be partial f partial q2. And j is the full thing. So j1 is going to be what, p by m. And j2 is going to be p by l. And j is going to be p by n. My claim is if you command the actuated joints to do this. OK. It's a little bit of a big pill to swallow, but I'm using this to represent the right Moore-Penrose pseudoinverse in general. OK. And there's nothing too egregious just yet. We've got h11 is invertible, we said. So that sort of looks OK. We're going to have to ask how invertible j bar is. It's got a j2 in there. j2 is not square. So we're going to have to think about that a little bit. That's why I have to use the pseudoinverse here. Good. So let's see if we believe that. Oh boy. This is subject to the rank of j bar equaling p. And I don't want to write that on the next board. That's got to go there. OK. So let's just think about the dimensionality here. So j2 is p by l. So a number of things I'm trying to control by the number of things I have to control them. j1, p by m, this one's going to be m by m. This is going to be m by l. So this thing is going to work out to be p by l, as we expected. So I'm saying that I'm going to write j pseudoinverse because it's not square. But I only really want to run this controller if the rank of jbar is equal to p. It's not going to be greater than p ever, because the matrix has only got p rows. So the trivial condition is I need to have at least as many actuators as I have things I'm trying to control. If l is at least p, that's a necessary condition. I'm not going to get the if-- if I have fewer actuators than what I'm trying to control, this rank condition will always fail. If I have four actuators and two things I'm trying to control, but three of the actuators are on a different robot, then it's also going to fail. So it's not only the case that the counting game works. But if l is greater than p-- I'm sorry to be throwing around all these things-- if you have more actuators than things you're trying to control, then you should go ahead and use that pseudoinverse because you have choices. At any given time, if you want to produce a certain acceleration in your output space-- whether it's the end effector or whatever-- there's actually a manifold of solutions you can use to try to produce that same acceleration of the output. So for those of you that have used sort of Jacobians and null spaces a lot in the previous robotics course, this is perfectly compatible then with sort of a null space decomposition. If you wanted to say I have a first priority task in this joint space, and then a second priority task that works on the null space of that controller, that stuff all works in this framework. But I won't go into the details there. OK, so j pseudoinverse could have a null space if you have more actuators than things you're trying to control. So let's quickly make sure we think that works. So what do we do? We started with y is f of q. So we want to know how that thing's going to change. That's going to be j q dot. And y double dot is going to be jdot qdot plus j q double dot-- because remember, j is partial f partial q, right-- which I'm going to break apart into j1 q1 double dot plus j2 q2 double dot.
So I've got some contribution from the effects of my passive joint. Some from the active joints. We can use this first equation here to figure out what the accelerations of my passive joints are going to be with respect to the accelerations of my active joints. The same thing we did before. So I can easily write down here that q1 double dot had better be negative h11 inverse times the quantity h12 q2 double dot plus phi1. If you stick that in, then you end up with y double dot is jdot qdot minus j1 h11 inverse times h12 q2 double dot plus phi1-- I would write this down if it's in your notes here-- which happens to be jdot qdot plus jbar q2 double dot minus j1 h11 inverse phi1. Therefore, if I can invert jbar then-- where'd my control go? If you read it off, jbar inverse is going to cancel out that term. I'm going to subtract off the jdot qdot, and add back in the j1 h inverse phi1. And I'm going to be left with y double dot equals v. So that works. That's not too shocking. What's cool is that this task space PFL is sort of all you need to know. If you're willing to remember this one, then the collocated and the non-collocated come out for free. So if I were to choose and do the collocated, then that means my output function is just q2, which means the partial with respect to q1 is 0. The partial with respect to q2 is just i. jdot is 0. jbar works out to be i if you look at it. Long story short, everything drops out. You get exactly the collocated feedback that you wanted before. What's even cooler is the non-collocated does the exact same thing. The non-collocated, if you make y equal q1 and you put it through this task space PFL, not only do you get the same controller out that you would have gotten, but this rank condition on jbar-- for the collocated case, it's trivially true, which you'd expect. And for the non-collocated case it turns exactly into the strong inertial coupling condition of the other derivation. OK. So now we can do something interesting. I'm sorry to write so many symbols, but-- because you're all falling asleep. Let's do something interesting with it. OK. So let's do the example I said. Let's take the cart-pole and make it go through some y trajectory, a sine wave or something like that. What would it take to make the cart-pole go like this for a long time? Speeding up and slowing down, but always actually moving in one direction. So my animation is not going to be too pretty. I tried to come up with a really clever task space, five minutes before class, that would stay on the screen and regulate the output. I'm going to show you that it leaves the screen quickly, but at least the plot shows that it tracked the output. OK. So if I make y equal-- this y, poor choice, is just the vertical position of the end point through the kinematics of the cart-pole. Then, you could see it here. So y is just negative l cosine of x2. I think this is my y desired-- this y underscore d. d is my shorthand for desired. It's just some sine wave that's going to go between 1/4 of the length and 1/2 the length, or sorry 3/4 of the length. It would be unfortunate if I asked it to go somewhere it couldn't go, because things would blow up. My rank condition would stop being satisfied, but it's nice that it would at least alert me to that fact. OK. I put those exact equations I just showed you in a few lines of MATLAB. And I called it task space. There it goes-- but after 10 seconds, it's quite far away from anything we're going to see. I can show you, though, that the regulator operated beautifully.
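A sketch of what those few lines might look like-- my reconstruction, not the actual course code-- again simulating in the post-collocated-PFL coordinates with all parameters 1, output y = -cos(theta), and a made-up sine for y desired:

```matlab
% Task-space PFL on the cart-pole (post-PFL: xddot = ubar, thetaddot = -ubar*c - s).
% Output y = -cos(theta): ydot = sin(theta)*thetadot,
% yddot = cos(theta)*thetadot^2 - sin(theta)^2 - sin(theta)*cos(theta)*ubar.
% Solve yddot = v for ubar; the rank condition fails where sin(theta)*cos(theta) = 0.
kp = 10; kd = 5;
yd      = @(t) -0.5 + 0.25*sin(t);          % desired output, between -3/4 and -1/4
yd_dot  = @(t)  0.25*cos(t);
yd_ddot = @(t) -0.25*sin(t);
v  = @(t, z) yd_ddot(t) + kd*(yd_dot(t) - sin(z(2))*z(4)) + kp*(yd(t) + cos(z(2)));
ub = @(t, z) (cos(z(2))*z(4)^2 - sin(z(2))^2 - v(t, z)) / (sin(z(2))*cos(z(2)));
f  = @(t, z) [z(3); z(4); ub(t, z); -ub(t, z)*cos(z(2)) - sin(z(2))];

[t, z] = ode45(f, [0 10], [0; pi/3; 0; 0]); % start matched: y = -cos(pi/3) = -0.5
plot(t, -cos(z(:,2)), t, yd(t), '--');      % output tracks the sine...
% ...while z(:,1) shows the cart drifting steadily off to one side.
```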
OK. If you want to see a little bit more of it, I can zoom out. So it's really gone. OK. But again, the output regulated beautifully. OK. So in retrospect the cart-pole might not have been the way to show you the power of partial feedback linearization, except for the fact that-- I mean, if you really did care. Maybe you're a cart-pole bartender or something. You have to really keep some things steady. But maybe-- I guess it's still bad to drive away to infinity. Where are you going? OK. So LittleDog is a good example, where we could nicely regulate the center of mass of the dog and have it tip up and act like a simple pendulum, even though it had lots and lots of joints, and a passive joint in the middle. OK, so task spaces are actually a pretty fundamental concept in robotics. And I haven't seen them used that much in underactuated robotics, which is too bad, but they work just fine if you're going to use partial feedback linearization. OK. Good. Well, I was hoping to have an Acrobot demo today, but Zach tells me we're going to do it on Tuesday instead. And we're going to use it in the future throughout. So the same way I go back to the pendulum, we're going to go back and show how the better algorithms work on the Acrobot and things like that. So you will see hardware again very soon. But not today. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_22_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: Today is sort of the culmination of everything we've been doing in the model free optimal control. OK. So we talked a lot about the policy gradient methods. So under the model free category here. And we've talked a lot about model free policy gradient methods. And then the last week or so, we spent talking about model free methods based on learning value functions. OK. Now both of those have some pros and some cons to them. OK? So the policy gradient methods, what's good about policy gradient methods? STUDENT: They scale-- RUSS TEDRAKE: They scale with-- [INTERPOSING VOICES] RUSS TEDRAKE: OK. And they can scale well to high dimensions. We'll qualify that. Right? It's actually still only a local search, that's why they scale. And the performance of the model free methods degrades with the number of policy parameters. But if you have an infinite dimensional system with one parameter you want to optimize, then you're in pretty good shape with a policy gradient method. Right? OK. What else? What are other pros and cons of policy gradient methods? What's a con? Well, I said a lot of it in the parentheses already. But it's local. What are some other cons about policy gradient methods? STUDENT: [INAUDIBLE]. RUSS TEDRAKE: Yeah. Right. So this performance degradation typically is summarized by people saying they tend to have high variance. Right? Variance in the update, which can lead to mean that you need many trials to converge. I mean, fundamentally, if we're sampling policy space and making some stochastic update, it might be that it requires many, many samples, for instance, to accurately estimate the gradient. And if we're making a move after every sample, then it might take many, many trials for us to find the minimum of that. It's a noisy descent. Yeah? STUDENT: You also have to choose like a [INAUDIBLE].. RUSS TEDRAKE: Good. That wasn't even on my list. But I totally agree. OK. I'll put it right up here. There's one other very big advantage to the policy gradient algorithms. We take advantage of smoothness. They require smoothness to work. That's both a pro and a con, right? But the big one that we haven't said yet, I think, is the convergence is sort of virtually guaranteed. You're doing a direct search. And exactly, you're doing a stochastic gradient descent in exactly the parameters you care about. Convergence is sort of trivial and guaranteed. OK? That turns out to be probably one of the biggest motivating reasons for the community to have put their efforts into policy gradient. Because if you look at the value function methods, in many cases-- now, I told you about one case with function approximation, still linear function approximation, where there are stronger convergence results. And that was the least squares policy iteration. But most of the cases we've had, the convergence results were fairly weak. We told you that temporal difference learning converges if the policy remains fixed. OK? But if you're not careful, if you do temporal difference learning with the policy changing, with a function approximator involved, convergence is not guaranteed. OK? 
In fact, they often, I mean a lot of these methods struggle with convergence. Not just the proofs, which are more involved. But there's a handful of, I guess, sort of in the big switch from value methods to policy gradient methods, there are a number of papers showing sort of trivial examples of-- can I call them TD control methods? So temporal difference learning where you're actually also updating your policy of TD control methods with function approximation, which diverge. Right? There was even one that-- I think it might have been, I forget whose-- it might have been Leemon Baird's example. But where they actually showed that the method will actually oscillate between the best possible representation of the value function and the worst possible representation of the value function. And it's sort of stably oscillated between the two. Right? Which was obviously something that they cooked up. But still, that makes the point. Right? Even the convergence result we did give you for LSPI, least squares policy iteration, still had no guarantee that it wasn't going to-- it could certainly oscillate. They gave a bound on the oscillation. But that bound has to be interpreted. Even the LSPI could still oscillate. And that's one of the stronger convergence results we have. OK. But they're relatively efficient. Right? So we put up with a lot of that. And we keep trying to use them because they're efficient to learn in the sense that you're just learning a scalar value over all your states and actions. That's a relatively compact thing to learn. I told you, I tried to argue last time that it's easier than learning a model even by just dimensionality arguments. And they tend to be efficient because the TD methods in particular reuse your estimates. And they tend to be efficient in data. They reuse old estimates. They use your old estimate of the value function to update your new estimate of the value function. So when they do work, they tend to learn faster. And they can, with the least squares methods, they tend to be efficient in data. And therefore, in time. Number of trials. When these things do work, they're the tool of choice. The problem is-- and there are great examples of them working-- but there's not enough guarantees of them working. And if you want to sort of summarize why these value methods struggle and why they can struggle to converge and they even diverge, you can sort of think of it in a single line, I think. The basic fundamental problem with the value methods is that a very small change in your estimate of the value function, if you make a little change, can cause a dramatic change in your policy. Right? So let's say my value function's tipped this way. Right? And I change my parameters a little bit. Now it's tipped this way. My policy just went from going left to going right, for instance. And now you're trying to update your value function as the policy changed. And just things can start oscillating out of control. Does that make sense? OK. That's a reasonably accurate, I think, lay of the land in the methods we've told you about so far. If you can find a value method that converges nicely, use it. It's going to be faster than a policy gradient method. It's more efficient in reusing data. You're learning a fairly compact structure. Value iteration has always been our most efficient algorithm, when it works. But the policy gradient algorithms are guaranteed to work. And they're fairly simple to implement. And they can just be sort of local search in the policy space.
Directly in the space that you care about, really your policy. So the big idea, which is the culmination of the methods we've talked about in the model free stuff so far, is to try to take the advantages of both by putting them together. Represent both a value function and a policy simultaneously. There's extra representational costs there. But if you're willing to do that and make slower changes to the policy based on guesses that are coming from the value function, then you can overcome a lot of the stability problems of the value methods. You get the strong convergence results of the policy gradient. And you get some of the more, ideally, efficiency. You can reduce your variance of your update. You make more effective updates by using a value function. OK? So the actor is the playful name for the policy. And the critic is your value estimate telling you how well you're going to do. And one of the big ideas there is you'd like it to be a two time scale algorithm. Policy is changing slower than the greedy policy from the value function. OK. So the idea is an actor critic are actually very, very simple. The proofs are ugly. There's only a handful of papers you've got to look at if you want to get into the dirt. But these, I think, are the algorithms of choice today for a model free optimization. OK. So just to give you a couple of the key papers here. So Konda and Tsitsiklis. John's right upstairs. Had an actor critic paper in 2003 that has all the algorithm derivation and proofs. Sutton has a similar one in '99 that's called Policy Gradient. But it's actually the same sort of math as in Konda and Tsitsiklis. And then our friend Jan Peters has got a newer take on it. He calls it Natural Actor Critic, which is a popular one today. It should be easy to find. OK. So I want to give you the basic tools. And then instead of getting into all the math, I'll give you a case study, which was my thesis. Works out. So probably John already said quickly what the big idea was. Right? So John told you about the reinforced type algorithms and the weight perturbation. In the reinforced algorithms, we have some parameter vector. Let's call it alpha. And I'm going to change alpha with a very simple update rule. In the simple case, maybe I'll run my system twice. I'll run it once with the-- I'll get once, I'll sample the output with alpha. And then once I'll do it with alpha plus some noise. Let's say I'll run it from the same initial condition. Compare those two. And then multiply the difference times the noise I added. Right? And that's actually a good estimator, a reasonable estimator of the gradient. And if I multiply by the learning rate, then I've got a gradient descent type update. OK? So this is not useful in its current form. John told you about the better forms of it, too. But the problem with this is that I have to run the system twice from exactly the same initial conditions. You don't want to run two trials to simulate the thing exactly twice for every one update. And it sort of assumes that this is a deterministic update. The more general form here would be to not keep, not run the system twice. But use, for instance, some estimate of what reward I'd expect to get from this initial condition. And compare that to the learning trial. So we just went from policy gradient to actor critic just like that. This is the simplest form of it. But let's think about what just happened. So if I do have an estimate of my value function, I have an estimate of my cost to go from every state. Right? 
Then that helps me make a policy gradient update. Because if I run a single trial, then I can compare the reward I expected to get with the reward I actually got very compactly. OK? So this is the reward I actually got. I run a trial, one trial. Even if it's noisy with my perturb parameters, I change my parameters a little bit. I run a trial. And what I want to efficiently do is compare it to the reward I should have expected to get, given I had the parameters I had a minute ago. Right? That's nothing but a value function right here. OK? So the simplest way to think about an actor critic algorithm is go ahead and use a TD learning kind of algorithm. Every time I'm running my robot, go ahead and work on in the background learning a value function of the system. And simply use that to compare the samples you get from your policy search. Do you guys remember the sort of weight perturbation type updates enough for that to make sense? Yeah? STUDENT: So in this case, that [INAUDIBLE] into your system but just through some expectation. RUSS TEDRAKE: Excellent. That's where you're getting it. From temporal difference learning. In the case of a stochastic system, where both of these are going to be noisy random variables, this actually can be better than running it twice. Because this is the expected value accumulated through experience. Right? And that's what you really want to compare your noisy sample to the expected value. So in the stochastic case, you actually do better by comparing it to the expected value of your update. What you can show by various tools is that comparing to the expected value of your update, which is the value function here, can dramatically reduce the variance of your estimator. OK? You should always think about policy gradient as every one of these steps trying to estimate the change in the performance based on a change in parameters. But in general, what you get back is the true gradient plus a bunch of noise, because you're just taking a random sample here in one dimension of change. If this is a good estimate of the value function, then it can reduce the variance of that update. Question? STUDENT: [INAUDIBLE]. RUSS TEDRAKE: The guarantees of convergence are still intact because you're doing gradient descent. You can actually do, you can do almost anything here. This can be zero. And gradient descent, the policy gradient actually still converges. It doesn't converge very fast. But you can still actually show that it'll, on average, converge. OK? So it's actually quite robust to the thing you subtract out. Because, especially if this thing doesn't depend on alpha, then it has zero expectation. So it doesn't even affect the expected value of your update. So it actually does not affect the convergence results at all. So the convergence results are still intact. But the performance should get better because you have a better estimate of your J. Right? And that should be intuitively obvious, actually. Right? If I did something and I said, how did I do? And [INAUDIBLE] just always said, you should have gotten a four every single time. If I got a lousy estimator of how well I should have done, I'd say, OK. Look, I got a six that time. And he says, you should have had a four. Six, you should have had a four. Then he's giving me no information. And that's not helping me evaluate my policy. Right? If someone said, OK. We did something a little different. I expected you to get a six, but you got a 6.1. Well, that's a much cleaner learning signal for me to use. 
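As a toy sketch of exactly that idea -- this is not code from the lecture; the quadratic stand-in cost and the running-average baseline are assumptions, with the baseline playing the role of the simplest possible critic -- the weight-perturbation update with a learned expected cost looks like this:

import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])           # unknown optimum of the toy cost

def rollout_cost(alpha):
    # Stand-in for running the robot once with parameters alpha (hypothetical toy cost).
    return np.sum((alpha - target) ** 2) + 0.1 * rng.standard_normal()

alpha = np.zeros(2)                       # policy parameters
baseline = 0.0                            # critic: a running average of observed cost
eta, sigma, beta = 0.1, 0.1, 0.9          # learning rate, exploration noise, baseline smoothing
for trial in range(20000):
    z = sigma * rng.standard_normal(2)    # perturb the parameters once per trial
    cost = rollout_cost(alpha + z)
    alpha -= eta * (cost - baseline) * z  # move along the perturbation, weighted by surprise
    baseline = beta * baseline + (1 - beta) * cost   # critic update: expected cost
print(alpha)                              # should approach target

The single rollout per update replaces the paired "run it twice from the same initial condition" scheme: the critic's estimate stands in for the unperturbed trial, which is the whole actor-critic move in miniature.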
STUDENT: [INAUDIBLE] the worst possible-- RUSS TEDRAKE: Yeah, absolutely. So that's the important point is that it's got to be uncorrelated with the noise you add to your system. OK? If it's not correlated with the noise you add in, then it actually goes away in expectation. So the variance can be very bad if you have the worst possible value estimate. But the convergence still happens. Like I said, zero actually works. Right? Which is sort of surprising. Right? If I have a reward function that always returns between zero and 10, and I'm trying to optimize my update, then I would always move in the direction of the noise I add. But I move more often in the ones that gave me high scores. And actually, it still does a gradient descent on the cost function. It's actually worth thinking about that. It's actually pretty cool that it's so robust, that estimator. But certainly with a good estimator, it works better. I don't know how much John told you. But we don't actually like talking about the variance. We like talking about the signal to noise ratio. Did you tell them about the signal to noise ratio, John? STUDENT: I don't remember. RUSS TEDRAKE: Quickly? Yeah. So John's got a nice paper. Maybe he was being modest. John has a nice paper analyzing the performance of these with a signal to noise ratio analysis, which is another way to look at the performance of the update. So that's enough to take the power of the value methods and start putting them to use in the policy gradient methods. OK? The cool thing is, like I said, as long as it's uncorrelated with z, it can be a very bad approximation of the value function. It won't break convergence. The better the value estimate, the faster your convergence is. OK? This isn't the update that people typically use when they talk about actor critic updates. The Konda and Tsitsiklis one has a slightly more beautiful thing. This is maybe what you think of as an episodic update. Right? This is, I just said we start at initial condition x. Maybe I should write an x zero or something. But we just start with initial condition x. We run our robot for a little bit with these parameters. We compare it to what we expected. And we make an update maybe once per trial. That's a perfectly good algorithm for making an update once per trial. There's a more beautiful sort of online update. Right? If you actually want to, let's say you have an infinite horizon thing. Infinite horizon problem. There's actually a theorem, I've debated how much of this to go into. But I'll at least list the theorem for you because it's nice. They call it the policy gradient theorem, which says partial J partial alpha, where in the infinite horizon case typically there's different ways to define infinite horizons. This is typically done in an average reward setting. It can be made to work for other formulations. But I'll just be careful to say the one that I know has a correct proof. The policy gradient can actually be written as-- let me write it out. This guy is the stationary distribution over the state action pairs from executing pi of alpha. This guy is the Q function from executing alpha, the true Q function. And this is the state action pair. And this guy is actually the gradient of the log probabilities, which is the same thing we saw in the policy gradient algorithms. The log probabilities of executing pi. Yeah. Gradient of the log probability. I'm not trying to give you enough to completely get this. But just I want you to know that it exists and know where to find it.
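The board itself isn't captured in the transcript; a plausible reconstruction of the theorem he's listing, matching the standard average-reward statement in Sutton et al. (1999) and the verbal description above, is

\[
\frac{\partial J}{\partial \alpha}
= \sum_{s} d^{\pi_\alpha}(s) \sum_{a} \frac{\partial \pi_\alpha(a \mid s)}{\partial \alpha}\, Q^{\pi_\alpha}(s,a)
= \mathbb{E}_{s \sim d^{\pi_\alpha},\; a \sim \pi_\alpha}\!\left[ \frac{\partial \log \pi_\alpha(a \mid s)}{\partial \alpha}\, Q^{\pi_\alpha}(s,a) \right],
\]

where \(d^{\pi_\alpha}\) is the stationary state distribution under the policy, \(Q^{\pi_\alpha}\) is the true Q function, and the last factor is the gradient of the log probabilities from the REINFORCE-style algorithms.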
And what it reveals is a very nice relationship between the Q function and the gradients that we were already computing in our reinforced type algorithms. OK? And it turns out an update of the form-- this is gradient of the log probabilities again. I'll just write it. That would be doing gradient descent on this if you're running from sample paths. This term disappears if I'm just pulling x and u from the distribution that happens when I run the system that gives me this stationary distribution coefficient for free. OK? And then if I could somehow multiply the true Q function times my eligibility-- this one, I definitely have access to. This one, I can only guess, because I have access to my policy. I can compute that. But this guy, I have to estimate. OK? So if I put a hat on there, then that's actually a good estimator of the policy gradient using an approximate Q function. And in the case where you hold up your updates for a long time and then make an estimate in an episodic case, it actually results in that actual algorithm. OK? Getting to that from with a more detailed explanation is painful. But it's good to know. I think the way you're going to appreciate actor critic algorithms, though, is by seeing them work. OK? So let me show you how I made them work on a walking robot for my thesis. I've already done this. Is it going to turn on? Since I think everybody's here, maybe we should do, while it's booting, I'll do a quick context switch. Let's figure out projects real quick. And then we'll go back. I don't want to run out of time and forget to say all the last details about the projects. Yeah? Somehow, I never remember to post the syllabus with all the dates on there. We're posting it now. But I can't believe I didn't post a long time ago on the website. But I hope you know that the end of term is coming fast. Yeah? And you know you're doing a write up. Right? And that write up, we're going to say that the 21st, which is basically the last day I can possibly still grade them by, the write up as described, which is sort of I said six pages-- sort of an [INAUDIBLE] type format-- is going to be due on May 21 online. OK. But next week, last week of term already, we're going to try to do oral presentations so you guys can tell me-- eight minutes each is what works out to be. You get to tell us what you've been working on. OK? For each project, there are a few of you that are working in pairs. But we'll still just do eight minutes per project. And we have 19 total projects. So I figure we do eight-- sorry, nine-- next Thursday, which is going to be the 14th. Is that right? 5-14. And nine on 5-12, working back here, which leaves some unlucky son of a gun going on Thursday. And the way I've always done this is I have a MATLAB script here that has everybody's name in it. Yeah. Why is it not on here? OK. I have a MATLAB script with all your names in it. OK? And I'm going to do a rand perm on the names. And it'll print up what day you're going. STUDENT: Maybe in fairness to that person, would we all be happy to stay an extra eight minutes on whatever it is? Tuesday? RUSS TEDRAKE: Let's do it this way first. And then we'll figure it out. [LAUGHTER] And yes. So I'm going to call rand perm in MATLAB. And for dramatic effect this year, I've added pause statements between the print commands. [LAUGHTER] So we should have a good time with this, I think. I will, at least. OK. Good. Let's make this nice and big. I actually was going to just use a few slides from the middle of this. 
But I thought I'd at least let you see the motivation behind it, which I know very well. And I'll go through it quickly. But just to see at least my take on it in 2005, which hasn't changed a whole lot. It's matured, I hope. But I've told you about walking robots. We spent more time talking about passive walkers than we talked about some of the other approaches. But there's actually a lot of good walking robots out there. Even in 2005, there were a lot of good ones. This one is M2 from the Leg Lab. The wiring could have been cleaner. But it's actually a pretty beautiful robot in a lot of ways. The simulations of it are great. It hasn't walked very nicely yet. But it's a detail. [LAUGHTER] Honda's ASIMO had sort of the same humble beginnings. As you can imagine, it's not really fair that academics have to compete with people like Honda. Right? I mean, so our robots looked like what you saw on the last page. And ASIMO looks like what it looks like. But it's kind of fun to see where ASIMO came from. So this is ASIMO 0.000. Right? And this is actually the progression of their ASIMO robots. That's the first one they told the world about in '97. Rocked the world of robotics. I was in the Leg Lab, remember. At the time, we were kind of like, oh wow. They did that? Wow. That sort of changes our view of the world. That's P3. And that's ASIMO. Right? Really, really still one of the most beautiful robots around. You know about under-actuated systems. I don't have to tell you that. You know about acrobots. You know walking is under-actuated. Right? Just to say it again-- and I said it quickly-- but essentially, the way ASIMO works is they are trying to avoid under-actuation. Right? When you watch videos of ASIMO walking, it's always got its foot flat on the ground. There's an exception where it runs with an aerial phase that you need a high speed camera to see. But-- [LAUGHTER] It's true. And that's just a small sort of deviation where they sort of turn off the stability of the control system for long enough. And they can recover. Their controller is robust enough in the flat on the ground phase that they can catch small disturbances which are their uncontrolled aerial phase. So for the most part, they keep their foot flat on the ground. They assume that their foot is bolted to the ground, which would make them fully actuated. Right? And then they do a lot of work to make sure that that assumption stays valid. So they're constantly estimating the center of pressure of that foot and trying to keep it inside the foot, which means the foot will not tip. And if you've heard of ZMP control, that's the ZMP control idea. OK? And then they do good robotics in between there. They're designing desired trajectories carefully. They're keeping the knees bent to avoid singularities. They're doing some-- depends on the story. I've heard good claims that they do very smart adaptive trajectory tracking control. I've heard more recently that they just do PD control. And that's good enough because they've got these enormous gear ratios. And that's good enough. OK. So you've seen ASIMO working. The problem with it is that it's really inefficient. Right? Uses way too much energy. Walks slowly. And has no robustness. Right? I've told you that story. Here's one view of everything we've been doing in this class. The fundamental thing that ASIMO is not doing in its control system is thinking about the future. OK?
So if you were taking a reinforcement learning class, you would have started off with talking about delayed reward. And that's what makes the learning problem difficult. Right? I didn't use the words delayed reward in this class. But it's actually exactly the same thing. The fact that we're optimizing a cost function over some interval into the future means that I'm thinking about the future. I'm planning over the future. I'm doing long term planning. And if you think about having to wait to the end of that future to figure out if what you did made sense, that's the delayed reward problem. It's exactly the thing that reinforcement learning folks use to convince other people that reinforcement learning is hard. OK? So the problem in walking is that you could do better if you stopped just trying to be fully actuated all the time. We start thinking about the future. Think about long term stability instead of trying to be fully actuated. OK? The hoppers, there are examples of really dynamically dexterous locomotion. But there's not general solutions to that. That's what this class has been trying to go for. So we do optimal control. We would love to have analytical approximations for optimal control for full humanoids like ASIMO. Love to have it. Don't have it. We're not even close. You know the tools that we have now. But even if we did have an analytical approximation of optimal control-- maybe we will in a few years, who knows-- we'd still like to have learning. Right? All this model free stuff is still valuable because, if the world changes, you'd like to adapt. Right? So my thesis was basically about trying to show that I could do online optimization on a real system in real time. And I told you about Andrew Ng's helicopters. There's a lot of work on the Sony dogs that do open-loop trajectory optimization from trial and error. So Sony came out. And they had this sort of walking gait. Right? And then people started using them for soccer. And they said, how fast can we make this thing go? It turns out the fastest thing they do on an AIBO is to make it walk on its knees like this. And they found that from a policy gradient search where they basically made the dog walk back and forth between sort of a pink cone and a blue cone, just back and forth all day long doing policy gradient. And they figured out this is a nice fast way to go. And then they won the soccer competition. [LAUGHTER] Not actually sure if that last part is true. I don't know who won. But I'd like to think it's true. There are people that do a lot of walking robots. I think I showed you the UNH bipeds that were some of the first learning bipeds. Right? I told you about these all term. Right? So there's large continuous state and action spaces, complex dynamics. We want to minimize the number of trials. The dynamics are tough for walking. Because of the collisions. And there's this delayed reward. So in my thesis, the thing I did was try to build a robot that learned well. That was my goal. I simultaneously designed a good learning system but also built a robot where learning would work really well. Instead of working on ASIMO, I worked on this little dinky thing I call Toddler. Yeah? And I spent a lot of time on that little robot. So you know about passive walking. This is the simplest, this is the first passive walker I built. Passive walking 101 here. So it's sort of a funny story. I mean, I was in a neuroscience lab. I worked with the Leg Lab. But my advisor was in neuroscience. They spent lots of money on microscopes and lenses.
So at some point, I said, can I spend a little bit of money on a machine shop? And I promise it'll cost less than that lens you just spent on that one microscope? And so, he gave me a little bit of money to go down. I was basically in a closet at the end of the hall. My tools looked like things like this. Like, I couldn't even afford another piece of rubber when I cut off a corner. And that's actually a CD rack that I got rid of somewhere. And that's my little wooden ramp that I was using for passive walking. But I built these little passive walkers with a little Sherline CNC mill that walked stably in 3D down a small ramp. Yeah? I don't know why it's playing badly. So those were the first steps. If we're going to do walking, it's not hard. Those feet are actually CNC-ed out. I spent a lot of time on those feet. They have a curvature that was designed carefully to get stability. STUDENT: It's just a simple [INAUDIBLE]. RUSS TEDRAKE: Yeah. Just a pin joint. That's a walking robot. At the time, people had been working on passive walkers for a long time. But nobody had sort of done the obvious thing, which is add a few motors and make it walk on the flat. Nobody had done it. So that's what I set out to do with the learning. Turns out a few people did it around the same time. So we wrote a paper together. But the basic story was we went from this simple thing that was passive to the actuated version. The hip joint here on this robot is still passive. OK? Put actuators in at the ankle. So we had new degrees of freedom with actuators so that it could push off the ground but still keep its mostly passive gait. Actually, it's extruded stock here stacked with gyros and rate gyros and all kinds of sensors. It's got a 700 megahertz Pentium in its belly, which kind of stung. In retrospect, I couldn't make very many efficiency arguments about the robot because it's carrying a computer the size of a desktop at the time. You know? And so, there's five batteries total on the system. Right? Those four are powering the computer. There's one little one in there that's powering the motors. And still those big four drained like 50% faster than the other ones. But it's computationally powerful. Right? I actually ran a little web server off there just because I thought it was funny. [LAUGHTER] And the arms look like I've added degrees of freedom. But actually, they're mechanically attached to the opposite leg. So when I move this, that bar across the front was making that coupling happen, which is important for the 3D walking. Because if you want to walk down the ramp, if you have no arms and you swing a big heavy foot, then you're going to get a big yaw moment. And the robots would often walk like this and go off the side of the ramp. So you put the big batteries on the side and then everything walks straight. And it's good. So in total, there's nine degrees of freedom if you count all the things that could possibly move. And there's four motors to do the controls. So that's under-actuated. Right? We've got the robot dynamics. Oops. I use a Mac now. I used to use Windows. So apparently my u is now O hat. [LAUGHTER] Sorry. That's actually tau. OK. So tau. Yeah. So I had almost the manipulator equations. But I had to go through this little hobby servo. So it wasn't quite the manipulator equation. And the goal was to find a control policy pi that was-- so it was already stable down a small ramp.
And the way I formulated the problem is I wanted to take that same limit cycle that I could find experimentally down a ramp and make it so it worked on whatever slope. So make that return map dynamics invariant to slope. And to do that, you need to add energy. And you need to find a control policy. So my goal was to find this pi, stabilize the limit cycle solution that I saw downhill to make it work on any slope. So this was just showing that Toddler, with its computer turned off, its motors turned on-- actually, in this one even the motors are off. And there's just little splints on the ankle. Just showing that it was also a passive walker. And showing that I dramatically improved my hardware experience by getting a little ProForm treadmill that was off of the back lot. And I painted it yellow and stuff. So this thing would actually walk all day long. It would. So it's a little trick. At the very edge, in the middle, there's nothing going on. But at the very edge of the treadmill, I put a little lip there. So if it happened to wander itself over to the side, it had that lip and walked back towards the middle. OK? And I put a little wedge on the front and on the back so it sort of would try to stay in the middle of the treadmill. And that thing would just walk all day long. It would drive you crazy hearing those footsteps all day long. [LAUGHTER] But it worked. It worked well. It still works today, most of the time. So I used the words policy gradient. But this was really an actor critic algorithm. So I used a linear-- it's actually a barycentric grid in phi-- but a linear function approximator. And the basic story was policy gradient. OK? So it was something in between this perfectly online at every dt, make an update. And it was not quite the episodic run a trial, stop, run a trial, stop. The cost function was really a long term cost. But I did it once per footstep. OK? So every time the robot literally took a footstep, I would make a small change to the policy parameters. See how well it walked. See where it hit the return map. And then change the parameters again. Change the parameters again. And every time that foot hit the ground, I would evaluate the change in walking performance and make the change in W based on that result. OK. I'll show you the algorithm that I used in a second, which you'll now recognize. So the way to think about that sampling in W is that you're estimating the policy gradient. And you're performing online stochastic gradient descent. Right? So at the time, the way I described the big challenge was, what is the cost function for walking? And how do you achieve fast provable convergence, despite noisy gradient estimates? You guys know about return maps. This is my picture of return maps from a long time ago. So this is the Van der Pol Oscillator. This is the return map here. The important point here, so these are the samples on the return map. This is the velocity at the n-th crossing versus the velocity at the n plus 1st crossing. The blue line is the line of slope one. So it's stable, the Van der Pol Oscillator, because it's above the line here and below the line there. And you can evaluate local stability by linearizing and taking the eigenvalues. We've talked about these things. But I don't know if I made the point nicely before. That if you can pick anything, if you want your return map to look like anything in the world, if you could pick, what would you pick? You'd pick a flat line. Right? That's the deadbeat controller. I used the word deadbeat.
So that's where my cost function came from. The cost function that tried to say that the robot was walking well-- my instantaneous cost function-- penalized the squared distance between my sample on the return map and the desired return map, which is that green line. OK. So basically, I tried to drive the system to have a deadbeat controller. And I did, and there's limits. There's actuator limits that are going to mean it's never going to get there. But my cost function was trying to force that. Every time I got a sample, it was trying to push that sample more towards the deadbeat controller. Then basically, it worked. It worked really well. The robot began walking in one minute, which means it started getting its foot cleared. So the first thing, if I set W equal to zero, it was configured so that when the policy parameters were zero, it was a passive walker. So I put it on flat. I picked it up. I picked it up a lot. And I drop it. It runs out of energy and stands still. Because it was just a passive walker, it's not getting energy from-- it's only losing energy. OK? So now, I pick it up. I drop it. And every time it takes a step, it's twiddling the parameters at the ankle a little bit. OK? So it started going like this a little bit. And then after about a minute of dropping it-- and quickly, I wrote a script that would kick it into place so I stopped dropping it-- OK. So I had a little script so it would go like this. And in about one minute, it was sort of marching in place. OK? And then I started driving it around. I had a little joystick which said, I want your desired body to go like this. And it started walking around. And in about five minutes, it was sort of walking around. I'll show you the video here in a second. And then, I said 20 minutes for convergence. That was conservative. Most of the time, it was 10 minutes. It would converge to the policy that was locally optimal in this policy class. But it worked very well. And I just sort of sent it off down the hall. And it would walk. OK? And doing the stability analysis showed the learned controllers were considerably more stable than the controllers I designed by hand, which I spent a long time on, too. And now, here's a really key point. OK? So you might ask, how much is this sort of approximate value function, how important is that? That's sort of the topic for today. Right? How important is this approximate value function? Well, it turns out, if I were to reset the policy, if I just set the policy parameters to zero again but keep the value function from the previous time it learned, then the whole thing speeds up dramatically. So instead of converging in 20 minutes, the thing converges in like two minutes. OK? So just by virtue of having a good value estimate there, learning goes dramatically faster. And it's only when I have to learn them both simultaneously that it takes more like 10 or 20 minutes. And it worked so fast that I never built a model. I never built a model for the robot. Actually, I tried later. It's tough. The dynamics of that-- I mean, it's a curved foot with rubber on it, right? It was just very hard to model accurately. And I didn't need to. It worked. It learned very quickly. Quickly enough that it was adapting to the terrain as it walked. All right. So here's the Poincaré maps from that little Toddler robot projected onto a plane. So I picked it up a bunch of times. I tried to make it just walk in place here.
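A sketch of that once-per-footstep loop -- this is a toy reconstruction, not Toddler's actual code; the scalar return map, the feature grid, and the gains are all invented for illustration -- might look like:

import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(0.0, 1.5, 9)        # barycentric-style grid over the return-map state

def features(x):
    f = np.exp(-(x - centers) ** 2 / (2 * 0.2 ** 2))
    return f / f.sum()

def toy_return_map(x, u):
    # Stand-in for one real footstep: x is the Poincare sample, u the ankle push-off.
    return 0.6 * x + 0.1 + u + 0.01 * rng.standard_normal()

W = np.zeros(9)                            # actor: starts as the passive walker (u = 0)
V = np.zeros(9)                            # critic: value estimate on the same grid
x_star, gamma = 0.8, 0.2                   # deadbeat target and the heavy discount from the talk
eta, eta_v, sigma = 0.5, 0.2, 0.05
x = 0.2
for step in range(5000):                   # one update per footstep
    phi = features(x)
    z = sigma * rng.standard_normal(9)     # persistent parameter noise for this step
    u = (W + z) @ phi
    x_next = toy_return_map(x, u)
    g = (x_next - x_star) ** 2             # squared distance from the desired return map
    td = g + gamma * (V @ features(x_next)) - V @ phi   # surprise relative to the critic
    W -= eta * td * z                      # actor: weight-perturbation step, weighted by surprise
    V += eta_v * td * phi                  # critic: TD(0) update
    x = x_next

Under these made-up dynamics the actor learns to push each Poincaré sample toward the x_star fixed point while the critic learns the discounted one-step cost -- the same structure as the footstep-by-footstep updates described above, just on a one-dimensional stand-in for the robot.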
Before learning, it was obviously only stable at the zero, zero fix point. It was running out of energy on every step and going to zero. After learning, this is what the return map looked like. OK? So it actually could start from stopped reliably. Right? This is actually far better than I expected it to do. If you do your little staircase analysis of this, so it gets up to the fixed point in two steps or three steps for most initial conditions. Right? And from a very large range of initial conditions, as large as I care to sample from. So you could go up there-- and people did actually. We had a little-- after we got it working, the press came. And then everybody was asking me, the reporters were saying, can I have my kid play with the robot? Or can we put on a treadmill at the gym? Rich Sutton put his fingers under it and was like playing with it at dips one time. So it got disturbed in every possible way. And for the most part, it worked really-- I mean, so if you give it a big push this way, it actually takes energy out and comes back and recovers in two steps. You stop it. It goes back up. And it recovers. And in the worst case, I had some demo to give or something. And I took it out of the case. It had traveled through the airport. The customs people always asked me if it had commercial value. It doesn't have commercial value. But it broke somewhere in the travel. And I didn't realize it. I picked it up and headed to do its demo. And it's going like this. And it's sort of walking. And it looks a little funny. And people are so relatively happy with it. Turns out the ankle had completely snapped. But in just a few steps, it actually found a policy that was walking with a broken ankle. [LAUGHTER] So it works. It really worked. It really did work. I'm not sure-- I mean, yeah. It really worked. OK. So here's the basic video. This was the beginning. I was paranoid. So I had pads on it to make sure it didn't fall down and break. This is the little policy that would kick it up into a random initial condition like that. And now it's learning. It falls down. I don't know why it's playing so badly. This is after a few minutes. It's stepping in place. It's walking. And then I started driving it around. I say, OK. Let's walk around. And it stumbles. But really, really fast, it learned a policy that could stabilize it. Right? And after a few minutes, this is the disturbance tests. I actually haven't shown these in a long time. It's really robust to those things. And then you can send it off down the hall. And now, this is a little robot with big feet admittedly. But you know, it's like the linoleum in E25-- this is in E25-- was really not flat. I mean, it's sort of embarrassing to tell people, look at the floor. It's not flat. But for that robot, I mean there's huge disturbances as it walked down the floor. But the policy parameters were changing quite a bit. You could walk off tile onto carpet. And in a few steps, it would adjust its parameters and keep on walking. This was it walking from E25 towards the Media Lab, if you recognize that. OK. So one of the things I said is that one of the problems with the value estimate is you make a small change in the value function, you get a big change in the policy. Theoretically, no problem. In practice, you don't probably want that. Right? One of the beautiful things about the policy gradient algorithms is you make a small change to the policy. It doesn't look like the robot's doing crazy things. 
So every time, everything you saw there, it was always learning. Right? Learning did not look like a big deviation from nominal behavior. I never turned off learning with this. Right? It turned out in the policy gradient setting, I could add such a small amount of noise to the policy parameters, which was a barycentric grid over the state space, such a small amount of noise that you couldn't even tell it was learning. Right? But it was enough to pull out a gradient estimate and keep going. So it didn't look like it was trying random things. But then, if it walked off on the carpet and did a bad thing, it would still adapt. That was something I didn't expect. It just was a very nice sort of match between the amount of noise you had to add and the speed of learning. The value estimate was a low dimensional approximation of the value function. Very low. Like ridiculously low. One dimension. Right? But it was sufficient to decrease the variance and allow fast convergence. I never got it to work before I put a value function in. And here's this question. So I ended up choosing gamma to be pretty low. Gamma was 0.2. I did try it with zero at times. What did that mean? So that's how far I carried back my eligibility, which means how many steps ahead I'm looking. So you could think of it as a receding horizon optimal control. How many steps ahead do you look? Right. Except it's discounted. OK? So 0.2 is really heavily discounted. Really, really heavy. It means I was basically looking one step ahead and not worrying about things well into the future, which made my learning faster but meant I didn't take really aggressive corrections that were multi-step sort of corrections. Only very rarely, if the cost really warranted it. OK. So that was always something I thought would be cool if I could get that higher and show a reason why multi-step corrections made it a lot more stable. STUDENT: Did it not work as well? RUSS TEDRAKE: It didn't learn as fast. At some point, I decided I'm going to try to make the point that these things can really learn fast. And so, I started turning all the knobs. Simple policy, simple value function, low look ahead. And it worked. But it was fast. STUDENT: Is gamma used [INAUDIBLE] the same as lambda? RUSS TEDRAKE: It's a gamma in a discounted reward formulation. STUDENT: So there is no eligibility trace? RUSS TEDRAKE: The eligibility trace for the reinforce in a discounted problem is the same as the discount factor. So in my lab now, we're doing a lot of these model based things. We're doing LQR trees. We're doing a lot of things. In fact, the linear controls are working so beautifully in simulation that Rick Corey, one of our guys, started joshing me. He's like, why didn't you just do LQR on Toddler? And he was giving me a hard time for a long time. Now he's asking about model free methods again because it's really hard to get a good model of very underactuated systems. I mean, the plane that I'll tell you about more on Thursday, our perching plane we've seen quickly, has one actuator. And depending on how you count the elevator, eight degrees of freedom roughly. And sorry, eight state variables. And it's just very, very hard to build a good model for that that's accurate for the long trajectory, the trajectory all the way to the perch, such that LQR could just stabilize it. We're trying. But there's something sort of beautiful about these things that just work without building a perfect model. OK? The big picture is roughly the class you saw.
This is actually, I had forgotten about this. This was one of my backup slides from before. But this is the basic learning plot, which is just one average run here. If I reset the learning parameters, how quickly would it minimize the average one step error? And it was pretty fast. And then actually, that's a lot of steps. That's more than I remember. But this takes steps once a second. And so, in a handful of minutes, it does hundreds of steps. OK. And this is the policy in two dimensions that it learned. So if you think about a theta roll and theta roll dot, I don't know if you have intuition about this, but the sort of yin and yang of the Toddler was that you wanted to push when you're on this side of the phase portrait and push with this foot when you're on this side of the phase portrait. I did things like I mirrored, the left ankle was doing the inverse, the mirror of the right ankle. Right? So everything I could do to try to minimize the size of the function I was learning. And that's actually sort of a beautiful picture of how it needed to push in order to stabilize its gait. Any questions about that? All right. So that's one success story from model free learning on real robots. It learns in a few minutes. There's other success stories. I'll try to talk about more of them on Thursday. But at this point, I've basically given you all the tools that we talk about in research to make these robots tick. Their state estimation, I didn't talk about. There are more ideas that we didn't talk about. But I've given you a pretty big swath of algorithms here. So really I want to now hear from you next week. And I want to give you a few more case studies so you feel that these things actually work in practice. And you can go off and use them in your research. Yeah, John? STUDENT: If there is a lot of stuff that's been published and a lot of interest [INAUDIBLE] stochasticity, then it would make sense to have a large gamma [INAUDIBLE]. Right? There'd be no reason, it would be a faulty way of trying to interpret that data. Right? RUSS TEDRAKE: Yeah. I mean, I think that Katie's stuff, the metastability stuff, argued that for most of these walking systems, it doesn't make sense to look very far into the future anyways. Because the dynamics of the system mix with the stochasticity, which I think is the same thing you just said. Yeah. Yeah. STUDENT: The general dimensions of the robot [INAUDIBLE] when you're designing that robot, thinking about this model free learning when you started? [INAUDIBLE] helps it be a little more stable. RUSS TEDRAKE: Good. So I'm glad you asked that. So it's definitely very stable, which was experimentally convenient. Right? Because I didn't have to pick it up as much. But it actually learns fine when it starts off unstable. So the way I tested that is, if the ramp was very steep, then it starts oscillating and falls off sideways. So just to show that it can stabilize an unstable gait-- it's like, oh, the same cost function. It's absolutely no different. I showed that it stabilized that. And it just meant I had to pick it up when it fell down a bunch of times. But the same algorithm works for that. So it's not really the stability that I was counting on. That was just experimentally nice. The big clown feet and everything were because that's how I knew how to tune the passive gait. Right? In the passive walkers we work on these days, you always see point feet. Because I care about rough terrain now. And those clown feet are not good for rough terrain.
So we could try to get rid of that. STUDENT: You're saying if you wanted to scale that out, you had mentioned the [INAUDIBLE] robots [INAUDIBLE] would you have the same success [INAUDIBLE]? RUSS TEDRAKE: Got you. I think it's fine. I think that it would look ridiculous that big maybe. And I wouldn't scale the feet quite that big. Right? That would be ridiculous. But I don't think there's any scaling issues there really. It's the inertia of the relative links that matters. And I think you can scale that properly. At some point you're going to just look ridiculous if you don't have knees and you're that big. So yeah. Energetically, the mechanical cost of transport, if you just look at the power coming out of the batteries-- sorry, actually the work done by the actuators, the actual work done by the actuators-- it was comparable to a human, 20 times better than ASIMO. But if you plot the current coming out of the batteries, it was three times worse than ASIMO or something like that. Because it's got these little itty bitty steps and a really big computer. And that was, in retrospect, maybe not the best decision. Although I never had to worry about computation. I never had to optimize my algorithms to run on a small embedded chip. STUDENT: Can you talk a little bit about the [INAUDIBLE]? RUSS TEDRAKE: You can actually see it here. So this is the barycentric policy space-- those were the parameters. Yeah. So it was tiled over 0.5, 0.5 roughly. And you could see the density of the tiling there. Yeah. And that was trained. So there was no generalization. So the fact that those looked like sort of consistent blobs was just from experience and eligibility traces carrying through. But those are not constrained by the function approximator to be similar more than one block away. There's literally a barycentric grid there. And then the value estimate was at theta equals zero, the different theta dots. It was just the same size tiles. But a line just straight up the middle. STUDENT: So your joystick would just change theta? Or not the theta? But it would just change the position. RUSS TEDRAKE: The joystick was, so the policy was mostly for the side to side angles, which would give me limit cycle stability. And then I could just joystick control the front to back angles. So this thing, we could just lean it forward. It starts walking forward. Even uphill. That's fine. You lean back, it starts walking back. It was really basically this. Yeah. If you want it to turn, you've got to go like this. And it would do its thing. Right? So that was it. It wasn't sort of highly maneuverable. Yeah. STUDENT: It seems like there are some [INAUDIBLE] to step to step, having each step be like-- RUSS TEDRAKE: A trial. STUDENT: So a section on your Poincaré map. RUSS TEDRAKE: Yeah. STUDENT: I don't know if that would work for flapping. RUSS TEDRAKE: Absolutely. STUDENT: If [INAUDIBLE] up or down is a similar kind of thing. RUSS TEDRAKE: I think it would. We were thinking about it that way. So you're absolutely right. So it was nice to be able to, it was very important to be able to add noise by sort of making a persistent change in my policy. So this whole function, adding noise meant this whole function would change a little bit. And then I would stay constant for that whole run. And then change a little bit. If you add noise every dt, for instance, then you have to worry about it filtering out with motors and stuff. This was actually a very convenient discretization in time on the Poincaré map. Yeah.
So I think that was one of the keys to success. John? STUDENT: The actuators you took, were they pushing off the sort of stance foot? [INTERPOSING VOICES] RUSS TEDRAKE: Or pulling it back up. But yes. STUDENT: So you just actuated the stance foot. That was the actuator [INAUDIBLE].. RUSS TEDRAKE: The units were, I guess they were scaled out. I did actually do the kinematics of the link. So it was literally a linear command in-- those are probably meters or something in the-- no. It's way too big. [INTERPOSING VOICES] STUDENT: Touching down, but touch down at the same angle? RUSS TEDRAKE: No. The swing foot was also being controlled. So it would get a big penalty actually if it was at a weird angle when it touched down. It would hit and it would lose all its energy. But that was free to make that mistake. STUDENT: So you have two actions then? The address to [INAUDIBLE]? RUSS TEDRAKE: No. Well, it's one action. But the policy is being run on two different actuators at the same time. So one of them is over in this side of the state space. And the other one's over in the side of the state space at the same time. STUDENT: OK. So it just used different data. But they're-- OK. RUSS TEDRAKE: Yeah. So just it was learning on both of those sides at the same time. I'm a big fan of simplicity. It's easy to make things that work. I mean, I think it's a good way to get things working. So that's what the test will be as we go forward in how complex we can make these things. But in sort of the simple case, they work really well. Great. OK. So thanks for putting up with the randomized algorithm. We'll see you on Thursday. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_5_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, welcome back. Sorry for the technical blip there. OK, so I guess lecture two. I challenged you. We talked about the phase space of the simple pendulum, and I challenged you to come up with a simple algorithm. I guess I didn't say simple, but I challenged you to come up with an algorithm to try to, in some sort of minimal way, change the phase plot of this system so that the fixed points that used to be unstable become stable and vice versa. So today we're going to do that. I don't know, did anybody do that for fun? Yeah, OK. [LAUGHTER] OK, so today we're going to do that. So yeah, the question is, can we use optimal control now, numerical optimal control, to reshape these dynamics, OK. And I want to start by doing sort of an evil thing but something that's going to make thinking about it a lot easier. We're going to discretize everything, OK. So let's start by-- we're going to discretize state, actions, and time, OK. So I'm actually going to take my vector of x, which lived on the real numbers, and start thinking about an integer number of states. I'll say what I mean by that. OK. And I'm going to take my actions, my continuous action space, which I've been thinking of as u, and I'm going to turn that into a discrete state space-- a discrete action space. And I'm going to take time and turn it into some integer, discrete time, OK. So and I'm going to try to be-- throughout the lectures, throughout the notes, I tried to be very, very careful to use X and U and time for continuous things and S for states, A for actions, N for discrete things. So we might find ourselves in situations where we have continuous state and discrete actions or some other combination, but that should be the code. OK, so if we want to-- if we're willing to discretize state and time, then maybe one way to think about that on this picture is by thinking of every one of these-- this was my quick cartoon of the phase plot of the simple pendulum. Let's think about identifying each one of these possible states in the phase portrait as a particular state, OK. These little nodes, possible states we can live in. And through actions, we can transition to different states, if you see what I'm doing without drawing 100,000 circles here. So let's tile the state space with discrete states. You could also think of it as drawing a grid and calling each box in the grid a state. And what that allows us to do-- we're also discretizing actions, so we have a finite number of possible options coming out of each state. It allows us to turn the continuous time optimal control problem into a simple graph search problem, OK. Graph search, we know how to do well. We're really good at that in computer science. OK, so let's see how far we can get first by just thinking about this very non-linear, very dynamic thing on a graph search, OK. So we're going to do numerical optimal control. This is-- in particular, when people talk about the dynamic programming algorithm, they're often talking about discretizing state and actions. And we're going to use the standard optimal control formulation.
I'm going to start with a finite horizon and say that my cost, starting from state x at time t, is h of x at the final time plus an integral of the running cost. All right, this is the continuous time optimal control. And I'm going to start thinking of that now as being in state S at integer time n and having some final cost h on S plus a sum from n equals 0 to N of g of S, A, OK. And my dynamics now are going to be of the form S-- maybe I should even write more explicitly, S of n plus 1 is a function of S of n, A of n, OK. OK, so again, dynamic programming exploits the fact that you can write this in a recursive form. So if I want to find the optimal cost to go, which I'll call J star, at the final time, it's just h of S, right. And going backwards in time, this is just going to be the min over a of g of S, a plus h of S prime, where S prime is f of S, a. Right? I'll get one-- at N minus 1, I get one of these, and then I get the final cost, OK. And going backwards, we have this recursive form, which is min over a of g of S, a plus the cost to go from S prime at n plus 1, using that same S prime. OK, I want to make sure you see why that is, why this-- this is magical, right? The fact that I can summarize my optimal cost to go by doing a min over a single action, that's really magical. Just to make that extremely clear, think about J star at N minus 2, let's say. So I have to minimize over two actions. I have to minimize over, let's say I'll call them a1 and a2. I have two steps left to go. So I have to minimize g of S, a1 plus g of S prime, a2 plus h of S double prime. That's my minimization that I'm trying to solve in order to find the optimal cost to go, where S prime is f of S, a1, and S double prime is f of S prime, a2. I'm just expanding this sum for the last two g's. And the cool thing is that, because of this additive form of g, this term doesn't depend at all on my decision a2. I'm given a current state S, and I have to decide my action a1. Nothing about this term depends at all on a2, OK. In contrast, this one does depend on a1, because S prime depends on a1. This one depends on a1 and a2. This one certainly depends on a2. You see what I'm saying? So I can rewrite this as min over a1 of g of S, a1 plus min over a2 of g of S prime, a2 plus h of S double prime. I could just move that min inside to the only terms that matter. This is intended to be a moment of clarity, and I don't see clarity on your faces. Does that make sense, that this doesn't depend on a2? I know-- a1 is my action at time N minus 2. a2 is my action at N minus 1. The action I take next time has absolutely no effect on my current state or my current action. So the great thing is this here is just-- this whole term right here is just J star of S prime at, I'm calling it, N minus 1 here. So it's really the fact that we're taking this min over this additive form that allows us to write the recursive statement like this that says, the best thing I can do with additive cost and all these things is to, in a single step, take the action which minimizes my one step cost combined with the cost I'm going to get from being in the state I transition to for the rest of time. It's a magical thing. At whatever time I'm at, I only have to think one action ahead if I've already got my J star computed, OK. Simultaneously, it's saying that I can compute the optimal cost to go. I could compute the optimal-- I know exactly how much cost I'm going to incur from any state, given I follow the optimal policy, if I just work backwards in time.
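A minimal sketch of that backward recursion, assuming a tabular problem with nS discrete states and nA discrete actions; the names f, g, h here are placeholders standing in for whatever transition table, one-step cost, and final cost you have, not anything from the lecture's own code:

```python
import numpy as np

# Finite-horizon dynamic programming, tabular case.
# f[s, a] -> index of the next state, g[s, a] -> one-step cost, h[s] -> final cost.
def dynamic_programming(f, g, h, N):
    nS, nA = g.shape
    J = h.copy()                        # J*(s, N) = h(s)
    policy = [None] * N
    for n in range(N - 1, -1, -1):      # march backwards in time
        Q = g + J[f]                    # Q[s, a] = g(s, a) + J*(f(s, a), n + 1)
        policy[n] = np.argmin(Q, axis=1)   # pi*(s, n) = argmin_a Q[s, a]
        J = Q.min(axis=1)                  # J*(s, n)  = min_a   Q[s, a]
    return J, policy
```

Because the cost is additive, each backup only ever has to look one action ahead, which is exactly the "moment of clarity" above.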
And when I'm in time N minus 1, I don't have to think about the actions I was going to take beforehand. As long as I know what state I'm in, because that state encompasses every action I've taken in the past, that state contains all the information, all I have to think about is the last action I'm going to take to decide my optimal policy one step from the end of time, OK. So the fact that you can solve these things backwards in time, that's the principle of optimality, OK. Ask questions if you don't like what I said. I think that the graphics that are about to come are going to make things clear, too. OK, so what does that mean? What are the implications of that? All right, for the additive costs, I can compute J star recursively from the end of time, which, in this case, is N back to 0. And the optimal action, the optimal policy, which I then want to call pi star, which could in general depend on the time, is just argmin over a. It's the action which minimizes that same expression. So I can compute J star recursively backwards in time, and if I know J star, then I essentially know my optimal policy. I know the best action, OK. So but for this reason, the fact that the cost to go, the cost I expect to incur given I'm in state S and I'm running from time N, the cost to go becomes a very central construct in optimal control. All right, so part of the goal for today is to give you some more intuition about J star, OK, because it's actually a very intuitive thing, but you can be lost, I think, in the equations. So let's give you more intuition about that. I'm going to do that by getting a little bit more abstract, well, simultaneously abstract and concrete. AUDIENCE: [INAUDIBLE] PROFESSOR: Because it's finite horizon. AUDIENCE: You know that the reward function is dependent on time. PROFESSOR: I haven't included that. You can make the reward function depend on time. But even if the reward function, or cost function in my world, is-- there's a difference between optimal control people and reinforcement learning people. The optimal control people are pessimists. Everything's a cost. And the reward reinforcement learning people give rewards out. So I guess I'm a pessimist. So yeah, so my cost is actually not a function of time. I could have made it that. But because there's a finite horizon time, that means my policy and my cost to go function still depends on time. Because if time ends in one step, I'm going to do something different than if time ends arbitrarily far in the future. OK. So we're going to-- my goal here is to get intuition about cost to go and dynamic programming, which I'm often going to call DP, OK. And I'm going to do it with the grid world example. This is right out of the reinforcement learning books. OK, so in that pendulum phase plot, I discretized the state space, and I started talking about transitions between states, OK. I can make that even more transparent by saying, OK, now you're a trashcan robot in a room. You're going to be in one of these tiles. You're on one of these blocks, so there's a finite, discrete state space, OK. I won't draw a trashcan robot, but let's say I'm here. And when you're here, you have five discrete actions you can take. You can move up, you can move right, down, left, or you can sit still. OK. And discrete states and discrete time. Every time you take an action, in the next time step, you'll be in the next grid box. OK. Let's say I've got a goal state somewhere in the world. 
Well, we can formulate plenty of good optimal control problems to get us to that goal state. So plenty of good cost to go functions in the additive form-- let's say I want to do minimum-- I want to get there in the minimum time. Well, then I can just set g of S, a to be-- to actually have it in units of time, I should put a 1 if S is not at the goal and 0 if S is in the goal, OK. And I don't actually care about actions. I have five discrete actions I can pick from whenever I'm in a state. If I'm not at the goal, I'm going to incur a cost of 1. So it's in my best interest as a trashcan robot to get to the goal. If I'm minimizing that cost, I'm going to get to the goal as fast as possible. And actually, the units, the cost to go will tell me the number of steps to get there. AUDIENCE: [INAUDIBLE] PROFESSOR: Right. So I'm going to do that graphically. But let's say there's a finite horizon now, but this is how I'm going to get to infinite horizon, so. And let's say that h of S is just 0. I don't really care where I am at the end of time. Or I could have h of S be this same function. That would be fine, too. OK. How's it going to look? What is J-- well, let's be specific about h. Let's make h actually be the same as g here. So I'll say it's g of S with the 0 action. So since this doesn't depend on actions, it doesn't matter. Let's say h is the same function as g there. So what does my cost to go look like at time N? My optimal cost to go given I'm in some state, and it's time N. This is a function over S, and I'm at time N. And what is that function? AUDIENCE: g. PROFESSOR: Yeah. Well, if I'm not in the goal, it's that. It's the same as g, or h in this case. OK. What does J star of S at N minus 1 look like? Now I have time to take one action, OK. So-- AUDIENCE: One step away from the goal is 1. If you're on the goal, it's 0, but anywhere else, it would just be 1. PROFESSOR: Awesome. Right? If I'm on the goal, I can do nothing, incur zero cost to go. So the best thing for me to do if I'm on the goal is to stay there, OK. If I'm a long way from the goal, then I'm not going to get to the goal in two steps, so I'm going to incur two units of cost. I'll say loosely far from goal. And then there's this in-between place, which is if I'm one step away from the goal, I can take the right action and get there and incur only one unit of cost. All right, what's it going to be-- what's J star of S at N minus 2 going to be? It's going to be 3, 2, or 1, depending on how closely-- if I'm near the goal, I've got a chance of getting to the goal and stopping this insane adding of cost. Stop the madness. Get to the goal. Otherwise, I'm going to just incur the cost no matter what I do, OK. So what's the optimal policy? If I'm on the goal, what's the best-- the best action to take is to sit still. If I'm one step away from the goal, the best thing to do is to move to the goal, whether it's up, down, left, or right. What if I'm out here? What's the best thing for me to do? Doesn't matter at all. I can do anything I want. I'm still going to incur the cost, so you might as well just choose your policy at random, OK. So optimal policies aren't necessarily unique. Sometimes multiple actions are equally optimal. OK, here's your world. I have put the goal always at 2,3, just randomly, OK. You are a blue star. The goal is a red asterisk. It's a-- take you back to the '80s or something, video games. OK.
So let's just very simply-- I'm going to run this value iteration algorithm on it, OK, and I'm going to plot, at every step of the algorithm, the cost to go, OK, and the policy, actually. So it's not going to be-- I have my more general value iteration code that's not going to be quite as beautiful, but-- [TYPING] OK. Well, that went pretty fast. There was supposed to be a pause there. Let me get that-- add a pause in there quick, but-- OK. Here is J at time capital N. My cost function is 0 if I'm at the goal, 1 everywhere else, OK. My policy, it doesn't matter what I choose. I've actually chosen to do-- I didn't put this-- I didn't give you a key, but 0 is the do nothing action, OK. So this just has do nothing everywhere. This is the lazy policy, I guess. And the cost it's going to get is no cost if it's at the goal, one cost everywhere else. OK, if I'm now computing J of S at N minus 1, you guys told me what that is. That says it's 0 here, it's 1 here, it's 2 everywhere else, right. And the co-- now you can see my key here. Orange must mean move down, red must mean move to the left, green must mean move to the right, and so on, OK. The value-- this backwards propagation, this dynamic programming propagation is a very beautiful and intuitive thing, OK. Every time I take a step, a few more states become reachable. In that amount of time, I can get to the goal. The resulting cost to go function is simple. It's just the distance, the number of cells from the goal, yeah. And the policy, again, it's not unique. But this one, just because of the ordering I chose, and I just do a min over the actions, says it's always going to move down in that orange area, it's always going to move up in the blue area, and it's just going to-- so that's one of the optimal policies, all right. Now Alborz asked a good question, what's my horizon time? So I'm actually just working backwards from some arbitrary capital N and just going backwards in time further and further. But it turns out for this problem, and for many problems, everything converges, OK. After some amount of time, the optimal cost to go stops changing, and I know that's my optimal policy. Walk down. And this is too simple. This is painfully simple. But I think that intuition is going to take us a long way with the value methods, OK. AUDIENCE: So, Professor? PROFESSOR: Yeah. AUDIENCE: In this example, the optimal policy is not unique. PROFESSOR: The optimal policy is not unique. The guy could have just as well gone left first and then down. So how does that manifest itself in those equations? There's multiple min over a's. There's multiple a's that give me the same J star of S at n plus 1, whatever. Multiple actions give me the same long-term cost, so I could equally pick any of them, yeah? OK, to make a more careful analogy to the more continuous world, that was a perfectly good minimum time problem. I could have equally well chosen a different cost function. Oh wait, let's put the obstacles back in, all right. So the cool thing is obstacles aren't going to make it any harder for us to solve this problem in our head. It's a nice observation that they don't actually make it any harder for the algorithm to solve it either. And that's a general principle. That's something I definitely want you to get out of this course, is that when we're doing analytical optimal control, every piece you add to the dynamics makes things cripplingly difficult. And so you have to stay with these very simple dynamical systems.
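A sketch of the grid-world minimum-time backup that the demo is running, under assumed conventions: a 6x8 grid, goal at (2, 3), five actions (stay, up, down, left, right), and moves that would leave the grid keeping you in place. This is not the lecture's MATLAB code, just the same propagation:

```python
import numpy as np

rows, cols, goal = 6, 8, (2, 3)
moves = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

J = np.ones((rows, cols)); J[goal] = 0.0    # h(s): 1 off the goal, 0 on it

for sweep in range(rows + cols):            # enough backups to converge
    Jnew = np.empty_like(J)
    for r in range(rows):
        for c in range(cols):
            g = 0.0 if (r, c) == goal else 1.0       # one-step cost
            best = min(J[max(min(r + dr, rows - 1), 0),
                         max(min(c + dc, cols - 1), 0)]
                       for dr, dc in moves)          # min over actions
            Jnew[r, c] = g + best                    # Bellman backup
    J = Jnew
# J now counts steps-to-goal from every cell: the cost to go that
# propagated outward from the goal, one ring per backup, in the demo.
```

Adding an obstacle is just one more line in the one-step cost (e.g. g = 10.0 on obstacle cells), which is the point made just below.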
OK, the computational algorithms are actually pretty insensitive to how complex the dynamics are. They're going to break down in a different way, OK. So there's these different tools for different-- that are good for different problems. And there's a lot of problems which are very amenable to these computational tools that people aren't-- I mean, you can solve brand new problems pretty easily with some of these algorithms. OK, so let's think of another cost function. Let's do the equivalent of a quadratic regulator. I just had that whole spiel and forgot to run the-- the obstacles example. OK, so now I'm just going to put in some obstacle. And if you see-- whoops, sorry. If my state-- OK, so I promised to use S and a in my notes and on the board, but I guess I didn't do it in my code. Sorry. So if x equals the goal, then the cost to go is-- the cost, the instantaneous cost, is 0. Otherwise it's 1. If there's an obstacle, I just give it a high cost of 10. So if I put that obstacle function in there, then I've got my same 0 cost for the goal. I've got a 1 cost almost everywhere, but I've got a 10 there. That's my cost function. And as I back up, a couple of things happened. First, this thing quickly figures out how to get off that obstacle as fast as possible and decides not to go there anymore. And then as you back up the cost function, the colors are a little more muted because I have this high cost here. But the same basic algorithm plays out until it covers the space. And my s-- oh, that was a-- [LAUGHTER] --lucky initial condition. OK, good. Now he has to go around. Wow. OK, so adding an obstacle in the grid world is clearly trivial. It's nice to think that adding an obstacle when I get back to the pendulum would be trivial, because that's not trivial for most of your other control derivations. OK, so minimum-- the quadratic regulator now. Now here, the cost I want, g of x, u, in the continuous world is some x minus x goal transpose Q x minus x goal. And you have to map that down into the integer world, the states. There's not a particularly clean way to write that, so I'm just going to allow you to imagine that it's trivial to code. Imagine that transition. OK, now my cost function is just penalizing me for being away from the goal. But it's not a 0 and 1. It's penalizing me more smoothly for being away from the goal. So what's the best thing to do? The best thing to do is still to get to the goal as quickly as possible. It actually doesn't really change the optimal policy here, but it's a more smooth cost function, which, in some problems, gives you nice properties. It turns out the optimal policy is more unique in this case. But that would have been optimal for the minimum time problem, too. And it converges nicely and goes to the goal in the same way, and works fine with the obstacle, of course. OK? Good. So now you have a little bit more intuition to work with on these cost to go functions. A couple of important things happened there that I want to highlight. First of all, I really want you to think in terms of cost to go functions. They're really intuitive. The cost that I will obtain till the end of time, the optimal cost to go says if I'm acting optimally, this is the cost I'm going to incur. And the optimal cost to go gives me the optimal policy, OK. And just to calibrate you here, J star is called the optimal cost to go, but it's also sometimes called a value function, optimal value function.
A bunch of different communities talk about the same things with different words. These are the optimists. These are the pessimists. OK, the other thing that we saw is that for many problems, the limit as N goes to negative infinity-- I know that's a silly thing to say, I guess, but-- that a lot of times this thing actually goes to some well posed J star. It doesn't have to. Sometimes it blows up. Another way to think of this is that I said J star of S at n depends on capital N. It's the limit of this as capital N goes to infinity, if you think of it in the forward way. So in order for this thing to converge to some nice solution, this sum had better converge in the limit. For my choice of g for the minimum time problem, and for the quadratic regulator, both of these had the property that when you get to the goal, you stop incurring cost. So that integral-- as long as you can get to the goal, that integral-- the sum, sorry, is going to converge. If I had chosen that I give a cost of 1 when I'm at the goal and 2 when I'm anywhere else, then it wouldn't have converged. The cost to go would have gone to that same shape, but then that shape would have just kept increasing every time I go farther back in time. That whole function would just move up by one every increment of time, OK. But for a lot of problems, we do have this nice limiting behavior, OK, and that gives rise to the infinite horizon problems. So so far, I had talked about finite horizon, but a lot of the time, a lot of problems we write as infinite horizon. OK. When your problems are infinite horizon, J and J star don't depend on time anymore. And the optimal policy doesn't depend on time. So J star and pi, all these things are just functions of S, not of time, OK. And for these to be well posed, that sum had better converge. Now just to say it, but not to dwell on it, a lot of people do write other formulations that handle that. For instance, a lot of people do discounting. A lot of people like to solve problems of this form, OK, just to make it more likely that that sum's going to converge, for instance. And there's some problems which really do have discounting. Yeah. AUDIENCE: So that's less than 1 [INAUDIBLE]. PROFESSOR: Yes, thank you. Good. Good call. Thank you. OK, so you know the basic dynamic programming equations, no? Let me just say one word about implementation, if you want to go home and make your own '80s graphics game in Matlab. For discrete states, discrete actions, J, even J star of S at some N, it's a vector. Typically I think of it as sort of a dimension of S by one vector. And dimension isn't the right word. This is-- so the cardinality of S, let's say, something like that, a big S, the number of possible states by one vector. And it's very practical to write that recursion for all states as a vector equation. So if I think of J star as being a vector, I have to do a min over a of g of S, a. But g is another vector which depends on a. It's an S by 1 plus-- I can write it as a vector equation where this is a vector. This is a matrix. This is the transition matrix. And this is my vector again. And the transition matrix is just 1 in entry i, j if f of S i, a equals S j, and 0 otherwise. OK. That's just standard graph notation. So it's trivial to code these things in Matlab with just a bunch of matrix manipulations. OK, we understand everything about the grid world. I think it is a very helpful example, actually. Now let's think about the more continuous problems that I care about.
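A sketch of that vectorized backup. The names are placeholders: J is an (nS,) vector, g[a] is the (nS,) cost vector for each action, and T[a] is the (nS, nS) transition matrix with T[a][i, j] = 1 if f(s_i, a) = s_j and 0 otherwise:

```python
import numpy as np

# Infinite-horizon value iteration as pure matrix manipulations.
def value_iteration(T, g, tol=1e-8):
    nA, nS, _ = T.shape
    J = np.zeros(nS)
    while True:
        # One backup for every state at once: Q[a] = g[a] + T[a] @ J.
        Q = g + np.einsum('aij,j->ai', T, J)
        Jnew = Q.min(axis=0)
        if np.max(np.abs(Jnew - J)) < tol:
            return Jnew, Q.argmin(axis=0)   # converged cost to go and policy
        J = Jnew
```

Nothing here depends on the dynamics being simple; only the size of T does, which is the curse of dimensionality point made later.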
What if, instead of having the dynamics of this moving left, right, whatever, my dynamics, my transitions came from my equations of motion from one of the systems we care about? So let's think about the double integrator. q double dot equals u. Let's do the min time problem. I can use the same minimum time cost function I did before, OK. [TYPING] OK. This one, I didn't leave the pause in there, but look what happens. Oops, sorry. Meant to do that. Make it bigger. I pop the same-- let me turn the lights down. I pop that same exact set of equations. I run the same value iteration algorithm, dynamic programming algorithm. I should have said, people tend to call it value iteration when you take the infinite horizon version and dynamic programming when you do the finite horizon, but they're exactly the same thing, OK. So I might accidentally say value iteration because I'm used to it. OK, so I took my double integrator dynamics. I discretized my space. I made my cost function exactly the same as the minimum time cost function I used in the grid world, where there's a 0 cost of being at the goal and 1 everywhere else. And look what pops out. This is the cost to go function, as a function of state, and that's the policy. Remind you of anything? Right? Now I've got a big disclaimer that goes at the end of the lecture, but for now, let's just say that that's the perfect solution. The discretization is going to make this thing a little bit wrong. I'm going to say a few things about that at the end of the class. But the cool thing is that I pop my cost function in. I pop my continuous dynamical system. It's discretized. [CLICK] Run dynamic programming. As I back it up, it converges to some-- as N goes back, it does converge for this. It was the minimum time problem. And I get my optimal policy out, which is a bang bang policy, which is decelerate when you're at the bottom, accelerate when you're at the top. And that switching surface shows up in green just because it's interpolated. But you know it-- that's what we know about bang bang controllers, OK. Yeah. AUDIENCE: Did you have to encode that your only three actions were full forward, full backward, and-- PROFESSOR: The minimum over a is always going to choose the rails. In fact, in this implementation, it could have chosen in-between things, and that's what it did right on the switching surface because of some-- it chose 0. AUDIENCE: OK, so you left it general just as-- PROFESSOR: I left it general. Yeah. So always, when I discretize the state and I discretize the actions of these continuous problems, I'm left with a finite set of states, a finite set of actions. So it can't pick unbounded. It's fundamentally bounded in actions that it can choose, and it chose those bounds. AUDIENCE: [INAUDIBLE] PROFESSOR: Say it again. AUDIENCE: How do you define the transition model? PROFESSOR: Good. I'm going to say some words about that in a minute, too, OK. Yeah. But not yet. Just give me a minute. OK, let's say we want to solve the LQR problem, the quadratic regulator cost for this. [TYPING] So I animated the brick for you just to keep it exciting. OK, so what pops out? This beautiful quadratic cost to go function, OK. Now the policy is a little bit off. It's supposed to be a linear function. It almost is, but there's some saturation because of my actuator limits, OK. But within the resolution of sort of my discrete actions, that's what we expected, OK. So I can do this for the brick.
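A sketch of what "popping the continuous system in" can look like for the double integrator, under assumptions: a uniform grid over (q, q-dot), three torque levels, one Euler step per transition, and the crudest nearest-neighbor snap to the grid (the barycentric interpolation discussed a bit later is the better answer to the transition-model question just asked):

```python
import numpy as np

q = np.linspace(-3, 3, 61)          # position grid (assumed ranges)
qd = np.linspace(-4, 4, 61)         # velocity grid
actions = np.array([-1.0, 0.0, 1.0])
dt = 0.05

Q, QD = np.meshgrid(q, qd, indexing='ij')
f = np.empty((q.size * qd.size, actions.size), dtype=int)
for ai, u in enumerate(actions):
    qn = (Q + dt * QD).ravel()      # Euler step of q_ddot = u
    qdn = (QD + dt * u).ravel()
    iq = np.clip(np.round((qn - q[0]) / (q[1] - q[0])).astype(int), 0, q.size - 1)
    iqd = np.clip(np.round((qdn - qd[0]) / (qd[1] - qd[0])).astype(int), 0, qd.size - 1)
    f[:, ai] = iq * qd.size + iqd   # flat index of the snapped next state
# f now plays the role of s' = f(s, a) for the tabular DP sketched earlier.
```

With the same minimum-time cost on this table, the min over actions does indeed pick the rails, which is the bang bang structure in the demo.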
I'm going to tell you the caveats again in a minute, and I'm going to tell you the interpolation in a minute. But first I just want to help you realize that this-- we can pop these equations in if we're willing to discretize the state and action space. Even for pretty hard problems, I can just [CLICK] let it go. It's pretty fast, too, actually. OK. So now why not-- analytically, we had a hard time doing the pendulum, those nonlinear equations, OK. But if we tile the space, turn it into a graph, then I can run the exact same algorithm on the simple pendulum, OK. So let's do that. [TYPING] What am I going to get here? So minimum time for the simple pendulum. I've got my pause back in here. It's hard to see, but there's actually-- it's 1 everywhere except for 0 at the goal, which is the-- now I'm in phase space, so that's pi, 0. That's my unstable fixed point, OK. I've got a blue 0 there, 1 everywhere else. At the end of time, my action is just do nothing, because there's no benefit to doing anything. And as I back up in time, this will give you a key to what-- you can see a little bit about my interpolation as I do this. OK, then it starts giving me incentive to move. Again, when you can't get to the goal, that's actually just noise there. But this thing quickly figures out-- oops. Let me do the same thing and let it not plot every time. Figures out a cost to go function, the optimal cost to go function, and an optimal policy. Now it looks a little noisy there. Again, we're going to talk about the sensitivity to discretization. But this is very much a bang bang policy, with the blue area being do one action, the red area doing the other action. The switching surface is actually pretty complicated. It's some complicated function of state, but it gets this beautifully smooth cost to go function, OK. Now let's take a second and look at the phase plots here. Let me actually do it in order here. So this is the phase plot of the damped passive pendulum, OK, the original one we thought about in class. I just drew a few lines to help you. So if I start at the straight down position with a little bit of velocity, I'd slow down and stop. If I start near an unstable fixed point with near 0 velocity, then I actually fall down and go like this and end up standing still near the closest stable fixed point, OK. Now if I do my feedback linearization invert gravity controller to stabilize the fixed point, then what's the phase plot going to look like? It's going to look just like this, but it's going to be moved over there, right? So let's make sure that's true. Ah, what did I call it? Invert gravity. OK, yeah. So I see the exact same things. What used to go to my stable fixed points is now going over to the closest previously unstable one. This works great. The only objection to it is it required an enormous amount of torque to just pretend like you've inverted gravity. OK, so what's the minimum time solution going to look like? AUDIENCE: It's going to depend on what your torque constraint is. PROFESSOR: It's going to depend on what my torque constraint is, yeah. So for whatever torque constraint I have now, you could even figure out the units here. My torque constraint was chosen to be something like half of the stall torque required to hold out like this. Then let's see what happens. This is the minimum time solution, which is exactly right. If I had more torque to give, it could have gotten out there quicker. And this added enough that, after going around once, it could get up to the top, OK. Let me see.
Why is it not drawing anymore? I've got this [INAUDIBLE]. Oop. So that was-- that's a random initial condition. So from the one I had shown, it took one pump. That one took two pumps, and that gets it to the top. OK, but now, remember, my original challenge was to not just get to the top in minimum time. This is minimum time with bounded torque, so that's a little bit more satisfying. I don't want to pump in more torque than I could possibly implement. But what if I want to be sensitive about the torque? I want to get to the top, but I don't want to use a bunch of energy. OK, now the quadratic cost function makes a lot of sense, OK. So I'm going to put a quadratic cost on being away from the top and a big quadratic cost on using actions. So that'll give me some sense of minimally stabilizing the top, OK. What's that one going to look like? Would you expect it to look like-- let's get the phase plot going. AUDIENCE: Basically in phase space, it will take more turns to get up there to the top [INAUDIBLE]. PROFESSOR: OK. What about if it's near the top? Is it going to look like a damped pendulum at the top? What's it going to do? AUDIENCE: Well, if it's headed the wrong way near the top, it will probably swing all the way around. PROFESSOR: Good. Right. AUDIENCE: But if you put too much cost on distance, it might end up quickest on the [INAUDIBLE]. PROFESSOR: Perfect, OK. So let's switch this to be my quadratic regulator cost. Right, so that's what you said. Took more pumps to get up. And if you plot the phase plot from a couple of these different places-- oh. Crap, sorry. I thought I picked initial conditions that were far enough to show you that. This is what you said, OK. This one happens to be close enough that it got to the top. This one took a lot of pumps and got out there. But the point I was trying to illustrate-- I guess I need to either penalize torque a little bit more or-- I never change things by just a factor of 2. It's too slow. Oh, I made it not move. [LAUGHTER] Sorry. But it showed my point, OK. So yeah, it has no incentive to move from the bottom. It says, I'm going to incur more cost by moving than by getting close to the goal. Not getting close. OK, but up at the top, it is able to-- given it was near the top with some velocity, with a little effort, it's worth going around and stabilizing itself at the top. Yeah? OK. Good. AUDIENCE: If you iterate it far enough, it should get to the top, but-- PROFESSOR: No. Let's see. So-- AUDIENCE: It's because of the damping. PROFESSOR: It's because of the damping. AUDIENCE: Oh, OK. PROFESSOR: Yeah, good. Because that is actually the steady state solution I'm plotting. AUDIENCE: Oh. PROFESSOR: Mm-hmm. OK. [RUSTLING] So if you care about simple pendula-- sorry-- and you want optimal solutions, this looks like a pretty satisfying way to do it. You can come up with your arbitrary cost functions and see what you get. It runs in no time on my laptop, and you get things that look like optimal policies, nice phase plots, you name it, OK. What's the catch? First catch is, how do I do the interpolation? How do I make that transition matrix? So on my pendulum example, I discretized some states. I have a handful-- I've already discretized actions, and I've got some other states over here that I've already discretized. I'd have to be pretty remarkably lucky to have it that the random actions that I chose, integrated for some small amount of time, actually landed right on top of one of my other states.
In fact, they tend to land in between the states, OK, so we do a little bit of interpolation between them. And one of the reasons I showed you that transition matrix form is that it's actually quite OK, quite standard, to say that my transition matrix, my T from S to S prime as a function of a is some-- let me just handwave it here-- but is some interpolated set of weights for the closest S's. [LAUGHS] OK. Zach just showed me a sign that said, the pendulum works. Having Matlab licensing issues. So we might-- I was hoping to run these on the real pendulum. We'll do it on Tuesday if not today. [LAUGHS] I don't know why he didn't just say that, but there's a big bright green sign. So let me write it like this for the moment, OK. So if I end up being near some states in two dimensions, I tend to interpolate between the three closest states, OK. So I'll call those Si, Sj, Sk, and I get some interpolants, W1, W2, and W3. They'd better sum to one, OK. And there's actually lots of ways to do that. So actually, in previous times I've given the class, I went into some detail about that. I think that you could-- if you care about it, there's a lot of ways to do that. You could use the Matlab interp2 function. The one we use is called barycentric interpolation. In the RL community, that was popularized by Munos and Moore. That'll be cited in the notes. And it uses-- if you're operating in an N-dimensional space, it uses N plus 1 interpolants. So in a two-dimensional space, it uses the three closest points. If you're in a four-dimensional space, it uses the five closest points, OK. And there's a very clean, simple algorithm to find the weights of that interpolant, OK. The caveat is that everything spreads out. If I simulate my dynamics, my graph dynamics, what it's roughly saying is that if I started from this state, I'm going to be a little bit in that state, a little bit in that-- a little bit in state 48, a little bit in state 52. And then my transition's out of there, so I get this diffusion across my graph of where my state is, if that makes sense. Yeah? And that's why you get some of the smoothing effects that you saw in the plots, OK. There's a bigger problem with that. The smoothing effects a lot of times don't look too dangerous, but they can do bad things to your solution if you're not careful, OK. So the big caveat is the solution you get is optimal only for the discrete system. We hope that it's approximately optimal for continuous, but compared to the finite element analysis world or the computational fluid dynamics world, or other people that solve these kinds of problems, we have relatively less strict understanding of when-- of how bad this approximation can be based on the discretization. There might actually be people out there that know it. I don't know how to tell you how bad it's going to get with the appro-- with the discretization. But I will ask you on your problem set to plot the bang bang solution of the double pendulum-- or, sorry, of the double integrator, and plot the analytical solution on top of it. And you'll see that if you're not careful, it's not just a little bit wrong. It can be systematically wrong. The switching surface turns out to be in the wrong place. And we'll ask you to think a little bit about why that is, OK. That's really the only caveat if you care about low dimensional problems. The more cited one, though, of course, is that there's this curse of dimensionality.
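A sketch of barycentric weights on a regular 2D grid, in the spirit of the Munos and Moore scheme: the query point's cell is split into two triangles, and the point is written as a convex combination of the three (n + 1) corners of the triangle containing it. The function name and grid conventions are mine, not the lecture's:

```python
import numpy as np

def barycentric_weights_2d(x, y, xgrid, ygrid):
    # Find the grid cell containing (x, y).
    i = np.clip(np.searchsorted(xgrid, x) - 1, 0, len(xgrid) - 2)
    j = np.clip(np.searchsorted(ygrid, y) - 1, 0, len(ygrid) - 2)
    u = (x - xgrid[i]) / (xgrid[i + 1] - xgrid[i])   # local coords in [0, 1]
    v = (y - ygrid[j]) / (ygrid[j + 1] - ygrid[j])
    if u > v:   # lower-right triangle of the cell
        verts = [(i, j), (i + 1, j), (i + 1, j + 1)]
        w = [1 - u, u - v, v]
    else:       # upper-left triangle
        verts = [(i, j), (i, j + 1), (i + 1, j + 1)]
        w = [1 - v, v - u, u]
    return verts, w   # three grid nodes; weights are nonnegative, sum to one
```

Used as the transition model, each (state, action) pair transitions to those three nodes with probabilities w, which is exactly the diffusion-across-the-graph effect described above.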
The only reason that everybody doesn't use this stuff is because if I had a 10 degree of freedom robot and I had to break up that 10-dimensional space into discrete points, discrete buckets, and made a graph, I would need a bigger computer. Not just a little bit bigger, an exponentially bigger computer, OK. So you have to be able to discretize your space, and discretizing the space is exponentially expensive in the dimension of the state, OK. But so people actually-- historically, value methods were very popular in the '80s, say. And there's a lot of work that we're going to talk about that continues to be popular, about using approximations, where you don't do a strict discretization, but you try to approximate these cost functions, these dynamic programming algorithms, with function approximation. But because of this sort of curse of dimensionality, a lot of people switched gears to a different class of optimization algorithms based more on the Pontryagin principle and more on gradient methods. We're going to talk about those, too. But I think we have to remember that since the 1980s, our computers actually got a lot better, OK. Sounds silly, but so in the '80s, they could tile two-dimensional spaces, and three-dimensional hurt. Now we could probably do four, five, six-dimensional spaces, OK. We actually did for-- we made that airplane land on a perch by just tiling the state space and doing brute force computation on that, OK. So you should look around. If there's some hard control problems that are four-dimensional or less that you consider to be unsolved, you could probably just hand them to dynamic programming and get a very nice solution, OK. And say, hey, you couldn't do it 10 years ago, but I can do it today on my laptop. Awesome. OK. So unless Zach appears here, there's only one last thing I want to say, and that is I want to observe quickly-- we talked about the fact that optimal policies are not unique. But there's more things you can learn by staring at these guys a little bit. Let's put my R down to something more manageable. Go, go, go. OK. Can you see it in this? It's a little bit hard to see it. I think you can see it if I turn the lights down. This is the quadratic regulator again. Now this isn't quite the quadratic regulator from the double integrator. This is now a quadratic cost function on a nonlinear dynamical system, OK. In this case, the dynamics are smooth. They're non-linear, but they're smooth. There's nothing that changes abruptly in the derivatives. And the cost function is smooth, but you can find that the optimal policy can actually still be discontinuous, OK. So costs-- so why is it discontinuous? In this case, because if I'm here and I'm going this way, I want to push up, but at some point, I have to change my mind and go the opposite way to pump up energy and get to the top. So this pump up strategy is inherently discontinuous, OK. So this is the Gordian knot of optimal control: as soon as things stop being linear, computing optimal cost to go functions can get arbitrarily hard, OK. And that's why computation's so great, because it does that stuff for me. But know that it doesn't take much to make it so the cost to go function gets a lot more subtle. Mm-hmm. Good. So the class will proceed taking these methods as far as we can, breaking them, and then showing you approximation methods that work in higher dimensional spaces. And when we give up on optimality altogether, we'll do motion planning, and we're going to get to more and more interesting robots.
But this is really a key idea. So I hope that the intuition came through and-- through your problem set. And I can share some of this code and everything. I hope you play with it, and think about it, and change cost functions, and see what happens. OK, see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_12_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: Today we're going to do sort of the second part of our discussion on walking. I want to start with what sounds like a little bit of a remedial question in some ways, in numerical analysis of these walking gaits. Because it forces us to be more careful about talking about the dynamics through collisions and on Poincare maps. OK. So I actually want to start the discussion by just asking-- given we've defined a walking robot with some stance dynamics and some sort of collision dynamics, like we did for the rimless wheel, the compass gait, and the kneed compass gait-- quickly, I just want to ask the simple question. Can you find the stable limit cycles, or even any periodic gaits, of the system? OK. So the first question, I want to find equilibria, fixed points of the Poincare map. OK. So remember we talked about Poincare analysis of limit cycle behaviors last time. And we said that this continuous time, stable oscillation could be nicely characterized as a discrete time, fixed point analysis. If we're able to, for instance, for the van der Pol oscillator-- call the state x there-- if we're able to design some surface of section, and just look at the state of the system every time it goes through that surface of section, then I can build a Poincare map. On the Poincare map, I've got a state x, and the map P is just some potentially nonlinear mapping from the nth crossing of the surface of section to the n plus 1th crossing. So given I have some sort of bisecting surface here, what I'd like to do is now find the fixed points of that solution, which means xp star is just P of xp star. Simple enough. And that should tell me where I've got a fixed point, and, therefore, where I have a limit cycle. It doesn't actually imply necessarily that that limit cycle is stable. If it was stable, then you could imagine finding the fixed point, for instance, of the van der Pol oscillator just by simulating the van der Pol oscillator for enough time. It'll eventually go and do the work for you and find its way to the stable fixed point if you just run it long enough. But if it's unstable, that's not as easy to do. And for systems like the compass gait, it turns out the basin of attraction is sort of small enough that you could spend a long time trying to find initial conditions that happen to simulate their way into the stable fixed point. OK. So we want to do better and get some better tools. Given what we've talked about in other parts of the class, if you want to find the roots of this-- the fixed points of the system, how would you do it? I just slipped a little bit and gave you half the answer there. How would you do it? I've got some big, complicated function P. I want to find the fixed points. Close. So I want to find the roots of some function of xp. So it's equivalent to finding the roots-- the roots of P of xp star minus xp star. So if I can evaluate P through simulation or whatever, then I can evaluate this. And I could just hand it to MATLAB or something and say, find me the roots of this nonlinear function. OK. And that's easy enough.
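A sketch of handing exactly that to a root finder, here scipy's fsolve as a stand-in for the MATLAB call. poincare_map is assumed to be a function that simulates the stance dynamics from one post-collision state to the next; it is a placeholder for whatever simulator you have, not code from the lecture:

```python
import numpy as np
from scipy.optimize import fsolve

# Find a (possibly unstable) periodic gait as a root of P(x) - x.
def fixed_point(poincare_map, x_guess):
    residual = lambda x: poincare_map(x) - x
    x_star, info, ok, msg = fsolve(residual, x_guess, full_output=True)
    if ok != 1:
        raise RuntimeError(msg)
    return x_star
```

Without gradients, the solver falls back on numerical differencing, which is exactly why the analytic gradient discussed next is worth the trouble.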
We talked about the Newton method for finding roots quickly, and we talked about the constraint solvers that SNOPT and the like use. OK. So in fact, we could just hand this to SNOPT as is, and ask it to do its thing. And it would do numerical derivatives and would find its way to some zero of that function. We're going to be in a lot better shape, though, if we can compute the gradients of that analytically and hand them to SNOPT. It would take a lot fewer trials. And I want to sort of do that exercise quickly, computing the gradients of the Poincare map. Because it's going to be broadly useful in control. And because it forces us to really think about how we define that Poincare map. OK. So what we'd like to do, we just say it's easier if we have partial p partial x available for the optimization. OK. Just think ahead for a second here. If I just asked you on the street-- I've got a Poincare map, how would I compute the gradients? What would you think? What would you start trying to do? Yeah. So where did we analyze perturbations before? What kind of tools go with that idea? Yeah. So again around any given point which defines the trajectory, we could do backprop through time if we cared about a cost function. But remember the trick behind backprop through time, computing those gradients, the reason that was so efficient was because we were trying to find the gradient with respect to a scalar output. So backprop through time did a forward pass, and then connected it to that scalar cost function output. And then backed up a minimal state vector. The forward version we talked about was just computing the gradients marching forward. And I called that real time recurrent learning, RTRL. And that's actually sort of 80% of what you need to compute the gradient here. Is there something I did wrong? No. OK. So RTRL, which we've talked about before and I'll say again real quickly here, is a lot of the answer. But there's a subtlety that I want you to see. OK. So let's be a little careful about how we go from xp of n over to xp of n plus 1. That mapping is sort of a complicated-- there's a transition from this discrete time system to the continuous time dynamics, a simulation of those continuous time dynamics. Potentially, an impact map at the end. We talked about the rimless wheel doing the Poincare map right at the impact. So let's do that carefully. So let me define xp at n as being equivalent to, in continuous time, x, the original vector-- I used a p to denote the discrete time, Poincare thing-- at the time of the nth collision. In the walking case, we always did our Poincare map at the same time as the collision. And more generally, you could just think of this as sort of a collision with a surface of section. OK. In the walking case, because the collision has some dynamics, we want to be explicit about whether it's before or after those dynamics, before or after the collision. So I'm going to define the discrete time state of our system to be the state of the system immediately after I process that collision dynamics. So I'm going to put a plus there. Minus would be just before the collision. Yeah. They're both at the same time, but there's an instantaneous dynamics. OK. So I have this. I know how to go from xp to a continuous time thing. And then I know how to go from this to march that forward. If I know when the next collision is, then I know that had better just be x of tc plus n plus the integral from tc plus n to tc minus n plus 1.
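Once the analytic gradient is available, the Newton iteration alluded to here is short. A sketch, with poincare_map and poincare_gradient as assumed callables (the gradient computed as derived below); a root of r(x) = P(x) - x has Jacobian dP/dx - I, so each step is one small linear solve:

```python
import numpy as np

def newton_fixed_point(poincare_map, poincare_gradient, x, iters=20, tol=1e-10):
    for _ in range(iters):
        r = poincare_map(x) - x
        if np.linalg.norm(r) < tol:
            return x                      # converged to a fixed point
        J = poincare_gradient(x) - np.eye(len(x))
        x = x - np.linalg.solve(J, r)     # Newton step on P(x) - x = 0
    raise RuntimeError("Newton did not converge")
```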
Let's just say my dynamics are x dot equals f of x. I'll forget about control for a second just to keep it simpler. Once I go from my discrete time thing to a continuous time thing, I just integrate my equations of motion forward until I collide with the surface of section again. And I want tc defined somehow. I want to somehow define that surface of section. And to define that surface of section and that collision time, I'm going to define it-- or more generally a collision-- as some function that equals zero. There's some manifold of states that I can describe by this function, which will define my surface of section. So what was it for the van der Pol oscillator? I designed my surface of section to be this thing going forward. So the surface of section is defined as some function which just happens to be zero here. So a perfectly good candidate would be like the distance, the signed distance let's say, between your current state and that manifold. You OK with that? I don't think I said that very well. But if I want to know-- define my surface of section, just define a function of state, which is non-zero everywhere and zero where the surface I care about is. Yeah. And that's going to allow me, as you'll see, to do things like taking the gradient of when I'm going to collide with that surface of section. So this could just be for instance the signed distance between, let's say, the foot of the robot and the ground. Yeah. So when the robot's foot hits the ground, that thing's going to be zero. When it's below the ground, it'll be some negative distance. When it's above the ground, it'll be some positive distance. OK now that's going to allow me then to define this. At the time of collision, this thing had better be zero. Good. So I'd want to define it in a way that doesn't include that. So I could put a-- the robot's foot colliding with the ground doesn't have that problem, if you think about it. But I do typically put something like a velocity term in it. Let me just write down what I would use for the rimless wheel. Actually, I've got it here. So I do phi of x is just sign-- the sign function-- of theta dot, times theta minus gamma minus alpha. So if you don't remember all those symbols, that's almost useless. But this is saying-- the sign of theta dot tells me if the robot's moving forward, then I'm going to look for the collision with the ground at this angle. And if the robot's moving backwards, I'm going to actually look at the collision with the ground at that angle. And that's how I end up getting the surface of section which doesn't go-- just goes in the top half and bottom half of the thing. So you can define whatever function, but you just want it to be not 0 here. OK we're getting closer to having this be well defined. In fact, this is a very common way to define a collision in any hybrid system. So systems that are continuous with discrete impacts are called hybrid systems. If you wanted to simulate a hybrid system in MATLAB with ODE, then you would very much do that by defining a function, which they call a collision function or a collision event function, which has got 0's here. And you can hand, as your options to the ODE solvers in MATLAB, event functions like this. And they will very carefully watch a crossing of that event function across 0, and pinpoint the time and state at which that thing goes across 0.
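A sketch of the same event machinery in scipy, the analog of those MATLAB ODE event options. f and phi are placeholders for your stance dynamics and collision function (e.g. the signed foot height), not the lecture's code:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate x_dot = f(x) until phi(x) crosses zero, pinpointing tc and x(tc-).
def integrate_to_collision(f, phi, x0, t_max=10.0):
    event = lambda t, x: phi(x)
    event.terminal = True            # stop the integration at the crossing
    event.direction = -1             # only trigger when phi is decreasing
    sol = solve_ivp(f, (0.0, t_max), x0, events=event,
                    rtol=1e-9, atol=1e-9)
    tc = sol.t_events[0][0]          # pinpointed collision time
    return tc, sol.y_events[0][0]    # and the pre-collision state x(tc-)
```

Composing this with the impact map F then gives one full evaluation of the Poincare map: xp[n+1] = F(x(tc-)).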
It turns out if we do care about finding this gradient of the Poincare map, then this is also going to be the enabling thing which allows us to compute that gradient. OK. So we have this x of tc minus. We've gone from just after a collision. We've integrated forward over the continuous dynamics. Yeah. So now we're at the time of the new collision, tc minus n plus 1. We're going to compute the new dynamics by having some collision function. In the rimless wheel, it was just the thing that took out some energy because of the inelastic collision with the ground. So the final piece of the puzzle here is x of p n plus 1, which is equivalent to x of tc plus n plus 1, which is some impact dynamics. I'll just call it big f of x of tc minus n plus 1. In 90% of cases that I would care about, it's only a function of x. I like to write it in the more general form in case you have things like moving obstacles, let's say. In the surface of section case for limit cycles, I think it'd be pretty unusual to have a direct dependence on time. But it doesn't make our derivation any more complicated if you wanted to worry about collisions with moving things or something like that; you could do that. OK. So this is a sufficient recipe now for simulating the Poincare map. If I want to evaluate the Poincare map in simulation-- someone says I'm at state xp at the nth crossing-- I'm going to turn that into my continuous time thing. I'm going to integrate forward my dynamics, just like we always do. But I'm going to stop at a particular time, tc minus, which is defined as the time when it causes phi to equal zero. And then at that time, in the case of the walking robots or any collisions, physical collisions, you might have to implement the discrete collision dynamics. And that takes you back to the next map. Good. So how do we take the gradients of that? It's almost trivial, almost trivial. We know how to take the gradients of this. We'll do it again, just in one line. But we know how to just integrate forward the gradient dynamics to compute the long term gradients of x. The only subtlety is that-- so when I'm computing the gradient, I'm changing xp a little bit. I'm trying to do a sensitivity analysis: if I change xp of n a little bit, how is it going to change xp of n plus 1? So the only subtlety is that when I change xp a little bit, it can change my collision time. So you have to make darn sure when you're taking the gradients, that you capture the changes in collision time that are due to your change in the initial conditions. Other than that it's almost exactly the RTRL code that we've done before. And that turns out to be not so bad, so I want to do it. All right. So we can just go forward with the chain rule here, so dxp of n plus 1 dxp of n. Well that's going to be, first of all, the gradient of this f, partial capital f partial x and then times the gradient of the inside. The inside is defined by x at a certain time. So this is where that comes in. OK. So first of all, let me say we're doing this linearization around some nominal trajectory. We have some xp0 that we're linearizing around. That implies we have an x of t trajectory that we're linearizing around, which I'm calling x0 of t. And it implies that there's some nominal time of impact-- time of collision that we're linearizing around, which I'm calling tc0. OK. So the obvious gradient here, partial f partial x, gets us back through the impact dynamics.
And then we have to figure out what the state of x is relative to the initial conditions, evaluated at the original impact time. And then the subtlety is that I can also increment or decrement the limits of the integral, which has the effect of adding this term again modified by that increment in the limit of the integral. So I get f of x times that increment. OK. So let's do this in a picture. So let's say I have some thing, some manifold defined by phi equals 0. And I have some nominal trajectory that I took to get there. I got to my switching surface. I'm calling this x0 of t. And this one here is x0 of tc0 plus n, to be entirely confusing. It just takes a lot to write. I could write that equally well as xp0 of n. And what I'm trying to figure out is what's the state going to be if I'm going to simulate my system from slightly different initial conditions when it hits the map. So using my multi-colored approach here, if I take some increment in x0 and I simulate my new dynamics forward, then maybe I get something pretty similar. But at my original end time-- if I evaluated this integral for the same amount of time-- there's no reason to expect I would get back to that switching surface. So this is my modified thing at tc0. But what I would figure out is where it's going to be the next time it hits the surface. So you can do that by figuring out what this difference is. This is partial x of t given the initial conditions at time tc0. That's this term here, corresponds to this. Yeah. But I'd better also add in the dynamics of this thing pushed forward by the amount of time necessary to get me back to the surface for that collision. Yeah. STUDENT: So you never take the derivative of the phi function? RUSS TEDRAKE: You do. I'll show you. So in order to get this, we're going to have to take a derivative of the phi function. I can't tell if people are bored by this, or intrigued by this, or don't care. OK. Does that make sense? Good. All right. So how do we figure out the change-- the increment in time, given the initial conditions? Well it's defined based on that phi equaling 0 is the thing that defines this collision surface. So it's going to be used in computing that increment in time. OK. So if phi equals 0, then it better also be the case that dphi dxp of n equals 0-- and let me just write this: phi of tc minus n plus 1, x of tc minus n plus 1. This is just the derivative of that thing with respect to changes in xp. I've defined that. I'm saying that even though I changed xp, phi had better still be 0. I'm integrating until I hit that surface. So this thing had better equal 0. And I can expand that derivative with partial phi partial x and partial phi partial t, evaluated at those same places. OK. I'll save myself from writing that last equation, but obviously I can now solve this for dtc dxp. Yeah. Stepping back just a second, I'm making an increment here. I'm defining that, even though I made that increment, I still better get to phi equals 0. I can look at the change in phi that could potentially occur. It's going to also depend on the change in x and the change in final time. And that allows me to solve for the final time that must have made that happen. That must have made it so I get back to the switching surface. Yeah. OK. That's really the only thing you need to know to make all of the tools we've used for open-loop optimal control, for instance, for the acrobot and the cart-pole, work for the walking systems.
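Gathering the board derivation into symbols-- a reconstruction from the spoken description, not a verbatim transcription of the board-- the gradient of the Poincare map comes out as:

```latex
% Sensitivity of the next Poincare state to the current one,
% through the continuous phase, the collision-time shift, and the impact map F:
\frac{\partial x_p[n+1]}{\partial x_p[n]}
  = \frac{\partial F}{\partial x}
    \left[\,\frac{\partial x(t_{c0})}{\partial x_p[n]}
          + f\!\left(x(t_{c0})\right)\frac{\partial t_c}{\partial x_p[n]}\right]
% where the change in collision time follows from requiring phi = 0
% at the (shifted) collision:
\qquad
\frac{\partial t_c}{\partial x_p[n]}
  = -\left(\frac{\partial \phi}{\partial t}
         + \frac{\partial \phi}{\partial x}\, f\right)^{-1}
     \frac{\partial \phi}{\partial x}\,
     \frac{\partial x(t_{c0})}{\partial x_p[n]}
```

Here the term ∂x(tc0)/∂xp[n] is the sensitivity integrated forward over the continuous phase, RTRL-style, and the parenthesized scalar is nonzero as long as the trajectory crosses the surface transversally. For a phi with no explicit time dependence, the ∂phi/∂t term drops out.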
The additional advantage of thinking about this now as on the Poincare map is in some ways we could do even easier control. OK. So what did I just solve for? I solved for partial p partial x. Absolutely. Please. Yeah. Good, I want questions. No, not necessarily. I mean this to say, so it happens-- it happens that oftentimes at the surface of section we want to compute a discrete collision dynamics, which is just some other function which I'm calling capital f. I don't mean that to be directly related to that. I can see that coming right after the integral that's confusing. Do you want me to call that something different? We can call it something different. Yeah. This is the equation-- well in the rimless wheel example, it actually says I'm going to change coordinate systems back to where my new leg is on the ground. So I'm actually going to do a discrete change in theta, and I'm going to take away some of the theta dot. Because I've lost some energy into the ground. And in general, when you have collisions in mechanical systems, if you model them as impulses, you're going to have some function like f. All right. This is equivalent, right, to being the partial p partial x. Everybody sees that? OK. Not a little bit of change of phi-- what we're changing, the thing we're changing, is xp. STUDENT: And x0 and all the noughts are basically the initial trajectory we would have taken without changing. RUSS TEDRAKE: Yes, because we're doing everything as an increment. We're doing an incremental analysis. So I want to say that the new tc is going to be tc0 plus this increment. I've defined the problem that way, again. So even if I make a small change in xp, I still want it to be that at my new collision point-- I'm not going to call it a collision point until phi equals 0. And if phi's going to equal 0 along everything, then it certainly equals 0 on an increment in xp. Yeah. So it's tempting to say-- so we've got gradient based calculations flying around here. We've been doing a lot with that. I think they're pretty powerful. It's tempting to say that if I have a discontinuity, then it breaks all my gradient calculations. It doesn't break them; you just have to be more careful about them. Yeah. OK. So we can take a gradient through the impact of a walking robot. No problem. You just have to use this. OK. This will be the same if you're doing a ping pong robot or something like that. Anything with collisions, a mobile robot that runs into museum visitors or something like this, would also have an impact map. OK. So that's all you need to use all those methods we did before. OK. So if I wanted to now optimize the trajectory, let's say, of my periodic system-- if I had my dynamics and now we're back. I did this just in the simple case of f of x, but if I had f of x, u and I wanted to optimize u in order to do something good on the Poincare map that minimizes the cost function, I can take the gradients. I can do my optimization. But this idea of actually changing to a discrete time system is a bit empowering too. You can always do it. It's not that you only do it for walking robots. Any continuous time system, instead of looking at it at every t, I could look at it at discrete intervals of time. I could take a ball that I throw across the room-- it has no impact whatsoever at the beginning-- and then I could look at the system at x of time 0, then x at t equals 1 second, x at t equals 2 seconds. And I could build a discrete time system for any of these continuous time systems.
It's particularly nice to do it on these walking robots, because -- well, let's do it. So what have I just done? I've computed partial P partial x, which gives me an approximation -- a Taylor expansion -- of my dynamics around some nominal point. If x nominal is a fixed point, or if I just change my coordinate system so that x bar of n is x of n minus x nominal of n -- and I could do the time-varying thing if I want, or I could just do it simply, if x bar 0 is a fixed point -- then I'm left with a model like this. Careful here, p's everywhere: x bar p of n plus 1 equals A x bar p of n. OK. I can immediately look at the stability of that model by taking the eigenvectors and eigenvalues. The stability conditions are the discrete time ones, so the eigenvalues had better have magnitude less than 1. It's not the same condition as continuous time, but it's perfectly easy to analyze the stability of the fixed points. So now, if I just cared about analyzing the passive walker: by computing partial P partial x and handing it to SNOPT or something to find the roots, I can find the fixed points; and it also happens that by computing partial P partial x I can evaluate the stability of those fixed points. Yeah. The rimless wheel's fixed points, if you remember -- there were two of them. One was a rolling fixed point, one was a standing-still fixed point, both of them locally stable. The compass gait, if you remember, can also have multiple fixed points. It has one fixed point for a nominal slope -- one fixed point that's walking -- and it actually has an unstable fixed point too. Now the cool thing -- so Ambarish Goswami, who's a friend and did a lot of the initial compass gait analysis: if you start inclining the ramp steeper and steeper, those fixed points change as a function of the dynamics, and something really interesting happens. It's not surprising that these are complicated dynamics, but at some critical angle you get a period-doubling bifurcation. You get an extra fixed point that corresponds not to a one-step fixed point, but to a two-step fixed point. So what does that mean? What does a two-step fixed point on the Poincare map correspond to physically? x of n plus 2 is going to equal x of n, but x of n plus 1 doesn't necessarily -- every other crossing is the same. So what does that physically correspond to in the walking? It's more of a limp, I'd say -- some asymmetric gait. These robots are doing a potentially asymmetric thing: every other footstep ends up landing in the same place, but not every footstep. OK. So there's actually a lot of interesting work just looking at these passive models and the stability of these passive gaits. I do want to make one note here. If you're looking at the eigenvalues of A, the way I've defined it, the trivial condition is that the magnitudes of the eigenvalues of A had better be less than 1. There are different notations for working on Poincare maps. My notation is to denote x of p where x is the same size as the original state vector, because that's more useful in simulation and in most of our computations.
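Continuing the sketch above, here is a hedged way to do that stability check numerically: estimate A = dP/dx by finite differences around a fixed point of the map and look at the eigenvalue magnitudes. One caveat: in this event-based construction the direction transverse to the section collapses, so one eigenvalue comes out near zero; the trivial eigenvalue of 1 along the orbit, discussed next, belongs to the full-state formulation on the board.

```python
import numpy as np

def map_jacobian(P, x_star, eps=1e-4):
    """Forward-difference estimate of A = dP/dx at x_star."""
    n = len(x_star)
    A = np.zeros((n, n))
    Px = P(x_star)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (P(x_star + dx) - Px) / eps
    return A

# settle onto the fixed point by iterating the map from the sketch above
x_star = np.array([0.0, 1.0])
for _ in range(50):
    x_star = poincare_map(x_star)

A = map_jacobian(poincare_map, x_star)
print(np.abs(np.linalg.eigvals(A)))  # all magnitudes < 1: stable fixed point
```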
But remember, the Poincare map effectively reduces the dimensionality by 1. And in the way I've written it here, if you just look at the A that comes out of a stable periodic oscillation, there's actually -- the way I've done it -- always a trivial eigenvalue of 1, which doesn't degrade the stability of the system. What does that eigenvalue correspond to? If I take my stable rimless wheel, I compute A, I take the eigenvalues, I see that the system is stable -- it goes to my nominal trajectory; this is the standard stability criterion for a discrete time system. But I notice that there's always one trivial eigenvalue that equals 1. Good. People see that? There's one direction I can push it in which it does not reject the disturbance, and that's the direction along the limit cycle. So there are two ways to handle this -- I'm sorry to make it complicated, but there are two ways. If I were to redefine my coordinate system on the map -- I wouldn't want to call it x anymore, but something that reduces the dimensionality of the map by 1 -- then I could use this condition alone. But in the way I've done it in the notes, and in life, I like to keep it the same dimension as x. And because I want limit cycle stability, it's absolutely the case that one of the eigenvalues -- there is a direction in which those disturbances are not rejected. That's what I want. Yeah. OK. So the way I describe it in the notes is: what you want for stability of the periodic gait is to ignore the first eigenvalue of 1, and the rest of them had better have magnitude less than 1. I want there to be only one direction that doesn't reject disturbances. You with me? OK. If there's some other direction that doesn't reject disturbances, then I start questioning whether it's stable -- I wouldn't call it stable in the sense of limit cycle stability. If I can push in some other direction in state space and the disturbance doesn't get rejected, then it's not going to return to that orbit. There's one special direction, along the orbit, in which I'm allowed to push, and that's what defines limit cycle stability. But there's only one direction. Even in a high dimensional space -- a 50 dimensional robot going around -- there's only one direction, along the trajectory going forward, in which I'm allowed to push and not reject the disturbance. Every other one had better converge for me to call it stable. OK. We did the case for the non-controlled system, but we're only a stone's throw away from doing the actuated version. I'm not going to write it all out again, but let me say what I mean carefully. If I want to do the controlled case, we get back to a system like this. We talked, in the policy gradient -- the policy search -- world, about how I could define u to be a tape of u's, or to come out of a linear feedback controller, or whatever sort of parameterization. But I like to think of it as some function which depends on a parameter vector alpha, and can generally depend on x and u. So there's a different way to think about control in the discrete time Poincare sense. One way to think about it is: every time I hit a surface of section -- my foot hits the ground, in the walking case -- why don't I change the parameter vector alpha? I'm going to make decisions only once per step, execute this policy for the duration of that cycle, and then the next time my foot hits the ground, I'll make a different decision about alpha. You can imagine.
So let's say that pi sub alpha -- and in my compass gait walker; we'll use this example in simulation in a second -- let's say I have a controller that runs during the limit cycle which tries to set my interleg angle to some desired value. Every step, the only decision I make is what that desired interleg angle should be for my compass gait. Over the course of the cycle, I'll run a PD controller that tries to make that interleg angle happen, but my discrete time decisions are just: what should the interleg angle be? Fumi has got a compass gait that works with open loop trajectories that just play out, where his major parameters are the frequency and phase of the periodic input. It's amazingly stable -- it's a beautiful example of open loop stability. So Fumi and I have been talking about changing the parameters of his open loop controller once per step, in order to regulate the behavior of the system. OK. There are lots of ways you could do it. But, generally, what that gives you -- if you compute these exact same gradients again: we've computed the gradient with respect to the initial state, but we could also compute a gradient with respect to alpha. And what that gives me is a model like this. I call it B because I'm thinking of alpha like a control decision, where alpha bar of n is the difference between my control decision at step n and the nominal alpha. OK. So let's say I find a stable limit cycle where I just command the interleg angle to be, I don't know, pi/4 every step. Well, if someone pushes my robot, computing these A and B matrices gives me a nice, simple way to make discrete decisions to stabilize it. If I'm away from my limit cycle, I can correct for those differences by just saying, OK, next time take a bigger step, and then take a smaller step. And how would I design that rule for changing alpha? How would you design it if you wanted a feedback law on alpha here? This is a discrete time linear system. I could just hand it to MATLAB and call discrete time LQR: give it A, B, Q, and R, and it'll give me back a gain K, and I set alpha bar of n to negative K times x bar of n. Are you with me on that? So summarizing the dynamics in discrete time this way can make it very natural and very simple to compute controllers which stabilize a walking cycle, where you make decisions once per step. I want to show you some work that Katie Byl did in my lab, where she did it in the nonlinear case for the compass gait. Instead of linearizing this, she went right to the full nonlinear Poincare map: x of n plus 1 is some Poincare map of x of n, parameterized by alpha of n. And the compass gait -- how many dimensions? It's four basic dimensions: two angles and two velocities. And it's just about the right size that you can try discretizing and doing value iteration. So Katie had some nice work showing that on the compass gait we can get a pretty good sense of the nonlinear optimal solution by just popping this discrete time control problem into value iteration, doing some work, and computing back an optimal feedback policy, which tells you where you should step every time in order to stabilize your gait. Even better, if you add one more dimension to your compass gait model, you can actually think about where it is on terrain, and make it walk over rough terrain.
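Backing up to the discrete-time LQR idea a moment ago, here is a minimal sketch (the numbers are made up, not from a real walker) of the once-per-step controller: take the A and B that come out of linearizing the Poincare map, hand them to discrete-time LQR, and apply alpha bar of n = -K x bar of n at each section crossing.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.3],          # dP/dx at the fixed point (illustrative)
              [0.0, 0.7]])
B = np.array([[0.0],               # dP/dalpha at the fixed point
              [0.5]])
Q = np.eye(2)                      # state cost on the section
R = np.array([[0.1]])              # cost on the step-to-step decision

S = solve_discrete_are(A, B, Q, R)             # discrete Riccati solution
K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)

xbar = np.array([0.2, -0.1])       # deviation from the limit cycle
for n in range(10):
    alpha_bar = -K @ xbar          # once-per-step decision (e.g. step angle)
    xbar = A @ xbar + B @ alpha_bar
print(xbar)                        # decays toward the nominal cycle
```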
So maybe I didn't spend enough time in these two lectures convincing you that nobody knows how to make robots walk on rough terrain, but -- oh shoot, it's going to take a second here -- you might know for yourself that you don't see a lot of walking robots on rough terrain yet. Big Dog is sort of the exception; Big Dog seems to be pretty good. We've got a Little Dog upstairs in our lab, which is also pretty good, I think. They're different: Little Dog can see the terrain and Big Dog can't, so we're supposed to be doing the long-term research for the Big Dog idea. Now -- we've got entry music here; it's a very dramatic simulation that follows. OK. Here's Katie's work on compass gaits on rough terrain. A little compass gait robot, making control decisions every footstep. It knows where the terrain is. Pop it into a big value iteration solver, and this thing can walk over almost anything. It applies a torque at the hip with the PD controller I described, and it also puts in an impulse at the foot, once per step, right after impact, so it can push itself forward a little bit. So this thing is pretty good. And it's just converting this complicated-looking problem into a value iteration problem. It doesn't think about the slope of the terrain -- it could. The reason it's tractable, actually, as you can tell by the footprints, is that the terrain wraps around, so we've got a limited state space in x. If you look real carefully, I think the leg gets a little shorter every time it swings through -- but there are no mass dynamics like that in the model. We built the robot the same way, with a little toe that pulls up. It's trivial in simulation. Yeah -- how do we do it in real life? Not as well as we'd like. Fumi has a design with a servo that pulls the toe up as fast as we can; we've talked about pneumatics, which would be a little faster, but we haven't run them yet. It's interesting -- she actually did the case of just the hip, and just the foot. They both work; they both stabilize the walking on moderate terrain. But together they're much more stable than either alone. The hip-only one always gets into these configurations like this and falls backwards -- you need a little bit of foot energy to get over. And the one that only pushes off the toe keeps putting its foot down in the wrong place. There's one thing Katie's simulations had problems with -- you guys in my lab know. There's a very visual failure; I should find the video, maybe. What was the problem I always talked about with value iteration? Even if you can tile the space, you've got to watch out for something. Yeah: your discretization. If there are hard discontinuities, your discretization can be a poor approximation of your continuous problem. So Katie simulated a bunch of terrain that had holes in it, like cliffs, and had these beautiful policies that would choose their steps across Karate Kid sort of terrain. But every once in a while, the mesh points would land in the wrong place, and the stupid thing would put its foot right in the middle of the hole, right down into the ether. This is sort of the textbook case of value iteration resolution problems. OK. So walking isn't really any harder than any other robot. There are mechanical challenges: you have to carry your actuators, and you're typically dealing with actuator saturations a lot, so it's definitely underactuated.
I don't mean to say it's easy, because I think the Acrobot problems are rich and hard. But it's not significantly harder than the Acrobot -- and these are, remember, the Acrobot dynamics. The only two differences are that you tend to think about Poincare maps to define stability, and you have to worry about these collision dynamics. I'm not showing you a 100-degree-of-freedom robot walking along here, but you could -- I mean, we're getting there. The DIRCOL-type methods that we're using, with the LTV linearization -- we think that's going to work pretty nicely for Little Dog, and that's 36-dimensional, walking on very rough terrain. So there's been a lot of work on the control of these systems outside the optimal control view of the world, and I'd be doing you a disservice if I didn't tell you a little bit of it. Let me make sure I said everything I wanted to say -- there was one other point I wanted to make. There's something you can do in discrete time that you can't really do in continuous time when it comes to stabilizing controllers. Anybody get it from just that? I said it very obtusely, but yeah -- good. So in discrete time, depending on B, I can potentially find an action which drives the error to zero in a single step. In continuous time, getting there arbitrarily fast means setting your gains arbitrarily high. In discrete time, that's not the case: I can set a finite-magnitude gain that gets me to zero in a single step. That's called deadbeat control. It's a very beautiful goal to have for a walking robot. If I get perturbed by the terrain, or by someone hitting me with a baseball bat or whatever -- you should see the videos the robotics people make of walking robots -- then a very beautiful goal is to say that before my next foot hits the ground, I'm going to cancel out all the error from my disturbance. Certainly you can do that here if you have no limits on alpha and if B is full row rank; then it's an algebraic equation to compute what alpha of n had better be. It can basically look like feedback linearization -- I just cancel that error out. Sometimes you can't do it, but I want you to know that's a beautiful goal for control. When we get to running, we'll see some running models where you can do that deadbeat control. The compass gait -- if you had the right parameterization, maybe you could do deadbeat control there too. John and I were debating this last night. I don't think the PD controller is going to be enough to give you deadbeat control, because it won't simultaneously take out your energy and get your foot in the right place. But it's possible -- you could actually do the analysis and answer that question. OK. So thinking about these things as discrete time control problems is pretty beautiful. More generally, though, we can do control through the swing phase, in this case -- or between the surfaces of section, in a general limit cycle case. If you want to do that, you could do a shooting method, you could do DIRCOL -- optimize some trajectory. You have to be careful because the duration of your trajectory can change during the optimization, but you can do that, and then optimize it. But just like we saw in the Acrobot and cart-pole case, people have come up with more -- I don't want to say problem-specific, but more sort of problem-specific solutions. So Ambarish Goswami did some nice work.
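Before getting to that, here is a tiny sketch of the deadbeat idea (illustrative numbers again): choose alpha bar so the one-step-ahead deviation A x bar + B alpha bar is zero, which is just an algebraic equation when B has full row rank and alpha is unconstrained.

```python
import numpy as np

A = np.array([[0.9, 0.3], [0.0, 0.7]])
B = np.array([[0.2, 0.0], [0.0, 0.5]])   # full row rank here

xbar = np.array([0.2, -0.1])             # disturbance at the section
alpha_bar, *_ = np.linalg.lstsq(B, -A @ xbar, rcond=None)
print(A @ xbar + B @ alpha_bar)          # ~zero: cancelled in one step
```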
So, back to Goswami. Can anybody guess what one of the dominant nonlinear control ideas we talked about for the Acrobot and cart-pole was? PFL -- which led to energy shaping. So Goswami did a nice controller based on energy shaping. He showed that you take your nominal compass gait, with its fragile little basin of attraction -- which he computed by sampling: pushing the system until it fell down -- and with an energy shaping controller, derived exactly the same way we derived the other one, you can regulate the energy of the system to put it back on the energetic orbit. Because this thing, with zero torque, is passive in the swing phase, I can just drive myself up to the place where I will passively fall down and hit the right place on my Poincare map. And that worked, locally. That's a good idea. There's another bunch of work that's become popular in our world. Eric Westervelt and Jessy Grizzle and these guys talk about hybrid zero dynamics. And more recently, our friend who's going to join the lab -- the advisor in Manchester -- these guys have been doing similar work using more optimal-control derivations and similar hybrid zero dynamics kinds of methods. Let me just cartoon the idea. It's sort of similar to the PFL idea, but these hybrid zero dynamics methods are one of the places where we actually have proofs of convergence. The way they do it, like in partial feedback linearization -- let me say it carefully here. They find some desired trajectory of the robot which is a periodic gait that they like: the actuated joints follow this trajectory, and the passive joints follow this trajectory. If you do a collocated PFL -- if you just regulate the actuated joints of your system to follow that trajectory -- then the passive one pretty much has to do the right thing. OK. So the results these guys have on hybrid zero dynamics -- they talk about the zero dynamics that remain once the output dynamics have been driven to zero. Yeah. When your PFL controller has done its job -- and it doesn't have to be a PFL; it could be a PD controller -- when you've driven all your actuated joints to the desired trajectory, if you parameterized that trajectory off, let's say, the ankle angle, then your whole big complicated bipedal robot -- five links, ten links, whatever -- looks like a rimless wheel rolling around the stance foot. And that's a beautiful idea. So maybe the bigger idea here is this: a lot of walking robots you can think of as having actuators at all the joints and just one passive joint. So a pretty good idea for a walking robot is to drive all of the actuated joints as a function of the passive joint. That gives you a dynamical system in one variable. It's a more complicated dynamical system than the rimless wheel -- it's one that you can design -- but the analysis reverts to basically the rimless wheel type analysis. OK. So this hybrid zero dynamics is roughly that idea, and then you can sort of do a rimless wheel analysis. I have had this debate with Eric Westervelt many times. Yeah, I think it is. That's a good idea -- just like PFL is great; we should use it for a little while until we figure out something better to do. I think they'd admit that too. Their stability guarantees hold once you've driven the output dynamics to zero -- you squash the error dynamics -- and you have to have gains high enough to squash those arbitrarily fast.
Now the bigger idea, I think, is that you can do softer feedback and get some of the same type of performance, but the theory does depend on squashing. So all this stuff I talked about is, I think, the right way to think about walking control. It's a small fraction of what people actually do in the walking robot world. By far the dominant approach is these Honda ASIMO type robots. Anybody see HRP-4C yesterday? What does it look like? It looks like a model. They actually videotaped a Japanese model walking in high heels, and they built their newest robot -- it's the AIST robot, by Kawada Industries -- with sort of a female android head on it. And it works. I mean, I just saw a few videos, but it looks pretty good: the shapes are more feminine, and the gait was a little bit looser. But most of the walking robots out there today, most of the successful ones, are all pretty similar to ASIMO. And ASIMO doesn't really do any of this stuff. ASIMO plays one trick; I can say it in a single line. They try to keep one foot flat on the ground and assume that foot is bolted to the ground. Then they pretend they're a fully actuated system and do their trajectory tracking. And while they're doing their trajectory tracking, they have to make darn sure that foot doesn't roll off the ground. If you watch the videos of ASIMO, you'll see it's always got one foot flat on the ground. Running is a small excursion: they just give up on stability for a little bit, until they get back to a place where they can catch themselves on the ground. OK. That's relatively unappealing compared to these methods, I think, because it uses a ton of energy -- I told you in the first lecture it uses 20 times as much energy as a human when it walks. People have used these passive-dynamic types of methods to make a robot with the same kind of energy economy as a human. Whereas ASIMO is just walking with its foot flat on the ground: it doesn't work on rough terrain, it's not walking as fast as we are, and all the energetics of heel strike you have to handle with active control, because you can't reduce your collision losses if you're going to land with a flat foot. So this is a better way to do things. Excellent. That's a quick preview into the world of walking. And really the key message here is that it only takes one or two more tools to turn it back into the Acrobot problem. OK. Midterm on Thursday. If you have questions about that, ask John, ask me. If I do disappear, I apologize, but it's for a good reason. Have a good spring break, and I'll see you soon. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_17_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN W. ROBERTS: We'll be talking about-- I heard last time I had bad handwriting, and I guess this isn't much improved yet, but I will try to be more deliberate, if not more skilled. Stochastic-- all right. As you may remember, last time we were talking about the different assumptions we've used in all the techniques we've applied so far. We assume that we have a model of the system; that the system is deterministic-- that's not really any better handwriting, but this is the one we talked about getting rid of last time, right? Stochastic systems, stochastic dynamics-- that's the assumption we removed. And that the state is known-- we've sort of already gotten rid of that one in some of our discussions. Today we're going to talk about what you do if you don't have a model. And this is actually very important in a lot of interesting systems. Of the systems we work on in the lab, some of them we try to model, but some of them are hard for us to model. So being able to deal without a model is a very useful thing. Hopefully by the end you'll all appreciate the tremendous power of model-free reinforcement learning. So the basic idea is, again, we have this policy parameterization alpha, which somehow defines our policy. In the problem sets that you recently did, it's open loop, so you just have one alpha for every time step. You can also imagine these being gains on a feedback policy-- entries of the K matrix, or PD gains-- any way you want to parameterize it. And you use these parameters-- now, this is the simplest interpretation; there are more complicated ways of looking at it, but I'm going to take the simplest view first. You send this into your system; you run your system with these parameters. Again, this is like what you did in the problem set: you have a fixed initial condition and a fixed cost function, you give it a policy, you run it, and you see how it does. And what you get is J-- the cost of running that policy. Now, previously we've said, OK, if you have a model of the system, there are a lot of things you can do. You can do backprop to get the exact gradient, hand it to something like SNOPT, or do gradient descent using it. Depending on the dimensionality, you can do value iteration. So there are a lot of options when you have a model. But if you don't-- if you don't know how the system works, if it's really just a black box where I put in a policy parameterization and get out a cost-- how do we achieve anything in that context? We don't have any information about how these things relate to each other. Well, we do have some information, in that we can execute the black box. We can test it: we can run our policy and see how well it does. So what would you say is the crudest thing you could do with a system like this, a black box? You give it an open loop tape, let's say, you run it, and it tells you the cost. What could we do? AUDIENCE: SNOPT could also-- well, not that we'll [INAUDIBLE] SNOPT.
But SNOPT could also-- or you have methods for estimating the gradient. JOHN W. ROBERTS: You can do finite differences, right? AUDIENCE: Yeah, do finite-- JOHN W. ROBERTS: So finite differences, exactly. The notation we're using here is simple; a lot of times people parameterize it differently. Let's say we have a deterministic cost-- no randomness in the system; we'll talk about random systems later-- so the cost J is a deterministic function of our parameter vector alpha. Say I have a 2D system, alpha 1 and alpha 2. We don't know what this function is, but suppose it's a simple convex function like this, where these are the contour lines, and what we want is to get to the middle-- the local min-- and we start here. Now, how could SNOPT get those gradients? One of the simplest things you can imagine doing is this. You measure here: you run the system and get J at this point. You run the system and get J at this point, and again at this point. You take these differences, divide by your displacement, and you get an estimate of the local gradient. If those displacements are small enough, and your evaluations are nice enough, you can get arbitrarily close to the true gradient there. And that tells you, OK, you want to move in this direction, right? Now, the problem is that you have to do n plus 1 evaluations, where n is the number of dimensions: you evaluate at alpha, at alpha plus [delta, 0, 0, ...], at alpha plus [0, delta, 0, ...], et cetera. Strictly, the perturbations just have to be linearly independent, but you might as well do it this way. So you do these finite differences and get an estimate of the gradient. You can hand it to SNOPT and let SNOPT try fancier things, or you can do gradient descent, where you compute the gradient and then do an update: alpha at n plus 1 equals alpha at n plus delta alpha, where delta alpha equals negative eta times dJ d alpha-- that's a vector. And eta is our learning rate: given the gradient, how far do we move? Setting that can be an issue. But you update your alpha like this, over and over again, and eventually you should descend to a local min. The thing is, doing n plus 1 evaluations every time is expensive. You could cut that down a bit if you reused some evaluations, but the point is that you have to do a lot of evaluations to get this local information. And if you move very far, you have to discard them and do all those evaluations again. So you do a lot of evaluations to get an accurate estimate of the gradient right here, and then you throw a lot of that away when you move, because the gradient could change. In that sense, doing all these evaluations is maybe wasteful: you're being more careful than you have to be.
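As a concrete sketch of that finite-difference approach (the quadratic J below is just a stand-in for running the real black-box system):

```python
import numpy as np

TARGET = np.array([1.0, -2.0])

def J(alpha):
    # stand-in cost: distance squared to a hidden optimum
    d = alpha - TARGET
    return float(d @ d)

def fd_gradient(J, alpha, delta=1e-5):
    """n + 1 evaluations: one at alpha, one per perturbed dimension."""
    g = np.zeros_like(alpha)
    J0 = J(alpha)
    for i in range(len(alpha)):
        e = np.zeros_like(alpha)
        e[i] = delta
        g[i] = (J(alpha + e) - J0) / delta
    return g

alpha = np.zeros(2)
eta = 0.1                                # learning rate
for n in range(100):
    alpha = alpha - eta * fd_gradient(J, alpha)
print(alpha, J(alpha))                   # converges to TARGET
```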
And then you're just going to lose that information once you move somewhere else and have to evaluate again. So there's another approach-- this one, you could say, is even more crude. In the evolutionary algorithms world, I think they call it just hill climbing-- I mean, all of these are hill climbing, or valley descending. Here, instead of that deterministic scheme, you have a point here and you just randomly perturb it: OK, what if I'm here? That's worse-- the cost is higher-- so you just throw it out, don't use it. Try again here: that's better, so now we keep this. And we just do this over and over, discarding bad ones, keeping good ones, until we get to the bottom. But notice that when an evaluation comes back worse, you're throwing it out and acting like it gives you no information. There is information in that-- in whether it gets worse, and by how much it gets worse or better. You paid for that information when you did the evaluation, and then you just cast out the ones that do worse. So that's the idea of stochastic gradient descent. Instead of doing a deterministic evaluation of the local gradient, we're going to randomize the system, get an estimate of the gradient stochastically, follow it, and get as much information out of each evaluation as possible. That's the important thing: in these systems, the running time is dominated by evaluating the cost of a policy, so you want to get as much as you can out of each one. Stochastic gradient descent is a powerful way of doing that when you have no model, and it's definitely more efficient than hill climbing. So the question is, what is the appropriate process for randomly sampling these points and actually improving the policy? I'm going to write down a common update-- the weight perturbation update. It also shows up in an identical form in REINFORCE, if you've seen that; we'll talk about those. The update is: delta alpha equals negative eta, times the quantity J of alpha plus z minus J of alpha, times z. Is my handwriting at all legible? [INTERPOSING VOICES] JOHN W. ROBERTS: Yeah? OK. Take my word for now that this makes sense-- we're changing the alphas. Here we have the same learning rate eta as in the deterministic gradient descent; here's where you evaluate; and this z is noise. When you perturb your policy, z is the vector by which you perturb alpha-- this is a z, this is a z, this is a z; those z's are the perturbations. A simple and very common choice is for z to be distributed as a multivariate Gaussian, where each element of z is iid, mean 0, with the same standard deviation. So you draw a sample z, you evaluate how well the perturbed policy does, you evaluate how well you do with your nominal policy, calculate the difference, and then you move along the direction of z.
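A hedged sketch of that update in code, using the same stand-in quadratic cost as before-- one Gaussian perturbation and one baseline evaluation per update:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1.0, -2.0])

def J(alpha):
    d = alpha - TARGET
    return float(d @ d)

alpha = np.zeros(2)
eta, sigma = 0.5, 0.1
for n in range(2000):
    z = rng.normal(0.0, sigma, size=alpha.shape)    # iid, mean 0
    alpha += -eta * (J(alpha + z) - J(alpha)) * z   # weight perturbation
print(alpha, J(alpha))                              # noisy descent to TARGET
```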
So I'll try to draw this in 1D and then 2D so it makes sense. Here in 1D, this axis is our one alpha, and this is J, our cost function. Say we're here. Our z in this case is just a scalar, mean 0, with a Gaussian distribution. You sample from it and evaluate the change: this is my J of alpha, and this right here is my J of alpha plus z. That change says, OK, the cost went up, and by some amount-- that's the difference-- and I'm going to move opposite the direction of z. z is just a scalar here, so it contributes the sign and the magnitude. So: I perturbed by z in this direction, the cost got bigger, that change is a positive number, so we move down by eta times that change, right? If it gets a lot worse, we move away farther; if it gets a bit worse, we move a bit. Does that make sense? And when you're measuring here, where it's shallow, you get a small change for the same z-- you draw your Gaussian around the point, get a small change, and you just move a bit. When I'm here, where it's really steep, the same perturbation gives a bigger change, and I move even farther. And if I do this a bunch of times, you can imagine I descend into the local min. Every time, you're drawing the z stochastically-- you're not doing the deterministic finite-difference thing. Any single update could make you worse or better, but stochastically you can intuitively see why it's going to descend. Does that make sense? AUDIENCE: This is heavily depending on the fact that the function is sort of [INAUDIBLE] direction? JOHN W. ROBERTS: It's sort of what? AUDIENCE: It's like the function that you're looking at-- if you increase, like in this case, alpha in one direction versus the other one, the changes are sort of similar in both ways. JOHN W. ROBERTS: That can affect the performance of the algorithm, but-- here, I can draw that. These are common pathological cases. Let's look at it in 2D. The ideal one would be-- we can draw a contour map again-- this one, where it's about the same in every direction, sort of isotropic. You're here, you perturb yourself randomly-- your Gaussian could put you anywhere around here-- and you measure somewhere. If you get better, you move in that direction, depending on what eta is, and you get an update. You're asking, what happens if we're in trouble, and we have something that looks like this instead? Well, that can hurt the convergence-- it can be slower-- but it still works. Let's say I'm here, where it's really steep in this direction and really shallow in this one. When I perturb, the perturbation in the shallow direction has a small effect, but in the steep direction it's very sensitive.
And so I move much more along the steep direction-- I descend the steep part first, and then slowly converge along the shallow part. That's called, I think, the banana problem: you have this elongated bowl, and you go really quickly down here, and then really slowly. Now, if everything is shallow, that's not a problem-- you can make your learning rate bigger, sample farther out, and it just doesn't matter. But asymmetry like this is an issue. There are ways of dealing with it if you have an idea of how asymmetric it is; we can talk about that later. But it will still descend. And in fact, you can show-- I'm about to show-- that this update, in expectation, moves in the direction of the true gradient. Randomly it can bounce all around, but in expectation it moves the right way. And with deterministic evaluations-- we're going to do a linear analysis at first-- you can actually show that it always moves within 90 degrees of the true gradient. So you'll never actually get worse: you can move perpendicular and not improve, but you'll never move with the wrong sign relative to the gradient. All right, so let's look at why that is in some detail. Our delta alpha is the same as up there, so I won't waste time rewriting it. Let's take a first-order Taylor expansion of the cost function-- look at it locally, keeping the linear term, linearizing around alpha: J of alpha plus z is approximately equal to, for small z, J of alpha plus dJ d alpha transpose z. That's the first-order Taylor expansion. Now, if we plug this in for J of alpha plus z in that update, the J of alpha terms cancel, and we get delta alpha approximately equal to negative eta times, in parentheses, dJ d alpha transpose z, times z. What does this look like? The thing in parentheses is like a dot product between the gradient with respect to alpha and our noise vector. So this is equal to negative eta times the sum, for i from 1 to N, of dJ d alpha i times zi, all times the vector z. If we multiply that out-- the scalar coefficient times each component of z individually-- we get a vector whose first component is the sum over i of dJ d alpha i zi z1, and so on down to the last component, the sum over i of dJ d alpha i zi zN, all times negative eta. Now let's take the expectation. Each zi is iid-- do you know iid? They're all drawn from the exact same distribution, mean 0, Gaussian, standard deviation sigma, and they're all independent. So we take the expectation of delta alpha, pulling the eta out front because expectation is linear. dJ d alpha i is not a random variable, so it comes out too, and we're left with terms like the expectation of zi z1. The sum goes through all the i's, but the first component only ever has that z1 in it. And zi and z1 are independent and mean 0.
So you can split those expectations up, and every term is 0 except the one where i equals 1-- and in the second component, the one where i equals 2, et cetera. All the cross terms go to 0. So going through the sum, the only surviving terms are the expectation of z1 squared, the expectation of z2 squared, and so on. Now-- maybe you remember-- the variance equals the expected value of x squared minus the square of the expected value of x. We're mean 0, so that second piece is 0; our variance is sigma squared, so the expected value of each zi squared is sigma squared. That means each of those expectations is sigma squared. They all have the same sigma, so we can pull it out, and you end up with negative eta sigma squared times the vector dJ d alpha 1, dJ d alpha 2, et cetera-- which is just dJ d alpha. So the expectation of this update, in this linear sense, is negative eta sigma squared times the gradient. These are just scalars-- they change the magnitude-- but it's in the direction of the negative gradient, and eta is a parameter we control. That makes sense? AUDIENCE: Is that sigma squared? JOHN W. ROBERTS: Yes, yes-- sorry. The noise you use pops out here. One comment: when people look at this algorithm in another framing, they often write the update with eta over sigma squared, which cancels that sigma squared, and in expectation you get purely negative eta dJ d alpha. You can do that too, if you really want eta times the true gradient. The important thing is that you move, in expectation, in the direction of the negative gradient. So, a couple of interesting properties. Here, you see, we still have to do two evaluations to get the update, right? If we want to cancel out that J of alpha term, we have to evaluate it twice. Now, it could be three-dimensional or a thousand-dimensional-- we still only have to evaluate it twice, not n plus 1 times. But it is two evaluations. And the question is, what happens if you don't evaluate at J of alpha-- if you only evaluate once? That's actually a very common thing to do, and it doesn't affect your expectation at all. Often, instead of the perfect baseline where you evaluate J of alpha exactly, people average the last several evaluations to get the baseline-- oh, sorry, I haven't defined baseline: this term right here, whatever it is, is your baseline. It doesn't have to be J of alpha. It can be an exponentially decaying average of your last several evaluations, which will be approximately J of alpha. It won't be perfect, but the point is it doesn't bias the update, and we're going to see that. Maybe you'd expect that you need to cancel that term exactly to still move in the direction of the gradient. You could imagine that without it-- I'll draw a diagram to make this clear. Suppose I just make the baseline 0, and I'm here. If I evaluate here, the cost is a positive number, so I move in the direction opposite z. If I evaluate over here, it's also a positive number, so again I move in the direction opposite z.
So maybe you think, oh, without that baseline we could be in bad shape. But actually, you'll move more in this direction when you take the sample on this side than you move in the other direction when you take the other sample. That scaling-- the fact that you move proportionally to how big the change in cost is-- means that in expectation, you still move in the direction of the true gradient. Now, in practice you won't do as well, and it makes sense that you won't: really, when you think about it, it's going to bounce around crazily. But it still moves in the direction of the gradient in expectation. And you don't just have to take my word for that. Look at the update again with the linear expansion, and now with some scalar baseline error b in place of the exact J of alpha term: you get negative eta times, in parentheses, dJ d alpha transpose z plus b, times z. The important thing is that b is uncorrelated with the noise z. Take the expectation-- expectation is linear-- and the first term is the same as before: the gradient term. Then we have the expectation of negative eta b z. b is uncorrelated with the noise, and they're both scalars, so we can split them: negative eta times the expectation of b times the expectation of z. And z is mean 0, so this contributes nothing. So your expected update does not depend at all on what you use for the baseline. You could put a constant there, the exact value, some decaying average-- anything-- and in expectation you'll still move in the right direction. In practice, though, it can make a huge difference. I don't know if anyone here has implemented these things, but a good baseline can be the difference between success and getting completely stuck, not moving anywhere. If you do small updates you should still be OK, but performance can depend a lot on the baseline-- sometimes it doesn't matter, sometimes it matters a lot. So again, a common thing to do is: every time I do one evaluation, I update, and for the baseline I take my last 10 or so evaluations, averaged with decaying weights so the most recent is weighted most heavily. That gives a decent approximation of the cost around here, and you don't have to evaluate twice every time-- you get improved performance, and it still works. And another cool thing-- going back to our assumptions about determinism-- the evaluation doesn't have to be deterministic, either. Say we add scalar noise w to the evaluation. That shows up inside the same parentheses. It's a random variable now, so it has an expectation, but if w is uncorrelated with z, we can split them: negative eta times the expectation of w times the expectation of z. z is mean 0 again, so it doesn't affect the expectation either-- we still get the gradient term. So you can have additive random noise in your evaluations and still move, in expectation, in the direction of the gradient. That's sort of cool-- this is quite robust. You can have errors in the baseline, noisy evaluations, all of these things, and the expectation still moves the right way. That's nice; we're going to see that it has a lot of practical benefits. Is everybody with me here?
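(A quick numerical aside of my own, not from the lecture: averaging many weight perturbation updates with a deliberately wrong baseline and noisy evaluations, to check that the expected update still matches negative eta sigma squared times the gradient.)

```python
import numpy as np

rng = np.random.default_rng(1)
grad = np.array([3.0, -1.0])             # true local gradient dJ/dalpha
eta, sigma = 1.0, 0.1
baseline = 5.0                           # deliberately wrong baseline

N = 200000
total = np.zeros(2)
for _ in range(N):
    z = rng.normal(0.0, sigma, size=2)
    Jval = grad @ z + rng.normal(0.0, 0.5)   # linearized cost + eval noise w
    total += -eta * (Jval - baseline) * z
print(total / N)                         # statistically matches the line below
print(-eta * sigma**2 * grad)
```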
I don't know if I went through that quickly, or if-- everyone's being sort of quiet. They look sort of-- AUDIENCE: w is the baseline there? JOHN W. ROBERTS: No, no, sorry-- this w is noise. Maybe you'd prefer it to be called xi or something like that, but this is just added noise. You could say w is drawn from-- it doesn't really matter what distribution, as long as it's uncorrelated; we could say it's drawn from some other Gaussian. And its expectation really can be nonzero, too-- although if it's not mean-0 noise, you might as well fold its mean into your cost function and make it mean 0 again, right? Yes? AUDIENCE: So the idea is to add this into the term J alpha? Or replace the term J alpha with a different baseline? JOHN W. ROBERTS: Replace it, right. AUDIENCE: OK. And then so what cancels-- when we do the Taylor expansion, what cancels-- JOHN W. ROBERTS: Nothing. Nothing cancels it. You see, that's the thing. So I put a b there-- maybe I'm reusing too many symbols. AUDIENCE: Oh, is it that J alpha is also uncorrelated with z? JOHN W. ROBERTS: Well, J alpha is just a scalar, right? It is some number. So it is-- AUDIENCE: z is mean 0, so. JOHN W. ROBERTS: Yeah, z is mean 0. So whether we put in J alpha, or an estimate of J alpha that has some error-- our J alpha minus the baseline is going to be some number; it doesn't matter. If we put in nothing at all, then the error is the whole J alpha term, which is again some uncorrelated number, and the expectation gets rid of it. Does that make sense? Everyone looks sort of just-- AUDIENCE: So, putting another constant in that equation for the update makes you move more in some random z direction, but on average you're still going down the gradient the same way. JOHN W. ROBERTS: Yeah. You can move more-- if you put some giant constant in every update, maybe you'll bounce around farther-- but on average you still move in the right direction, because you'll move farther in the right direction than in the wrong direction, so it cancels out. So everybody is on board here? OK. I just really want you to-- AUDIENCE: Why wouldn't you include the actual J alpha? JOHN W. ROBERTS: Well, because you get it by evaluating the function-- by running a policy-- and it can be expensive to get that J alpha. For example, I used this in some work I did with this flapping system; I'll show you videos of it. Maybe I'll start setting that up right now. We've souped it up now so it's a bit quicker, but it used to be that every time I wanted to evaluate the function, I had to sit there for 4 minutes and have this plate flap in the water while we measured how quickly it was moving, all these things. Evaluating that function once took me 4 minutes. So avoiding evaluations is important. If you can just take your several previous evaluations and average them together-- it's not a perfect estimate, but maybe it's an OK one-- then you don't have to spend any more time. In that sense, it's cheaper. Please ask as many questions as possible, because this is-- AUDIENCE: But at some point you have to measure every time, right? JOHN W. ROBERTS: You have to.
Yeah, you have to measure every time you want to do an update. But here-- let me draw a tiny diagram. The question is: say my current alpha is here. I need to randomly sample something, so I have to do that evaluation. The question is, do I have to evaluate here, too-- at my J of alpha? I could estimate it instead, because I have a bunch of other evaluations from however I got here. I've already evaluated nearby; if I average those together, I'll get a pretty good idea of what this is. If I wanted it exactly, I'd have to run my system here and then run it again here, so every update would require two evaluations instead of one. Now, sometimes it still makes sense to do that second evaluation, though-- depending on your system, if it's really noisy, or if you have to do really big updates. AUDIENCE: [INAUDIBLE] using this delta alpha would you calculate [INAUDIBLE]? JOHN W. ROBERTS: Pardon? AUDIENCE: Yes. JOHN W. ROBERTS: I'm sorry, I didn't hear what you said. AUDIENCE: This new alpha that we have, that we had the [INAUDIBLE] before-- JOHN W. ROBERTS: This one? Yeah. AUDIENCE: You calculate it by having a previous alpha, and then we did this thing, and-- JOHN W. ROBERTS: And I moved in that direction, right. AUDIENCE: Right. But you're saying that you don't want to calculate the value for this new alpha. Instead we use, for example, the past 10 values of J of alpha, and use that as your estimate. JOHN W. ROBERTS: Yeah. You're saying that doesn't make sense to you? AUDIENCE: It does make sense. In some cases I can think [INAUDIBLE] actually [INAUDIBLE] if the change-- a small change in alpha would have a huge effect on the end value [INAUDIBLE] from J-- like, if you have a very discrete-- like, [INAUDIBLE] condition pass over [INAUDIBLE]. JOHN W. ROBERTS: If you move very violently, yeah. That's a good point, in practice. There's the theory, like this expectation stuff, and then there's what I've seen applying it to several systems. In practice, when you have really bad policies and you need to move really far in state space-- let's say that right now you're trying to swing up a cart-pole, and you're not getting anywhere near the top, and your reward function doesn't have very smooth gradients, so you can't just improve the swing bit by bit-- a good thing to do is to put in really big noise and a big eta, and then do the two evaluations. Because the cost is going to change so much every time you do it: for example, if you jump and suddenly you're doing a lot better, your previous average is not going to be representative. And then you can actually bounce around-- you can bounce around so violently in this big space of policies that you never improve. Maybe I should draw a diagram to make this clearer. But the key thing is: if you're making really big jumps, and your cost is changing a lot every time, and you still want to move in the right direction, doing two evaluations can make sense. Because if you're stuck where you don't have good gradients in your cost function, a bunch of little updates that slowly climb aren't going to give you anything-- maybe the function isn't even differentiable there.
Maybe you have some discrete way of measuring reward-- like how many time steps you spend in some goal region-- and you don't have any time steps there yet, so there's no gradient at all right now. Then you need to be violent enough in your policy changes that you eventually get into that goal region; once you're there, you have gradients and you're in good shape. That's actually another thing I was going to talk about: designing your cost function is extremely important. There are cost functions on which this works extremely poorly, and there are cost functions that make it a lot easier. If you have a cost function which is relatively smooth-- ideally without the banana problem, relatively uniform across the different parameters-- it can work a lot better. And you can formulate the same task lots of ways, since lots of times your cost function isn't what you really want to optimize; it's just a proxy for getting something done. That's what Russ talked about when he said he didn't care about optimality: here's a cost function that gives us a means of solving the task. So there's a whole family of cost functions you can imagine coming up with that try to encapsulate the task. For the perching problem, for example-- this plane perching, which is a difficult problem, and a problem where the models are very bad. The aerodynamic models of a plane flying like that are extremely poor. We actually have some decent ones-- we spent a lot of work getting decent ones-- but the regime right at the end, where you really need high fidelity, is hard to model. So suppose what we really care about is hitting that perch, and the cost function gives you a 1 if you hit the perch and a 0 everywhere else. That means until we hit the perch, we get no information: we could be getting really close, we could be really far away-- it won't tell us anything. Now, a lot of reinforcement learning actually has rewards like that-- delayed rewards, where you get the reward here and you have to propagate it back. When you're trying to accomplish a task like this one, that doesn't necessarily work that well. If instead you measure something like distance from the perch, or distance from your desired state, then when you get a little bit closer, you do a little bit better, and you can measure the gradient. So that makes a big difference. And if there's a region of state space where your cost function has a good gradient, and you're out here where there isn't one, then little perturbations just random walk-- they give you no update at all, because you see no change in cost. But if you do really big ones, maybe you bounce into the region where you get some reward. And in that case, the updates are so big that averaging doesn't make sense, a baseline still gives you a big advantage, and maybe two evaluations are worth it. In some of the flapping work I did, I used two evaluations, because when I was moving very violently, averaging didn't work that well, and getting a good baseline was worth the extra time.
But when we ended up getting it working, we put it online, and we actually-- we updated it every time we flapped. So it was just 1 second, flap, update, flap, update. And that way, we pretty much were able to sort of cut our time in half, because our policies were very similar, so our average was a pretty good estimate. It's so noisy that one evaluation, anyway, isn't necessarily that great of an estimate of your local value function. And so yeah. We just did an average baseline. And that's sort of half the running time, right? And so it can be a big one. And so there's a lot of details when you implement it about the right way to sort of put this together, depending on what your cost function is, and how good of an initial policy-- what your initial condition and your policy is. But yeah, there's a lot of factors like that. All right. So now we can do some of-- sorry. I can do an example of this. So I keep on talking about this flapping system. That's what I worked on for my master's thesis. And so that's sort of what my brain always goes back to, particularly since we used all these methods. But all right. So now I wonder if I can do Russ' thing where he makes the font really big. That's also-- the thing I'm about to run, it's this relatively simple lumped parameter simulation of the flapping system. This is a lumped parameter model of-- let me show you, it's pretty cool-- of this system that a guy at NYU named Jun Zhang built, a robot that effectively models flapping flight. It's a very simple model. I'll show it to you in a second. But it has a lot of the same dynamics and a lot of the same issues as sort of a bird. So the system, it's a sort of a rigid plate. Well, the one you see here, we attached a rubber tail to it. But the one-- most of these results are on actually a rigid plate, where it heaves up and down, and what we can do is control the motion it follows. I hope that the camera can see it. AUDIENCE: [INAUDIBLE] moonlight [INAUDIBLE]. JOHN W. ROBERTS: Mood? AUDIENCE: Moonlight. JOHN W. ROBERTS: Oh, moonlight. I was like, mood lighting? OK. Make my lecture more enjoyable. All right. So this is the system. You can see we drive it up and down. That big cylindrical disk right there is the load cell. So that measures the force we're applying. And then what we do is we control this vertical motion. How we control it is-- that's an important thing. I talked about how the cost function matters a lot. Well, another thing that matters a lot is the parameterization of your policy. Now, in the last few problems we had open-loop policies, which are pretty simple. You have like 251 parameters or something like that, right? Now, when you're doing gradient descent using back prop or SNOPT, you have the exact gradient. It's cheap to compute the exact gradient, so you can sort of follow this pretty nicely. But when you do stochastic gradient descent, the probability of being sort of perpendicular to your gradient, or nearly perpendicular to the gradient, increases as the number of parameters goes up. So you can think, if you're on-- if you're doing a 1D thing, you're always going to move pretty much-- it doesn't matter if you move in the right direction or the wrong direction. That's one of the benefits of this instead of that hill climbing. But you're always trying to get moving in the right direction to get this measurement. Does that make sense? If you think in 2D, you have the circle. You're going to be moving around.
You're going to be along-- close to the direction of your gradient pretty often. A sphere, it's a lot easier to be pretty far away. I mean, sort of a lot more of the samples you do are going to be relatively perpendicular to your true gradient. And as your dimensionality gets very high, a lot of your samples are relatively perpendicular. And the thing is that whether you go in the right direction or wrong direction doesn't matter. You'll get the same information either way. Going perpendicular to the gradient gives you no information. Because you'll get no change, and there's no update. So it's still-- the curse of dimensionality is alive and well. And very high-dimensional policies can be slower to learn. And so those 251-dimensional policies you use may not be the best representation, because they sort of-- I mean, you probably don't need that many parameters to represent what you want to do. So for this, what we had-- and this made a big difference, we tried different things, this one worked really nicely-- was a spline. So we said, all right, if you have time, I'm going to set the final time here. Now that's a parameter, too. Then this is the z height. It's in millimeters or whatever you want. And I'm going to say, OK, I'm going to force it to be at the beginning, in the middle, and at the end-- wow, that's nowhere near the middle, is it? I shouldn't be a carpenter in the 1200s. So what do we do then? We then have five parameters-- now, we've done several versions, but the simple one is right here-- five parameters that define a spline. So this is going to be smooth. You can enforce it to be a periodic spline, which means that the knot at the end, the connection here, is continuously differentiable as well. And then we force that this parameter-- so this number p1, this one is going to be the opposite of it. So it's a negative p1. And that's true for all these. So this way, we have this relatively rich policy class that has sort of the right kind of properties. But we do it with only five parameters. So you can imagine, if we want it to be asymmetric top and bottom, that would double our parameters. And we probably wouldn't want to tie this guy to 0, so we'd even add one more. And when we have the amplitude, you can either fix it or make it free. I can add another parameter. So you can see that as you add this richness, you're going to add all these different parameters. But getting-- using a spline rather than-- this is the height right now, this is the height right then-- it's a huge advantage. Because what's the chance that you're going to want it to move very violently on a sort of 1 dt time scale? And if you try to do that, you could actually damage your system. Some of the policies that I-- when I was working on this parameterization, I had the load cell break off and fall into the tank once. Luckily it broke off the wires and lost its electrical connection before it fell in there, but yeah. So if you come up with a parameterization that appropriately captures the kind of behaviors you expect to see, it can be a lot faster to learn. Now, sort of the warning, then, is that you're only going to be optimal-- the only thing is, you're going to get to a local minimum in this sort of parameterization space. So if you parameterize-- if I were to parameterize this by saying, OK, well I'm only going to let it be some-- let's say I was going to do like a Fourier series kind of thing and say, OK, it's add this, this, and this-- now, that's not very rich. It's only three parameters. That's good.
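Here is one plausible reading of that parameterization, using SciPy's periodic cubic splines. The exact knot layout on the real rig isn't specified in the lecture, so the pinned zeros at the beginning, middle, and end, and the mirrored negative second half, are assumptions in this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def make_waveform(p, T=1.0):
    """Periodic spline waveform from free knot heights p (hypothetical layout)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    # knot times: pinned zeros at 0, T/2, and T, free knots spread inside each half
    t_half = np.linspace(0.0, T / 2, n + 2)
    t = np.concatenate([t_half, t_half[1:] + T / 2])
    # heights: 0, p1..pn, 0 on the way up; -p1..-pn, 0 mirrored on the way down
    z = np.concatenate([[0.0], p, [0.0], -p, [0.0]])
    # periodic boundary condition makes the seam continuously differentiable
    return CubicSpline(t, z, bc_type='periodic')

wave = make_waveform([0.8, 1.0, 0.6, 0.3, 0.1], T=1.0)
ts = np.linspace(0.0, 2.0, 400)               # two flapping periods
heights = wave(ts % 1.0)
```

The mirrored knots enforce the top-bottom symmetry with no extra parameters, which is why the class stays at five dimensions until you decide to relax that symmetry.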
But I'm going to do all sorts of things that are probably extremely sub-optimal. Now, it's still going to find the best kind of behavior, or the locally best kind of behavior it can, using this kind of policy. But it could be quite bad. So the actual optimum could be very different. So your policy class, you'd like it to include the optimum. And so that sort of is-- it depends on what the question is. You sort of have to just have a feel for what is a good policy class. How do I get [? my ?] dimension as low as possible, while still having the richness to represent a wide variety of viable policies? So when you're trying to implement these things, that can make a big difference. So yeah. So we set up that. And we could control the shape of that curve. And so that is the policy parameterization we chose. So going back to this code here. Now, I think I can just run this here. This is going to be doing that bit we talked about on-- again, a simple lumped parameter model of that flapping system. So here's our curve. It's this, you see this-- well, this is the forward motion of the thing as it's flapping. This is the vertical motion. So this is sort of the waveform it's following. This is where it is in x position. You can see it sort of goes fast, bounces around-- sorry, this is the speed, not the position. So you can see it accelerates from 0, and then as it's pumping, it sort of oscillates a bit. In practice, there's more inertia and everything, so you don't see these high-frequency oscillations. But this is just a relatively simple, explicit model. This is the shape we follow. So we're following that curve. And we have a little bit of noise to it. And let me-- so now we're going to perturb it, measure again. Try to measure again, and boom, here we are. We got a little bit better. This is our reward, and then we did another sample, and that's our reward. Let's do it again, better. You see we improve quite nicely. And also, notice, relatively monotonically. Now, you might be surprised by that. Because even though we're moving-- we have this sort of guarantee we'll move within 90 degrees of the gradient. That's what I was talking about with, you'll always be within 90 degrees if it's deterministic. And this is deterministic. But it also sort of is this linear kind of interpretation, right? So as you run it, you'd imagine that you could perturb yourself far enough that you got worse. Now, the reason that's not happening is because I'm perturbing myself very small amounts, and I'm updating very small amounts. So all this sort of linear analysis is appropriate. And actually, you can see what I talked about-- that you always move pretty close to the true gradient-- there. Sometimes it moves up a lot, sometimes it's steep, sometimes it moves up shallowly, but it does a pretty good job. Now, we can change that and try to sabotage our little code here. Or sometimes you're OK, actually. That's the thing, is that in practice lots of times it's OK if it gets worse sometimes, because allowing it to get worse, being violent enough to get worse, it'll reach the optimum a lot faster. So here, this is our eta parameter. Let's make it bigger by a factor of-- let's make it 20.5. I don't want to risk-- [INAUDIBLE] not get worse. AUDIENCE: Is that the noise or-- JOHN W. ROBERTS: Pardon? No, that is the update. So the noise is the same. This noise is still local. But now we're jumping really far. And so you can imagine, we're measuring the gradient. We're moving really far.
And now where we've moved to, that gradient may be a poor measurement of sort of the update over that long of a scale. So let's do this again. This is always fun. Oh, there you go, already. That's better. See now-- but you see, that's a huge increase then. That's what I'm talking about, is that there's sort of a sweet spot. And you don't necessarily want monotonic increasing. Like, there's limitations on how violent you want it to be in practice, because on a robot, a very violent policy could break your load cell off and have it almost cost you $400. So you don't do something crazy. But there's also the willingness that-- oh, that's ugly. But you see, I mean, if you bounce pretty far, you can also get huge improvements. And so there's sort of this-- monotonicity in your increasing reward is not necessarily the best way to learn, I suppose. That's from the trenches. I learned that the hard way through many, many hours sitting in front of a machine. So then the other thing that we can do is this eta. Let's decrease eta. And now let's make our sigma really big. Now, this is going to be really crazy stuff probably. But you see, now we're going to measure so far. And we're going to get this sort of-- we're going to try to measure the gradient, but it's going to be just way off, because it's moving so far that the local structure is completely ignored. Yeah. I probably don't have to be nearly as dramatic as this to make my point. But, you know, it's just completely falling apart. Yeah. That's doing as badly as it can, I guess. I think it's like almost [? no net ?] motion, so. Yeah. So the sweet spot, then, is somewhere in between, where maybe you want an eta of, say, 3, and sigma, I don't know, 0.1. Oh, that's probably still too violent. Yeah, definitely. But I think that is-- that's the sort of game you have to play. And how big all these things are depends on a number of factors specific to your system. Like, if your system-- if the change is very small in magnitude, if your cost function is such that it changes between 10 to the negative fifth and 10 to the negative fifth plus 1 times 10 to the negative sixth, that's changing by very small amounts, right? You could need a very large eta just to make up for the fact that your change is so small. So a big eta-- like, there's no absolute perception of what is a big eta. It's not like 10,000 is a huge eta. 10,000 could be a very small eta, depending on what your rewards are. Same thing with sigma. It depends on how big your parameters are. Because, I mean, my parameters here are of order one, which is sort of convenient. Yeah. So there. Yeah. So here we're learning pretty quickly. And so all those sorts of things, that's sort of a disadvantage of this technique, is that there's a lot of tuning to sort of solve these things-- where with SNOPT you don't have to set a learning rate, here you have to set a learning rate. You have to set your sigma. And when you have really sort of hard problems, there's even more things you have to do. Like, your policy parameterization could affect a lot of things. There's a lot of issues. But sometimes, that's-- sometimes it's the only sort of route you have. Like, the best this can ever do is gradient descent. It's never going to do better than gradient descent. And so there's a lot of fancy packages out there. When you have better models and stuff like that, you can do better than gradient descent.
But even though you're only ever going to achieve gradient descent, you can achieve it despite the fact that you know nothing about your system, your system is stochastic, and it's noisy, like that. And so in those cases, it can be a big win. AUDIENCE: So when you were doing this in real life, instead of pressing Space each time, you were sitting for 4 minutes in front of a flapping-- JOHN W. ROBERTS: I automated pretty much everything, yeah. So I was-- but yeah. I mean, this is a little simulation. AUDIENCE: Every interval was like actually it running and-- JOHN W. ROBERTS: Oh, yeah. When I press Space, it actually does two-- because this is using a true baseline. I didn't put in the average baseline. So this is running it twice every time I press Space. But yeah. You can imagine every time I'm doing Space, it does this one update and gives me that new point. What I was doing is I sat there, and it would run, and I'd babysit to make sure it wasn't broken. And it would throw up the curve as it was running so I could make sure that the encoders weren't off, just sort of sitting there keeping track of all these things. I was like a nuclear safety technician. I'd just eat some doughnuts and go to Moe's, and I would have been a good sitcom character. But yeah. So I mean, pretty much just babysitting it. But yeah, every time you did it, every time you got a new update-- like, every one of these points cost me 6 minutes or something, because it was like a 3-minute run for-- basically a 3-minute run for an update. Because I wasn't using the averaged baseline then either. I was trying to be more violent. But yeah. And so that's the thing, is that that is the perfect encapsulation of why you want to use this information as carefully as possible. It's because it's very expensive to get a point. Like, here it cost nothing. If I were to turn off the pause, like, this thing would climb up like that. If you're running on a robot, like we want to use this on the glider, every time you launch that glider, you have to set up the glider, fire it off, take all this data, and reset it by hand, and launch it again. So getting a data point there is going to be extremely expensive. And so we've actually done some work on the right ways to sample. You can imagine trying to come up with the right ways to have a policy. But sampling intelligently can save you a lot of time. We sort of look at the signal-to-noise ratio of these updates. I don't know if anyone-- some people here probably at least heard about that stuff since they're in my group. But I'll probably talk about that maybe tomorrow. But there's these things you can do that can improve the quality of your performance a lot. And actually, I tested on this exact system. I put it on the system, and I ran it with the sampling scheme our results suggested-- this is a better way to sample-- and then with just a naive Gaussian kind of sampling, and you learn faster. And in the context of me sitting there and spending my days in New York City huddled in front of a computer, that's a big win. So anyway. AUDIENCE: So when you say change the sampling, you can just change the variance like you would do to a non-Gaussian distribution? JOHN W. ROBERTS: Right, yeah. So that-- yeah. In fact, we used a very different kind of distribution overall. You can still-- the linear analysis will still work. But it's just a local-- but yeah, there's work where they change-- We also have something where you change the variance of
the Gaussian, but your different directions have different variances. And so if you sort of need an estimate of the gradient, then-- but you just estimate the gradient to bias your sampling more in the directions where you think the gradient is, so that more of your sampling is along the directions you think are most interesting. And so that can be a win when you have a lot of parameters that aren't well correlated. Like if you imagine you had a feedback policy where a parameter is active in a certain state-- like, if I was at negative 2 to negative 5, I do this, and let's say I never get there, then that parameter has nothing to do with how well I perform. And so if you know that, you can sort of-- there's something called an eligibility you can track. And you can just not update that parameter. There's no reason to sort of be fooling around with that parameter when it's not affecting your output. And if you know that, you can do things like that. And we sort of have a more careful way of shaping all these-- of shaping this Gaussian to learn faster. And it can. And also, just a completely different kind of sampling. Like, it's-- well, maybe I'll try to talk about it. Because I think it's pretty interesting stuff. The math is a little bit nasty, but I'll skip the really ugly steps. And actually, the one with the different distribution isn't even that nasty. But yeah. I mean, we ran it here and it [INAUDIBLE] improvement. Yeah, so. Did I answer your question? Yeah. It's not just changing the variances. It's more complicated than that. Although changing the variances can be a big win. For example, if you knew you had this anisotropy, and if you were to have different etas in different-- if you were to scale everything in your sigma, you could effectively make it squashed in, right? I mean, just a rescaling of this anisotropic bowl will make it right. So if you can evaluate that, you can fix it. But you sort of have to know that that's going on. That's where you have adaptive learning rates and stuff. In gradient descent, if you keep moving in the same direction, you have a bigger learning rate. You can have different learning rates for different parameters. This one, as you get close to a local min, you'll decrease your learning rate and your noise, because you don't want to just bounce around. You don't want to be jumping all across this min. So-- AUDIENCE: [INAUDIBLE] talked about basically policy gradient when we were [INAUDIBLE]. JOHN W. ROBERTS: Yeah, no. Yeah. I mean, there is-- it's definitely exactly that. It's just stochastic gradient. But yeah, it's all policy gradient ideas. Because we don't-- I mean, these things don't have a critic, right? But you can combine this with some policy evaluation techniques. And you can turn them into actor-critic algorithms. A very simple-- do people know about actor-critic algorithms? That's going to be a subject I think Russ talks about at the end. But the thing is that right now-- well, I'll motivate it in a completely different way. We talked about how this baseline can affect your performance a lot, right? Now, a good baseline can make you do a lot better. Now, the thing is that, what happens if-- here we start with the same initial condition every time. But let's say that I actually could be in one of two initial conditions. I can measure this, and then I run it. And the system behaves very differently, or the costs are very different depending on my initial condition.
But I want sort of the same policy to cover both of these. So the thing is, if I just did this and I had one baseline for both of them, and I could randomly be putting these in [? different initial ?] conditions or whatever-- or I mean, I could-- there's probably a more sensible way of saying this, but I don't want to confuse the issue. So if you could have different initial conditions, you can make your baseline a function of your initial condition. Does that make sense? Instead of just having B, instead of evaluating it twice, I could have my B of x. And if my x is here, I'm going to say, OK, my cost should be like this. And if my x is here, then it's like, oh, my cost should be like this. And when I evaluate my cost, when I perturb my policy, I have a better idea of how well I'm doing. Does that make sense? It probably doesn't, so. All right. So let's say-- now, this is phase space now. Now let's say that I can start in either of these. And let's say that I'm trying to get to-- let's draw this here. I'm trying to get to 0. That's my goal. And I can measure this. But then one of them, I'm going to go [WHOOSH] like that. And the other one I'm going to have to go, I don't know, through whatever torque-limited reasons like that or something. So this one always costs more than this one, all right? It doesn't matter how good my policy is. Like, you can imagine just having a feedback policy. It doesn't matter how bad it is, how good it is. I mean, the same policy is always going to do worse here. Now, if you believe that a good baseline improves performance-- and trust me, it does-- then I don't want the same baseline. I don't want the same B for both of these situations. Because this guy should always be around 50, and this guy should always be around 20, right? So what I could do is I could have my baseline be a function of x. And I'm going to be like, OK, here my baseline is 50, here my baseline is 20. And let's say I don't know that from the start. I can learn my baseline while I'm learning my policy. So I can use the same policy for both situations. And then over here I measure my state, and I'm like, oh, over here I'm doing badly all the time. So my baseline is going to be high. And over here I'm always doing well, so my baseline is going to be low. And so in that way you can take that into account. Does that make sense? It does look like-- AUDIENCE: [INAUDIBLE] this is basically Monte-Carlo sampling and learning. Because each time that you set your-- so your policy is defined by a set of alphas. And then you fix it, you run it, and you get a sample that says what is the value associated with this starting point given this [INAUDIBLE] policy. JOHN W. ROBERTS: Are you talking about Monte-Carlo for policy evaluation? Because Monte-Carlo [INAUDIBLE]. That's like TD infinity or whatever it is. And that's for policy evaluation. That's how you make a critic. The policy is different, right? The policy, you're doing this update, then you're advancing it a bit. Your critic, the way I just described making the baseline for this, that would be a Monte-Carlo interpretation. You could do it with TD lambda, or anything you wanted to. But yeah. So the important thing is-- I mean, it looks like there are sort of blank faces after I talked about that. But Russ, I think, is going to go into more detail into actor-critic. But maybe I can talk about that more tomorrow if you want. Yeah.
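A minimal sketch of that state-dependent baseline, assuming the initial condition can be bucketed by some key function; the running-average update is one simple way to learn b(x) alongside the policy, and every name here is hypothetical.

```python
import numpy as np
from collections import defaultdict

baselines = defaultdict(float)   # b(x): one running average per initial-condition bucket
counts = defaultdict(int)

def update_with_state_baseline(alpha, x0, evaluate, key, eta=0.1, sigma=0.01):
    z = sigma * np.random.randn(len(alpha))
    reward = evaluate(alpha + z, x0)              # one rollout from this initial condition
    k = key(x0)                                   # e.g., which of the two starts we drew
    b = baselines[k] if counts[k] else reward     # learned baseline for this start
    alpha = alpha + eta * (reward - b) * z        # update against the right baseline
    counts[k] += 1                                # ...and keep learning the baseline itself
    baselines[k] += (reward - baselines[k]) / counts[k]
    return alpha
```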
I mean, the important thing is that right now this is a very simple kind of idea we've talked about, where you run the alpha, and then if you ran the same alpha, it would always do the same. Or maybe it just has a little bit of additive noise. But if actually running the same alpha from different states-- which happens a lot in a lot of systems-- the different states could have different expected performance. And so while you'll still learn without the baseline, having a good baseline everywhere will make you learn faster. And so it's worth learning a baseline and learning the policy simultaneously. And sort of the thing we talked about, where you just average your last several samples to get your baseline-- that's already learning a baseline, right? We're just learning it for everywhere in state space. We're saying it's the same everywhere, right? AUDIENCE: That idea of sampling, can you do something [? smarter ?] using Gaussian processes to do active learning on top of it, to sample in areas that are more promising? Instead of just randomly moving somewhere? JOHN W. ROBERTS: I mean, there are ways of biasing your sampling based on what you think the gradient is. I mean, that's one of the things we worked on with signal-to-noise ratio. I'm not sure exactly what-- AUDIENCE: I know some people worked on Aibos walking, and they wanted to find a gait which maximizes the speed of the Aibos when they're walking. JOHN W. ROBERTS: I think I read that paper, yeah. AUDIENCE: Yeah, and there are like 12 or 13 dimensions. And it seems like a similar problem-- JOHN W. ROBERTS: No, I think they use a very similar algorithm. I think they had a different update, though. It was the same kind of idea. I think that the update structure was maybe different than that. Yeah. So I won't dwell on critic stuff. That's, I think, the last lecture in the class or something like that. But yeah. So here, I mean, this is sort of the sample system. And you can see how this thing is robust to really noisy systems in practice. Because when I ran it on the flapping thing down at NYU, the consecutive evaluations could be very different-- not because of any change in policy. You run the same policy, you get a big variance. So that's just because you're running on this physical robot with this fluid system and you're measuring the forces with an analog sensor. And so it's just very noisy. But it's robust to that. And that's what's so nice. Put that here. So look at that. I mean, this one-- these, luckily, didn't take 3 minutes anymore. They took 1 second. So it wasn't nearly as bad. But, I mean, look how much it's changing. It's changing a significant percentage every time, right? AUDIENCE: These are all with the same [? taping loop? ?] JOHN W. ROBERTS: Yeah. Yeah-- I mean, no, this is playing a different-- this is learning. So the thing is that-- I mean, I showed you how it wasn't monotonic before. But this, you can run the same tape. I mean, up there it's pretty much running the same tape. So up there you get an idea of what the noise looks like when you're running the same policy. Right. And so you can imagine-- yes. AUDIENCE: Just [INAUDIBLE] went with blue and red. JOHN W. ROBERTS: Oh, blue and red are different ways of keeping track of my baseline. All right. So I mean, don't worry about the different blue and red. They're just sort of an internal test to see the right way to make these things-- we determined that it didn't make a difference. But yeah. AUDIENCE: It looks like the red is much smoother.
JOHN W. ROBERTS: I don't know. It may be the plotting. I may have plotted blue on top of red or something, too, you know? I don't know. I remember we decided it didn't make much of a difference. Yeah. I see what you're saying. It does look like the variance is a bit less, but I don't think it was. But these are trials on the bottom. So that's, every second we sort of did another flap, we did another update. So this is update number along the bottom. And yeah. This is-- we actually have a reward instead of cost here. So it's going to go up instead of down. But yeah. So despite the fact this is really noisy, despite the fact that we had this average baseline, which I was talking about-- so our baseline wasn't perfect-- it still learned. It learned pretty quickly. I mean, 400 samples maybe doesn't seem very good. But that's also less than 10 minutes. So that's like 7 minutes. So in practice it can work pretty darn well. And solving this thing with other techniques would be very tricky. Well, I mean, you could build a model like this model we have and stuff like that, and you can try to solve it with a simulation. That's generally how they solve a lot of these problems, is to do the optimization on a model. So there's this fly. Jane Wang at Cornell tries to optimize the stroke form for a fly, like a fruit fly. I think it's at fruit fly scale. And so she just built a sort of pretty fancy model of this thing and then simulates it. She does the optimization on a computational fluid dynamics simulation. And so that's one way you can-- and there you can get the gradients, you can do all the sort of things we've already talked about. Because you have the model, you can do all these things explicitly. But the model takes a long time to run. I think the optimization took months of computer time. So if you-- that's the thing here, is that the full simulation of this system, where it took me 1 second to get an update, it takes, I think, about an hour per flap. So an hour on a computing cluster to get one full-fidelity simulation of one flap. And that's even the simpler one. We're working on other ones, too, that have sort of aeroelastic effects, which are where sort of the body deforms in response to the fluid forces. And simulating those is even harder. And so where it takes an hour to get an update, I can get it in a second. And the thing is my update is going to be noisier and I don't get the true gradient. But when you can get 3,600 updates in the time of one of theirs, you're going to win. I mean, I'll get one flap in the time it takes me to optimize and sit there for most of an hour, you know? So you can see in those kinds of problems, it can be a big win, especially when a simulation is extremely expensive, or computing the gradient is extremely expensive, but you have the robot right in front of you. You can just take that data, accept the noise, do model-free gradient descent. I think that's what I wanted to talk about. If you have any questions or anything didn't make sense at all, please let me know. Otherwise, maybe I'll introduce something that I'm trying to talk about tomorrow, a different interpretation. I'll just try to get your brain ready for it, I guess. But if there are any other questions on this, please ask. AUDIENCE: What was the reward function for [INAUDIBLE]? JOHN W. ROBERTS: The reward function for this was the integral of velocity, of spin velocity, over the integral of power input. So it measured the force on it, multiplied that by the vertical velocity.
That gives you power, which is the rate of work. And then it just sort of calculates the distance. And so that ratio is what we tried to optimize. So it tries to figure out sort of the minimum energy per unit distance. And so it spins around in a circle, but it's a model of it going forward. So we did it for an angle, but you can do it just as easily if you had a linear test. It's just harder experimentally. And so it's-- it's sort of an efficiency metric. Yeah? All right. Turn the lights back up. Make sure I crossed all my Ts, dotted my Is. Oh, yeah. And actually, there's one story, too, before I get into that thing. So a lot of these things originated, like a lot of the things we've seen for neural networks-- like back prop, like gradient descent. I mean, we learned that [INAUDIBLE] originated in the context of neural networks, RTRL did. And a lot of this did, too. The REINFORCE algorithm, which is the thing we're going to talk about, originated with neural networks. And one of the reasons people found it so appealing, particularly this kind of stochastic work, is that it seemed biologically plausible. I mean, what is the chance that a human brain is doing back prop? It could be doing some sort of approximate back prop or something like that. I actually don't know that much about neuroscience. But the thing is that these sort of computationally involved techniques for solving these problems don't seem like they're reasonable as sort of postulations on how the human brain or how neurons solve these problems. But this one, you can see, it's so simple. And the little randomness being part of it and just the sort of simple update structure does seem biologically plausible. Just sort of intuitively, it makes more sense. But even more than that, there's examples of-- there's data and evidence that suggests that these kinds of things could be one of the aspects of how animals learn. And the coolest one, I think, is there's these songbirds that learn how to sing. Like, they don't-- they're not born knowing a certain way to sing. But they hear their parents sing as they're growing up, and they start singing more and more. And they get better and better. And actually, you can hear them getting better until they sing like their parents did. And you can raise them in captivity and play them Elvis all the time, and they'll do like a songbird impression of Elvis, which-- I'm surprised you can't buy the CD of that on late night TV. But the-- right. But so a really cool thing, though, is that there's this part of the brain where, if you measure sort of the signals, they seem to be completely random. Like, they just seem to be random noise. And so it's like-- it's strange that there's not this structure. It's like, what could this part of the brain be doing? Why would it need to be producing random noise? What they did-- and maybe you bird lovers out there won't like it-- is they took one of these birds. And while it was learning-- like, they waited till a bird learned the full song. And then they deactivated, through some means, the part of the brain that produces random noise. And nothing happened. The bird-- apparently, the bird wasn't entirely the same. But it still could sing the songs fine, everything like that. Then they took a bird who was in the process of learning the song-- had learned some of it but wasn't perfect yet and was still getting better-- and they deactivated that part of the brain. And it started just singing the same song.
Like, how it'd been singing, it kept singing. It didn't get any better. And so that's some sort of proxy evidence that this random noise was related to the ability to improve-- that it's not storing the signal, that it's not necessarily the descent itself. But just this random noise could be how it's screwing up its song in an effort to get better and better. It screws up a bit, listens, and maybe it's a little bit better, and it does that. So that's sort of-- I mean, somewhat compelling evidence that biology could at least use this as some aspect of its improvement, that you shut down the random noise and it stops learning. I mean, if you get the variance to 0, you're not going to get worse, you're just not going to do anything. You're going to keep singing the same song. So that's sort of cool, I think. Right, so just to give you something to chew on. There's another interpretation of this. So here we sort of talked about this one. Here I think our idea was this sort of sampling, where we have some nominal policy. We perturb it, measure how good we did, how well we did, measure performance, and update. So this is pretty much what we have, is we have got some policy that we're working at. We add this z to it that changes it a bit. We run it, and then we update. There's a different interpretation-- my performance got too long. There's a stochastic policy interpretation. Now, in this, the way you think about it isn't that we have some nominal policy and we're adding noise to it. It's that your policy itself acts stochastically. So actions are random. Doesn't mean that they're completely random. I mean, they're random with some distribution. But you're not saying exactly what you do. And so you can imagine this is sort of like if you're playing Liar's Poker-- you know, where you hold the card above your head. And then you can see anyone else's card but not your own, and then you sort of bet on these things. Do you know the game? Maybe that game doesn't have enough cultural penetration to be a good example. But if you're playing normal poker, any sort of gambling games, if every time you had the same cards, you made the exact same bet, people could eventually sort of figure that out, and maybe they could use it to beat you. There's plenty of games like that, where, say, every time I have a certain card, I always bet this. Then if I bet that way, they're going to be like, oh, he has good cards. I'm going to fold. Or, oh, he always bluffs when he has this card. So this sort of deterministic policy doesn't make sense. The stochastic policy is exactly what you do. So your policy is going to be like, oh, I've got pocket kings, 95% of the time I'm going to raise whatever, and [INAUDIBLE] time I'm going to check-- those kinds of things, where you're sort of-- there's some noise in what you do. Now, you can question whether optimal policies would be stochastic in the kind of problems we look at. But the important thing is just to realize that your policy-- don't think of it as doing these things. It is these sort of distributions over what you do. All right? So the parameterization, then, controls the distribution. Ooh. My fifth grade teacher would not have liked that. But. So what you do, then, you can imagine you control, perhaps, the mean of a distribution.
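As a preview of that stochastic-policy view, here is a hedged sketch for a Gaussian policy whose mean is the parameter vector. Note the score term works out to the perturbation divided by sigma squared, so the update has the same structure as the perturbation method up to a constant factor, which is the equivalence described next.

```python
import numpy as np

def reinforce_update(alpha, evaluate, baseline, eta=0.1, sigma=0.01):
    # stochastic policy: the action itself is sampled, a ~ N(alpha, sigma^2 I)
    a = alpha + sigma * np.random.randn(len(alpha))
    reward = evaluate(a)
    # for a Gaussian policy, grad_alpha log pi(a | alpha) = (a - alpha) / sigma^2
    score = (a - alpha) / sigma**2
    # in expectation this follows the gradient of the *expected* reward of the
    # stochastic policy, with no local linearization required
    return alpha + eta * (reward - baseline) * score
```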
So where over here we had this-- you can think of it as really sort of exactly the same, where my other interpretation said, OK, my policy is alpha, and then I add random noise z to it. While here, my policy is parameterized by alpha, and my action is the same thing. It's just that it's not, this is what I'm doing and I'm sampling something else. It's that, this is actually my policy. If I ran the same policy, I would just do all these things with these probabilities. So your actions are stochastic. Now, that's sort of something that isn't always completely-- well, when I first saw it, it wasn't really easy for me to get my head around what all that meant. But yeah. So this is-- we're going to look at this. Yeah, I won't go into more detail. But tomorrow we'll look at this sort of different interpretation of how to do this. And you can get the same learning. We'll actually show that the update's the same, the behavior's very similar. But the properties are a little bit different. And the big thing is that you don't have to do this linearization. Here we did this sort of linear expansion and we say, OK, so this is true locally. When you look at it in this context, you can show that you'll always follow the gradient of the expected value of the policy. All right? And so that's a big difference, right? Here we're saying, OK, we're looking at the local gradient, we're going to follow the local gradient here. But let's say that you have a very broad policy or a very sort of violent value function. Let's look at this 1D one again, where my value function has something like this-- extremely violent. Well, when I put in this random stochastic policy, that smooths it out. And so even though-- it's because I have a stochastic policy. Running my policy, the cost is a random variable now, depending on what my actions are. Even if my dynamics are deterministic, because my policy is stochastic, my cost is stochastic. I'm going to get some-- if you look at this, there's some expected cost for running this policy on this. And you do this, you're going to sort of-- you can imagine sort of smoothing out some of this, right? Sort of averaging over all of these. And what you follow when you do this-- and the update is really identical. Like, it's actually just the exact same update. Possibly, there's a coefficient up front that you could put in or not. But the structure is the same. And the thing is that it'll follow the expected value of the performance of the stochastic policy. So it's sort of a different way of thinking. I think that this way is sort of the easier way to first think about it. But tomorrow will be more probability kind of things. And we'll talk about the stochastic policy interpretation and some of the ramifications of that. Yeah. And maybe some other interesting side notes. So yeah. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_15_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, so we added another tool, quickly I admit, last time to our arsenal, saying that if you've got a system that you're trying to do a trajectory plan for and your trajectory optimizers are failing you, because they're only guaranteed to be locally good, then there's a class of more global, more complete algorithms that are guaranteed to find a solution, if it exists, based on feasible motion planning. So we talked about RRTs mostly. I talked a little bit also about the more discrete planning, A* and things like that. OK, so, so far, our methods are still clumped in very distinct bins. We still have our value iteration type methods, our dynamic programming methods, which I love, which give us policies over the entire state space, so global policies. But they're stuck by-- they're cursed by the curse of dimensionality. So it only works for low dimensional systems. OK, we've been talking also about-- and we've been talking about policy search in general. And I'm going to, later in the class, make a point that that's not just about designing trajectories. I made it initially. We'll make it more compelling later. But mostly what we've been talking about other than that has been falling under the class of trajectory planning and/or optimization. OK. And this is only locally good but scales very nicely to higher dimensional systems. So you might ask, how well does this scale? I don't really think there's a good limit. I mean, it just depends on the complexity of your problem. People were using RRTs very effectively five years ago on 32-dimensional robots. That's pretty darn good, right? If I have a system where the start and the goal can be easily found, Alex says we can do it in thousands of dimensions. If I have a system where the only hope of-- if I have a six-dimensional system where the only hope of finding my way from the start to the goal is by going through this little channel, then I told you that's going to fail, even in low dimensions. So it's a hard question for me to specifically say what class of system you should expect this to work for. But I think they're the best tools we have for higher dimensional systems, OK. So the big question I want to address today is whether these ideas, which seem very local-- we were talking about single trajectory planning-- can be used to design a feedback policy that's more broadly general, that's valid over lots of areas of the state space, OK. Does that make sense, what I'm saying? Yeah? I'm saying I could design a single trajectory, but that's really only relevant very close to the trajectory. So let's make our favorite picture. Let's say I've designed for the simple pendulum a nice trajectory, which goes up and gets me to the goal. And that's good. If I start here, I know exactly what to do. We talked about stabilizing it with LTV LQR. So that means if I start here or here, I'm in pretty good shape. If I'm smart enough to index into the closest point on the trajectory, then maybe even starting here, it's fine. I'll just execute the second half of that trajectory. But what happens if I start over here or if I start over here?
Probably, the controller based on the linearization is not going to have a lot to say about the points that are far from my trajectory. So the goal today is to take these methods that we've been pretty happy with for designing trajectories, and even stabilizing trajectories, and see if we can make them useful throughout the state space, just to see how well that can work. OK, there's a couple of ideas that I want to get to, but first, I want to make sure I say that there's no hope of getting-- there's no magic bullet here. So there's no hope of me finding globally optimal policies, unless I'm willing to look at every state/action pair. I'm not going to tell you that I can use these trajectories to just magically do what value iteration did in high dimensions. That's not what I'm saying. Unless you have some analytical insight which turns the problem into a linear problem or something like that, I'm not saying that I'm going to give you globally optimal policies. What I'm trying to say is we can get good enough policies, potentially, using these methods, OK. So I just want to make sure I make the point that we really can't expect globally optimal policies unless we explore every state/action pair, or maybe if we have some analytical insight. OK, so the curse of dimensionality is real. It's not that some-- the value iteration algorithm is a little quirky. It's got this problem of dimensionality. It's really not that at all. It's not that somebody hasn't just come up with the right algorithm. The problem is you can't know if there's a better way unless you look at every possible way. That's the real problem. I mean, so it might be that I want to find my way from the start of the-- front of the room to the back of the room, and I've got some cost function which penalizes for the number of steps I take. But unless I go down that third row, I didn't know that there was actually a-- see if I can say something not ridiculous, but some pot of gold or something in the middle of the third row. And I just didn't see it, and I'm never going to see it unless I go down the third row. So you really can't get around that. So the goal is to really-- maybe we can efficiently get good enough policies. And I don't care about optimality, per se. I've said that before. I just care about using optimal control and the like to turn these things into computational problems. OK? So there's a couple ideas out there that are relevant. The first one sounds a little silly, but it's increasingly plausible. Let's say my trajectory optimizers or my planning algorithms got so fast, or maybe just computers got so fast, that I didn't have to do any work in the algorithms, that it takes me a hundredth of a second to design a trajectory from the start to the goal here. I've got a real time execution task here. Every, let's say, hundredth of a second, my control system's asking me for a decision about what to do. But if I can plan fast enough, and I find myself in this state, then you could just plan again. You could really just, every time you find yourself in a new state, plan a trajectory that's going to get me to the goal. If I find myself-- so if I'm executing this trajectory and I get pushed off by a disturbance, no problem. Every step, I'm just planning a trajectory to the goal. If you can plan-- if we teach the course again in five years, maybe that's the only answer. I don't know. If you can plan fast enough, that really is a beautiful answer.
For the most part, the problems we've looked at so far are not that easy that you can plan that fast, but there's a middle ground. So this was basically plan every dt. There's a middle ground that people use today a lot. I mentioned it once before. But a lot of times what we do to make real time-- to make the planning fast enough to execute in real time is a lot of times we'll do some sort of receding horizon problem. So how's that going to work? The simplest answer is, for receding horizon, I've got some long-term cost function, and my total cost function is from t equals 0 to some t final of g of x, u dt. I could-- I did it discrete time. That's fine. So n to capital N for discrete time. And let's say it takes me too long to plan N steps ahead, but I know I can plan three steps ahead really fast. So a lot of times people will actually approximate that with the problem of just looking some finite receding horizon step ahead. And if you can-- if you're doing it at every time-- if at time 2, you're asking for the receding horizon plan, then you can just look from time 2. So let's say my current time to my current time plus 3 of g of x, u. That could be an arbitrarily bad estimate of my long-term cost, of course. If you're clever enough to have a guess at the long-term cost, then you can put in some sort of estimate of what J of x from t plus 3 might be, and that's going to help. So for instance, let's say I find myself off my trajectory somewhere over here, and I'm willing to say my planner's fast enough. My controller's running at 100 hertz. And in a hundredth of a second, I can just about solve an optimal control problem that's of half a second in duration, let's say. That's a reasonable thing. Half a second along puts me-- it would put me here. So let's say I'm going to design-- I'm going to use my planner to design a trajectory that gets me back to this in half a second. And then I use my cost to go that I already knew from this design to get me to the goal. That's one way to implement what I just said, OK. And it's not just talk. I can show you a good example of it. So I showed you guys this once before, but let's just look at it again quickly. This is Pieter Abbeel's and Andrew Ng's work on the autonomous helicopters, OK. So they execute these comically cool trajectories with their helicopter. The way they do it is actually, they get a desired trajectory from a human pilot, and then they stabilize that in real time. They do-- he calls it DDP, but it's actually what we've been calling iterative LQR. I told you that a lot of people blur the lines, unfortunately, between those two. So they do an iterative LQR controller design, and they decided that it's fast enough that they can do it three seconds into the future. So they're doing exactly these receding-- every dt for the control for that helicopter, they're doing iterative LQR to design a trajectory that's going to get me back to my pilot's trajectory. And they're running it every dt, thinking three seconds ahead, and they say that's comparable to the time of-- the dynamics of instability for their helicopter. Yeah. Put it all together, and you get this thing tracking pretty cool trajectories, OK. There was a lot of good engineering behind that, too, getting the model right and getting the helicopter right, but it's pretty impressive.
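Written out (reconstructing the board work in LaTeX), the full discrete-time cost and its receding-horizon approximation with a three-step lookahead and an optional estimate of the remaining cost-to-go are:

```latex
J = \sum_{n=0}^{N} g(x[n], u[n]),
\qquad
J_t \approx \sum_{n=t}^{t+3} g(x[n], u[n]) + \hat{J}(x[t+3])
```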
OK, so if you can plan fast enough-- and like I said, in a few years, maybe the planning algorithms are going to be-- and the computers are going to be so fast and the planning algorithms are going to be so fast that we never do value iteration anymore, but I kind of doubt it. I think that there's always going to be reasons to do more global methods. If you can plan fast enough, even a little bit into the future, then it might be good enough to just turn your planner immediately into a feedback policy. OK. We don't do that so much in my group. I think it's a good idea, and it makes sense. But I do think there's a lot of other good ideas out there on how to turn your planners into policies. OK, the next one is multi-query planning. Anybody know what I mean by that? AUDIENCE: [INAUDIBLE] PROFESSOR: No, that's not what I mean. You can imagine doing something like that and it meaning this, but-- so I spent relatively little time on the RRTs, but actually, it's one of the tools we think a lot about in my group now. It's actually-- the only reason I spend little time on it is I think that seeing the big idea in class is enough, that the ideas are so simple, that when you do your problem set and make it work, that's the best way for you to learn about it, OK. So it's such a simple idea, and it just works very well. OK, so let's say we've got these RRTs that we like, we know and love. And for the pendulum, I showed you a plot of the RRT trying to find its way to the goal. It started splintering off lots of-- eventually, it'll find some trajectory that will find its way there, but along the way, it's generated lots of trees that do random things and lots of paths that didn't turn out to be useful. And what you have is a web. In this case, it's a tree. If you run the RRT once, you have a tree of feasible trajectories that you could execute on the real robot. It happens that one of them got me from the start to the goal in my initial problem formulation. OK, but instead of throwing all that computation out and just keeping the nominal trajectory, I might as well store it. If I get a new problem, which is, let's say I wanted to start from here, like I said, and get to the goal, then really, all I need to do in my new time, in my new planning problem is connect back to my old solution. If I can find a new plan that gets me back here, then I can just ride the rest of the solution into the goal. Simultaneously, if someone were to tell me I want to get to a different goal-- let's say I want to get the system to the upright with some velocity-- all I really need to do is find a way to connect from my old plan to the new goal. So as you design these, the first start-to-goal planning problem, where you're designing trees that try to get as-- cover all over the place, could be potentially very painful. It might take a long time to find your way from start to goal. But if I want to solve a new problem which is not so different, then it could be actually very efficient to reuse your old computation and do a multi-query-- this is a multi-query planning idea, OK. And I think that idea is so good that it's actually-- if you do this again and again, you're going to slowly end up with this web of feasible trajectories that you could execute. People call it a roadmap. When you have some network, some graph of these feasible trajectories, people call it a roadmap. If all you care about is getting to the goal, then all you need to do is connect to your existing roadmap and ride it to the goal.
If your roadmap is so rich that once I connect to the roadmap, there's actually a bunch of different options, a bunch of different paths I could take through the graph to get there, well, then at least you've got a discrete planning problem, and you can do A* on it or something like this. And effectively, the trajectories I've already generated will turn this back into a discrete planning problem. That idea is so good that some people believe it's the only thing you need to do, OK. There's a camp out there that does these probabilistic roadmaps-- Jean-Claude Latombe, I think, is the head of the camp; he started these ideas. And they believe that you should address a complicated motion planning problem in two steps. First you'll construct some dense enough graph, a roadmap, I guess I should call it. And then once you've got it, you just do your query phase, OK. So let's think about that in a configuration space. I've got a bunch of obstacles, and I want to get myself from some start to some goal. All right, if I know I'm going to be doing a lot of these things, then it actually makes a lot of sense for me to go ahead and build a pretty good graph. So before I even start to solve the first problem, let's just drop in a lot of random samples throughout the space, chosen uniformly, OK, over the space. Every time I add a point in the configuration space world, I try to connect that new point to the n closest points with simple strategies. So I'll pick a point at random, I'll try to find the guys that are close to it, and I'll connect with it. Pick a new point at random. Oh, there's really only one guy close to it. I'll connect to it. Pick another point. Maybe these guys are connected. And that's it. And if I do it enough, then I come up with a pretty good roadmap-- maybe this guy was the one that connects to everybody-- so that when the query phase comes along, again, all you need to do is connect to your roadmap. I got a new query. I just connect to my roadmap. I do whatever my discrete search may be-- A* or whatever-- to find a path from the start to the goal, OK. I actually think it's a very beautiful idea to have this web of possible trajectories covering the state space. And then all it takes at execution time is connecting and then executing your trajectory. Now the probabilistic roadmaps, again, this step of connecting nearby points in under-actuated systems might be hard. Might be as hard as finding the path from the start to the goal. So maybe what you do here is actually do a dircol or something to find that path, or you do an RRT, or one of any of the other methods we've done to make these initial connections. And maybe to make them feasible to execute, you've got to do some trajectory stabilization to get on that. But if you can solve some local planning problems, then you can use these big roadmap ideas to maybe do more global behaviors, OK. So again, I think multi-query planning is a nice way to go from local policies to more globally valid policies. Yeah. AUDIENCE: I can see that working pretty well with a static obstacle field. PROFESSOR: Good. AUDIENCE: Could it handle [? moving ?] obstacles, and might-- the roadmap might change? PROFESSOR: Well, I don't really know what the proponents would say. But if you know where the obstacles are, then-- or if you even sense where the obstacles are going to be in a receding horizon quickly, then you could-- maybe this one's blocked, and I can just take another path.
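A minimal sketch of the two-phase build-then-query recipe just described, with a hypothetical straight-line collision_free local planner; for an under-actuated system, that local step would be replaced by a dircol or RRT connection as noted above. Connecting the actual start and goal to the roadmap is the same nearest-neighbor step and is omitted here.

```python
import numpy as np
import heapq

def build_roadmap(sample, collision_free, n_samples=500, k=5):
    """Offline phase: sample configurations, connect each to its k nearest neighbors."""
    nodes = [sample() for _ in range(n_samples)]
    edges = {i: [] for i in range(n_samples)}
    for i, q in enumerate(nodes):
        dists = sorted((np.linalg.norm(q - p), j) for j, p in enumerate(nodes) if j != i)
        for d, j in dists[:k]:
            if collision_free(q, nodes[j]):   # simple straight-line local connection
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def query(edges, start, goal):
    """Online phase: Dijkstra (A* with a zero heuristic) over the stored roadmap."""
    best = {start: 0.0}
    frontier = [(0.0, start, [start])]
    while frontier:
        d, i, path = heapq.heappop(frontier)
        if i == goal:
            return path
        for j, w in edges[i]:
            if d + w < best.get(j, float('inf')):
                best[j] = d + w
                heapq.heappush(frontier, (d + w, j, path + [j]))
    return None                               # goal not reachable through the roadmap
```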
But if I have a rich enough roadmap, hopefully you can get around that. And the other thing is, if I have a model of how those obstacles are changing, then naively, that just adds one dimension in time, let's say, to my plan, and I just have to do a higher dimensional plan. But I think the case you're thinking about is if these things are just moving on their own. I don't have any good model. I suddenly find that I'm obstructed. Then, again, you could dynamically replan, either by taking a different path here or making a new edge if you had to. I don't think it breaks the fundamental idea. You could almost think of this as having-- in a dynamic sense, you could almost think of this as having a bunch of repertoires, a bunch of things I know how to do. So maybe if it's a walking robot, maybe I know how to take a step here. That's one of my edges. I know how to execute that. I know how to take a big step. I know how to take a small step. It's a repertoire of local skills, local trajectories in this case. Then I just got to stitch them together in the right way. So that's a fairly robust thing, even if-- yeah. AUDIENCE: Given a rich enough roadmap, would you have problems finding-- choosing the best path among those path nodes, like discrete search? PROFESSOR: The discrete search, I think in general, you should think of as being basically unlimited. I mean, compared to all these continuous time methods, it's very, very efficient. People do it on huge collections of nodes very efficiently, especially if you can do A*, if you find a good heuristic. I mean, this is how you can go to-- to maybe overplay the title, this is how you go to MapQuest and you ask it to go from Boston to California, and it just happens. These things are very fast, even with a lot of nodes, a lot of roads. Yeah. Yeah. AUDIENCE: How does it compare to [INAUDIBLE] discretizing in state space? PROFESSOR: Good. AUDIENCE: [INAUDIBLE] PROFESSOR: So this-- very good question. Let me answer that first, and then I'll-- yeah. So I almost talked about this last time, right after I said, what happens when you turn this state space into buckets-- how it's a reasonable thing to try but not very elegant. I think these guys would have put this topic immediately after that, saying, instead of discretizing in some unnatural grid manner, we're discretizing here by sampling randomly. That has the benefit that you don't have to sample uniformly. Maybe you care more about things in this area. You can bias your sampling distribution, same way you can add more grid cells or something like that. But the real benefit is that it's a more continuous process. It's not stuck in some very discrete bins. OK. Sorry, second part of your question. AUDIENCE: Well, now it would follow-- if we have a discrete graph, then we can run any of those algorithms, like value iteration, on top of it, and we can find the path? PROFESSOR: Yes, exactly. So why did I say A*? I should have said value iteration on the-- right? Yeah. Right. I mean, A* can be faster than value iteration. If you have a good heuristic, you don't have to-- yeah. AUDIENCE: So when you're doing the sampling here-- PROFESSOR: Good. Yeah. AUDIENCE: --you were-- RRTs do this very uniform sampling. And you say you can bias the sample. PROFESSOR: Yes. AUDIENCE: But if you're doing this before you even do your first path, why don't you actually choose optimal [INAUDIBLE]?
Can't you do some kind of [INAUDIBLE] diagram with this and just say, I'm going to test using the best I can find, given that I have a model of the world, and then [? get them ?] [? with sampling, ?] [? your ?] [? subcontinuous ?] time, and then you find-- why is the sampling still part of this-- PROFESSOR: Excellent point. I think if the problem permits that, then you should absolutely do that. AUDIENCE: OK. So then-- PROFESSOR: I think even for the pendulum, though, I wouldn't know how to tell you what the optimal sampling is, because the way these things connect is non-trivial. They're subject to the dynamic constraints. AUDIENCE: Right. PROFESSOR: Right. So if you could formulate that and solve it for this-- and maybe you can. Maybe people have. I don't know that. I haven't seen that. But then that sounds like a very reasonable thing to do. AUDIENCE: But it doesn't have to be quick, right? PROFESSOR: It doesn't have to be quick-- AUDIENCE: [INAUDIBLE] first time-- PROFESSOR: No. AUDIENCE: --can be as slow as possible-- PROFESSOR: It could-- AUDIENCE: --because you want it-- well-- PROFESSOR: Well, right. AUDIENCE: --times the universe can explode with that, but-- PROFESSOR: Right. AUDIENCE: OK. PROFESSOR: Right. Take a chance to-- that is, explore your system. Build things that are good in random places, and then worry about connecting them later. Mm-hmm. Really good. OK. So again, making these connections in under-actuated systems is more subtle. It might be that there's a lot of one-way connections, but we can still do-- we know how to do graph search, OK. But these are generally good tools, and they've been used a lot in robotics lately. The other ones, the Rapidly-exploring Random Trees, go by RRTs. These go by PRMs, Probabilistic Roadmaps. A lot of people seem to think that they're competitors, intellectual competitors with RRTs, and I don't think that they really are. I think the RRT guys would just say, well, you just use an RRT to make the connections, and the roadmap is still a very good idea. And I think RRTs effectively make roadmaps. So I think they're very harmonious ideas. Excellent. So that's at least two ideas to take these local trajectory optimizers and turn them into more of a feedback policy. But there's a big one, a big one that I like a lot, that I haven't said, OK. So, big one. I'm going to go back to the left. OK, let's say it's idea number three-- these things aren't perfectly orthogonal, but this was the breakdown I was most happy with, OK. Feedback motion planning. OK. So, so far, we've talked about building some trajectory that we thought was good, and then afterwards, going through and stabilizing it with feedback. That's not always the best recipe, because you could imagine, for instance, designing a trajectory that locally looked very good but was completely unstabilizable. I go to then-- I'm done with this. I say, perfect, my first stage of my control design picked this trajectory. Now I'm going to run LTV LQR on it to stabilize it. And then I find out, whoops, right there it's not controllable or something, and my cost to go function blows up. Maybe my open loop trajectory optimizer told me to walk along the side of a cliff and wasn't really paying attention to the fact that stabilizing that's hard. Or maybe it was saturating my actuators the entire time-- that's a very real possibility-- and left me no margin of control to go back and stabilize it. OK. AUDIENCE: [INAUDIBLE] [? putting ?]
those edges down, that it is actually feasible to go from A to B, so-- PROFESSOR: Yeah. It's definitely feasible to go from A to B, but it doesn't say that-- nothing thought about whether, if I get disturbed epsilon from this, I can recover. AUDIENCE: So you're worried about the noise. PROFESSOR: I'm worried about noise, right. So it's feasible for me to walk along the side of a cliff, but I wouldn't want to be bumped. If I know I'm going to be bumped, then I pick a different path, OK. So you can imagine-- for maybe each of those examples, you could imagine ways to try to make the planning process more-- let's say, OK, well, don't use your full torque limits. Use 90% of your torque limits. That's a good idea. That'll help. But there's a more general philosophy out there, which is that you shouldn't just do trajectory planning and then stabilize it. You should really be planning with feedback, if that makes any sense, OK. Well, it'll make sense in a minute. There's a lot of ways to present this. I thought the best way would be to start with a case study, a problem where people really use this, OK. There's been a lot of people that have been interested in making robots juggle. One of them's been sitting in the room here. The ones who did a lot of the work I'm talking about here are Dan Koditschek's camp. OK. So it's actually very, very harmonious with John's lecture on running, and that's why Koditschek's done both, for instance. Now let's think about the problem of making a robot juggle, OK. So the first thing you need to think about-- and let's make a one-dimensional juggler, OK. So we've got a paddle here, constrained to live in this plane, and we've got a ball, also constrained to live in that plane. Yeah? And your goal is to-- if this thing is on a rail, it can only move vertically, your goal is just to move that paddle to, say, stabilize a bouncing height. Let's say you've got a desired height. OK. This is the 1-D juggler. I think they call it the line juggler-- by Martin Buehler and Dan Koditschek. Martin went on to build BigDog at Boston Dynamics, and now he's at iRobot. So these are famous guys, OK. So the dynamics are pretty simple to write down. You have a mass of the ball. You have some dynamics of your paddle. You assume that the mass of the paddle is much, much bigger than the ball. That simplifies some things. And so now the dynamics are just ballistic flight of the ball. Your control is to design some trajectory of the paddle, and then you have impact dynamics, which these guys model as an instantaneous elastic collision with a coefficient of restitution. That's a reasonable collision model even though energy isn't exactly conserved. And again, they assume that when the collision happens, the ball changes direction and keeps 90% of its energy, and the paddle is unaffected. Relative to the mass of the paddle, the ball is negligible. AUDIENCE: [INAUDIBLE] juggling the balls are almost completely [INAUDIBLE]. PROFESSOR: That's true. These are, I guess, not-- Philipp's are completely, almost as hard as possible. In his project, he said he spent lots of time trying to find the perfect ball, which was a perfectly machined, very hard precision ball, yeah. Compliant juggling, maybe that's our next challenge for robotics, squishy balls. OK, good. So it turns out they do a really nice control design. It turns out to be very natural to-- the controller that they come up with for the paddle uses a mirror law.
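In symbols-- my reconstruction from that description, not the notation of the Buehler-Koditschek papers-- the model is ballistic flight punctuated by instantaneous impacts against the much heavier paddle:

```latex
\ddot{z}_b = -g \quad \text{(flight)}, \qquad
\dot{z}_b^{+} = \dot{z}_p - e\,\big(\dot{z}_b^{-} - \dot{z}_p\big)
\quad \text{(impact, at } z_b = z_p\text{)},
```

with the paddle velocity unchanged by the impact. Off a stationary paddle the ball keeps a fraction e^2 of its energy, so "keeps 90% of its energy" corresponds to a coefficient of restitution e = sqrt(0.9), roughly 0.95.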
Turns out if you can sense the state of the ball and you just do a distorted mirror image of that ball, then everything gets really easy. Your impacts always happen at 0. It's at the same place. And just by changing the velocity here, you can roughly affect the impact height. So what they do is they can nominally stabilize some limit cycle with just mirroring the ball, and they add an extra term to stabilize the energy to get it to whatever height they want. So they do a distorted mirror image of the ball trajectory, where the distortion is scaled by some energy-correcting term. It's a beautiful thing. Very, very simple controller. Has a nice, very stable solution. In fact, I think they prove it's globally stable for-- you can tell me if-- is it globally stable in the 1-D case? I think it probably is. OK. How do they prove it's globally stable? They do an apex-to-apex return map. And the same way we did for the hopping models and all the other models, these guys were pushing the unimodal maps and getting some global stability results out of that. That's why I think that they had a global result, OK. So it's actually exactly like a hopping robot. Just the ball's moving instead of the robot. OK, so they got a pretty good controller for 1-D juggling, and then they started doing 2-D juggling. I think I have the vi-- I don't have the video for the 1-D juggling somehow, but I do have the video for the 2-D juggling. Yeah, so here's your 2-D case showing off doing two balls at once, since all that matters is the state of the robot when the impact occurs. So you might as well do something else during the other time, like stabilize another ball. And you can see that actually, it turns out to be pretty easy to get stability in this plane, just because if you're too far to this side, you tend to get hit earlier, which causes you to go out more, and vice versa. So that stability almost comes for free. AUDIENCE: [INAUDIBLE] PROFESSOR: This is, I think, vision off to the side. I know it's vision off to the side, where they're tracking the bright yellow balls. If it had been dark gray balls, it might have been something else. But the bright yellow tennis balls suggest vision. Yeah. And they went on to do the 3-D juggling. This one was, I remember, in the basement of the Michigan AI lab when I was there, behind that curtain. It's always using the vision sensing for the balls. You could do pretty good things. And then they got so good that they started doing other maneuvers like catching and palming and things like this, OK. It actually turned out to be the same, pretty much, control derivation. They just set the desired energy to 0, and suddenly they have a catching controller. And then this is palming when they're doing their thing, OK. And then they can get it back up to bouncing with the same sort of energy shaping. And I should show you, you don't actually need all that feedback to do it. You don't need to sense the ball. Here he is. This is Philipp. We'll show the one where he's pushing it so you can tell who it is here. Blind juggler. So this is open loop stable juggling. You can see-- actually, do you see the ball up there? Yeah, it's going to a stable height, and he's moving it around. He's got just an itty little bit of concavity in that plate, which gives it all the passive stability properties. And you've got versions where it's doing things off to the side in 3-D or in 2-D and-- yeah, so that's open loop stable.
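Here's a toy bounce-to-bounce simulation of that 1-D setup in Python-- my own construction for illustration, with a guessed proportional energy-correction term standing in for the published mirror law, and gains picked by hand:

```python
# Toy 1-D juggler bounce map, loosely after the mirror-law idea above.
import numpy as np

g, e = 9.81, 0.95            # gravity; restitution (e**2 ~ 0.9 of energy kept)
E_des = g * 2.0              # desired energy per unit mass <-> 2 m apex
k = 0.05                     # gain on the energy error (chosen for stability)

def bounce(v_up):
    """Launch at speed v_up from the impact height, fall back, get hit."""
    E = 0.5 * v_up**2                    # ball energy at the impact height
    v_down = -v_up                       # ballistic symmetry: returns at -v_up
    v_paddle = k * (E_des - E)           # hit harder when below desired energy
    return v_paddle - e * (v_down - v_paddle)   # elastic impact, massive paddle

v = 1.0
for i in range(15):
    v = bounce(v)
    print(f"bounce {i:2d}: launch {v:5.2f} m/s, apex {v**2 / (2 * g):4.2f} m")
# This converges to a steady bouncing height a bit under the 2 m target; the
# proportional law leaves an offset since the paddle must replace the energy
# lost to e < 1 on every hit. The real mirror law also handles where and when
# the impact happens, which this toy fixes by assumption.
```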
So juggling is actually a really cool problem for robotics. It's led to a lot of nice dynamic insights and party tricks, I guess. Yeah. OK, so these guys said, we got pretty good at juggling. We can do a mirror law to stabilize whatever juggling height we want. We've got a catching controller also, which roughly sets the energy to 0 and just sort of does this step. And they also had a palming controller, which, when the dynamics were actually on the paddle, did a little bit of different things to be able to move it around without it falling off the paddle. What they were left with was this challenge. We've got these controllers, which are good locally. What do we do to make them do more interesting things? So if they want to transition, for instance, between the bouncing and the palming, they use their catching controller. Maybe they want to avoid moving obstacles. They want to do multiple balls. They want to do all these things. They introduced a really nice, beautiful picture of feedback motion planning using funnels. OK, so every one of those controllers had the property that it would take initial conditions in state space and move them to some more desirable state. So for instance, with the ball hopping, the ball could be anywhere here. By applying this controller for some finite amount of time, when it's done, it's going to be closer to its apex height. In many cases-- not in the experimental juggling case, though even in the model one, I guess they do-- in many cases, you actually have Lyapunov functions which describe the way that convergence happens, OK, but it's not strictly necessary. So the idea is, let's think about this thing as a funnel, OK. It takes lots of states in. So this is initial states. And after applying it for some finite amount of time, you get some new final states. And if my controller was any good, then hopefully the final states are a smaller region than the initial states. So in some sense, this is a geometric cartoon for a Lyapunov function. Lyapunov functions take my state in, descend down, and put me in some other state, OK. Experimentally, you can also find these things, even if you can't do the Lyapunov function. Experimentally, this input is basically the basin of attraction of my controller. So if it was really a basin of attraction and it stabilized some fixed point, then if I ran it long enough that it was asymptotically stable, I'd call it a basin of attraction. Here I'm just going to run it for some finite time. So you have to be a little careful calling it a basin, but I think it's still intuitive that this is the-- there's lots of names for this. Another name for it is pre-image in the motion planning world. A lot of people call this the pre-image of our action. This is, I guess, the post-image, yeah. These are the set of states where my controller is applicable. I'm going to have a funnel for the mirror law. I'm going to have a funnel for my catching controller. That takes a different set of initial conditions and gets me where I want to be. And I have a funnel that can allow my palming to do different things, OK. And I might even have lots of funnels. So I might have a different funnel given the mirror law where my desired energy is 4, versus the mirror law where my desired energy is 6. Maybe those should look like different funnels. So the picture that these guys gave us-- this is Burridge, Rizzi, and Koditschek, the guys.
Al Rizzi's at Boston Dynamics also-- is that you can do feedback motion planning as a sequential composition of funnels, yeah? So if I want to get from one state to another state, and I don't have a single controller that will get me there, all I need to do is reason about a sequence of these funnels for which the first funnel takes me from my initial conditions into a domain where my second funnel is applicable. And then I can use my second funnel to get me somewhere else, and then my third funnel maybe will get me to my target. So if my goal state is somewhere abstractly here in state space that's not accessible from any one-- I can't get from my initial condition to my goal with any one of my controllers-- I can sequence these controllers, just making sure that the output of one funnel is covered, completely covered, by the input of the next funnel. Then that's enough to use these funnels as an abstraction to take away the continuous problem and give me a discrete planning problem, which just says I need to go through this funnel, through this funnel, through this funnel, and I can get to the goal, OK. So in this case, they did tasks like-- there was a beam here, and they were bouncing on one side. They wanted to be bouncing on the other side. So I think they had one controller that went over the top. Then the beam got taller. They had another one where it caught it, brought it under, started paddling again. And these things just fall naturally out. This could be the catching. Or this could be the-- yeah, catching. This could be the palm, and this could be my mirror again. And I'm right back to where I want to be, OK. Very, very beautiful idea. As far as I could tell, everybody who read that paper was enamored by it, and nobody's really used it that much, because there was one critical problem. Figuring out what those funnels look like is really hard. So really, the only issue, I think, is that describing the basins of attraction, let's say-- so if you read the Burridge, Rizzi and Koditschek paper, you'll see a ridiculous number of scatter plots where they put the ball in this location, they ran their controller for a while, and they determined experimentally whether it was in the basin of attraction of this controller. Yeah? Ouch, right? That's not what I want to do with my time. So if you're willing to do that, then it's a workable method. But I think today we've got a better way to do it. And my group, we've been working on an implementation of this feedback motion planning idea, which is very much in line with the things we've been talking about so far, which we've been calling the LQR trees, OK. And the big idea that happened is that these guys in LIDS-- Pablo Parrilo-- anybody know Pablo? And Alexandre Megretski is the one who taught me about this-- have figured out new effective ways to computationally estimate basins of attraction for some classes of controllers, OK. OK, so this is a new thing. People have been doing algorithms to design Lyapunov functions for at least a decade. But I think they got really practical a couple of years ago in Pablo's thesis, actually. Think it's two Ls, yeah. What Pablo did in his thesis is he promoted this sums of squares programming. In fact, you can even download SOSTOOLS as a MATLAB package from his website to do this. Sums of squares programs are efficient ways to check whether a polynomial function is negative definite, OK, potentially with free parameters, and so on.
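To make that concrete, in symbols of my own choosing rather than the board notation: certifying that a polynomial is nonnegative can be done by writing it as a sum of squares, for example

```latex
p(x, y) = x^2 - 2xy + 2y^2 = (x - y)^2 + y^2 \;\ge\; 0,
```

and searching for such a decomposition, p(x) = m(x)^T Q m(x) with Q positive semidefinite and m(x) a vector of monomials, is a semidefinite program. A funnel is then a sublevel set of a Lyapunov-like function V,

```latex
\mathcal{F}_\rho = \{\, x : V(x) \le \rho \,\}, \qquad
\dot{V}(x) = \frac{\partial V}{\partial x}\, f\big(x, \pi(x)\big) < 0
\quad \forall\, x \in \mathcal{F}_\rho \setminus \{x^*\},
```

and the basin estimate coming up in a minute is just the largest rho for which that negativity condition can be certified by SOS (for polynomial f and V).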
These can be made-- and these can be vector variables-- can be made uniformly negative definite or semidefinite, OK, or trivially positive semidefinite. You can see how it might be relevant, OK. So this is just a mathematical idea to turn the problem of checking the positive definiteness of a polynomial into a linear matrix inequality and then a convex optimization problem. So I'm not going to go into all the details, but know that there's these tools out there that use convex optimization to check that property of a polynomial, OK. And you can read more. I've got links if you want to read more about that. What that allows us to do now-- at least in the case of the LQR design we've worked it out-- it's possible to now check whether a function, a polynomial function, is a Lyapunov function for the system. Lyapunov functions have to have negative derivatives-- their value going down over time, yeah. In order for a function to be a Lyapunov function, its value had better be going down at all times. If your candidate Lyapunov function is a polynomial function, even of a vector variable, then you can use this to check whether it's a valid Lyapunov function for your system, OK. So we can now-- AUDIENCE: [SNEEZE] PROFESSOR: Bless you. So I threw this one in without saying it before. The only caveat is you have to take your nonlinear system and make a polynomial approximation of it, a Taylor expansion of it. It doesn't have to be first order-- that's the linear system. But it has to be polynomial, OK. So suddenly, it turns out that for the LQR systems-- remember our value function. Let's just think of the LTI LQR. The value function turns out to be this quadratic form. It's the optimal cost to go. That's a Lyapunov function. J of x for the linear system is a Lyapunov function. As I take control actions, my cost to go is only going to go down. It had better, otherwise it's not the optimal cost to go. So OK. If I have a nonlinear system, where I've linearized it and done LQR control, then I expect this function to be a Lyapunov function over some domain where the linearization was good, and eventually, to no longer have this nice negative definiteness property. Does that make sense? The optimal cost to go from LQR isn't a Lyapunov function for the entire state space. It's always going to descend for any initial conditions for the linear system. But when I've linearized the system, I expect J star to be a valid Lyapunov function near the linearization. You guys should stop and ask questions now if you have any questions. Does that make sense? I never actually said before that you can think of these cost-to-gos as Lyapunov functions, but that's a nice connection between the optimal control and the stability theory, OK. But the cost to go actually is a Lyapunov function. Remember, when we're taking a linearization doing LQR, we already know that the basin of attraction is going to be something finite. We talked about that with the Acrobot. I do a linearization around the top. I know if I'm near that point, it's got some small finite basin of attraction. If I'm inside that region, it'll go to the goal. If I'm outside that, then the linear controller design isn't valid for the nonlinear system. Eventually, you're going to get far enough away that it's not going to work. I think I even showed a simulation of it doing something crazy. So what we did was we designed a controller that got up there, and then we turned on the linear controller.
All was good. So a different way to say that exact same thing is that at some point, when I get too far from the fixed point, if I evaluate this function and look at the time derivative of this function, my cost is not going to go down as time goes up. And in the case of the Acrobot, if I'm here and I start going this way, then I'm getting further from my goal. My cost is going up. At some point, for the nonlinear system, this function is not going to be a Lyapunov function for that system, OK. So what we've got, thanks to Pablo and Sasha Megretski, is a way to figure out exactly a-- well, not exactly-- to estimate the place where that transition happens using these sums of squares programs. My goal here is to tell you about the existence of these things, and I'm happy to push you more in that direction if you're interested, for your project or for whatever. But we're going to use this to do the feedback motion planning, OK. So it turns out J is a scalar. My cost to go is a scalar. It turns out I can very succinctly describe the place where this-- a boundary of this function-- let me just write it, and then I'll say it carefully. I can describe a region of my system just by looking at the height of my cost to go. This is a quadratic function. It's going to look like ellipsoids going out. If you were to draw this landscape, it's going to look like an ellipse, a paraboloid in high dimensions, yeah. At some point, as I move farther from my fixed point, the cost is going to get higher and higher, OK. And at some point, it crosses some scalar value rho. So I want to call my basin of attraction for this system the place where my cost to go reaches rho. And we've got a program, thanks to Sasha and Pablo, which will try to estimate this scalar value, rho, as a scalar representative of the basin of attraction of my system. AUDIENCE: Why would you use a particular cost to go rather than looking at what the variation is in the linearization? PROFESSOR: That is-- so we're going to determine this by looking at the variation based on the linearization. So I could do this in a lot of different ways. I could look at boxes around my fixed point and try to design some geometry. The real basin of attraction is going to be some complicated thing, which depends on my LQR controller design and the way the non-linearity affects it. AUDIENCE: Right, but wouldn't-- so I guess what would seem more intuitive to me would be to say, look at the next highest order term in the expansion and then see how that's varying and use that to-- PROFESSOR: That's exactly how we're going to verify it, OK. So there's two questions. There's a question of what shapes are we going to try to verify, OK. The choice here is to verify contours of the cost to go function. I'm going to try to find the biggest contour of the cost to go function for the linear system. You're asking if I could choose a different shape based on the contours. What we've elected to do-- and I think it's a tighter version, maybe, than what you're saying, but I could be wrong. There could be better ways-- is to find the biggest contour such that the next higher order terms of the linearization don't break the negative definiteness. AUDIENCE: OK, so this is just for purposes of choosing a shape. PROFESSOR: This is choosing my shape, OK. So I'm going to make this all concrete right now by trying to show you an example here. OK, here's a simple pendulum, which we know and love, OK.
This is the phase portrait of the simple pendulum. OK, and the green is at 0, 0, which, in this case, is my downward fixed point. The top is my unstable fixed point. My goal is to use these local trajectory ideas in order to cover-- to make all states go to them, OK. Now all I've told you so far is I know how to take an LQR problem and try to estimate the basin of attraction, OK. So step one is take a linearization around my goal state, design an LQR controller, and estimate its basin of attraction, OK. And that looks like this, OK. We've seen cost to go functions like this-- ellipsoids around the fixed point. I do a sums of squares optimization to verify that this function is negative definite, which involves a higher order polynomial expansion of the dynamics in this form. And I try to find the biggest contour for which that system is still negative definite. That's all the detail. All you really need to know is that I can estimate now, with convex optimization, the basin of attraction of that system. This is Koditschek's funnel at the top. That blue region is the beginning of the funnel. In this case, I'm going to run it infinitely long, so it's going to eventually get to the red point. That's the output. OK, now how do we design funnels that try to fill this space? OK, my proposition is that we should do roughly what the RRTs are doing and start growing out to try to cover the space in lots of different directions, OK. The only difference is, every time we grow out in random directions, I'm going to stabilize that trajectory with an LTV feedback and compute the basin of attraction on it, OK. So here we go. Pick a point at random, OK. And actually, I don't always play the RRT trick. So I could do lots of RRTs to try to get back to that point, but I'm actually going to just use this as my goal and do direct collocation to get me there, to design a trajectory to get me there. If that works, that's perfectly fine. So that's a trajectory. I didn't draw it nicely. It actually starts here, goes this way, wraps around, and comes to that red point, OK. Direct collocation quickly designs that trajectory. Now let's back up and start computing the cost to go function, the Riccati equation, backwards to stabilize that trajectory. And as we go, we'll compute the basin of attraction of that controller, which has exactly-- I drew it in finite segments, but I hope you can see that's exactly the funnels, yeah? If I start the system inside any of that blue region, and I execute the trajectory-- the LQR, the trajectory stabilizer along that trajectory-- it's going to take me around and get me to my goal and stay there, OK. So this is feedback motion planning happening. Now the cool thing is, I told you about the multi-query idea. I told you about all these ideas, talked about making very dense trees that handle all these situations. If you know the basins of attraction of your existing controller, you don't have to build a very dense tree. I know, if I were to pick another random point that was already inside my blue region, I'm not going to get a lot of value out of adding nodes inside that blue region. So let's pick another random point, and if it's inside the blue region, we'll throw it away. If it's outside, I'll keep it, and I'll try to grow to it, OK. So I get another random point, which is here. Going to pick the closest point in my current tree, which was just, in this case, a trajectory, connect that back.
Now in this one, my dynamic distance metric, which was that LQR distance metric, connected and said this was the closest point. Looks a little surprising, but maybe the torque limits said that one couldn't get there, or maybe my distance metric just wasn't perfect. But that's reasonable. It tries to go from here, add a little bit more torque, and drive out. And I stabilize that with the funnel, yeah? OK, now I have two trajectories, and I've got a pretty good coverage of the space already. You can imagine I design a handful more trajectories, picking states as I go, and I can really quickly and efficiently fill that state space with funnels which take me to the goal. Does that make sense? AUDIENCE: So when you do your multi-query, how do you choose which funnel you're in? PROFESSOR: Awesome. Well, first, even-- so if I just want to execute this and get to that goal, it might be that I don't really have to do the-- so you could think of this as being a multi-query thing every time. If I pick a point that isn't in any basin of attraction, then I'll try to connect and grow a tree there. If, however, it's execution time-- I say the robot's got to run from here, I pick a point, and it's already in a basin of attraction-- then I just execute that trajectory. If, I think what you're alluding to, it's in the basin of attraction of multiple points, then I pick the one with the lowest cost to go, because those are all estimates of the cost to go that are centered around that trajectory, OK. So for the simple pendulum, with damping and torque limits and everything set the way it was, this little randomized algorithm can fill the space with basins of attraction with just a handful of trajectories. AUDIENCE: [INAUDIBLE] PROFESSOR: I've never said it with so much-- [LAUGHTER] --such dramatic force. OK? [CHUCKLES] It's the highlight of the class right here. OK, so this is exactly the feedback motion planning idea that I'm most excited about right now. Because suddenly-- for the LQR controller, the thing we've worked out is, if the cost to go function is this, or in the time varying case, this, then I can come up with a very nice representation of the basin of attraction based on just a scalar value. And I can just start designing funnels through my state space. And the vision is, if you can think about the funnels as you build them, then you actually don't have to build too many trajectories to start filling the state space. AUDIENCE: Why do you call it feedback motion planning? PROFESSOR: Yeah. It's because I'm thinking about the feedback control, which is the funnel, as I'm doing the planning. Yeah. Do you agree why Koditschek's version is feedback motion planning, or do you not like that being feedback motion planning? AUDIENCE: It makes sense. I guess I'm used to different funnels. PROFESSOR: That's true. You are, yeah. AUDIENCE: [CHUCKLES] PROFESSOR: OK? AUDIENCE: [? Think ?] [? so. ?] PROFESSOR: So these are very much-- in Koditschek's case, there's no debate that each funnel is a feedback controller. I think of this the same way. You could argue with it, because it's centered around trajectory design, which his is not. So this one has a little bit more of a feel of conventional motion planning. But by virtue of thinking about the feedback as I design the trajectories, it means I have to build fewer trajectories, yeah.
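Putting the loop he just walked through into skeleton form-- my own sketch, with made-up helper names, where the hard steps are stubbed out rather than implemented (dircol stands in for trajectory optimization, verify_rho for the time-varying LQR solve plus the sums-of-squares basin estimate):

```python
# Skeleton of the LQR-trees loop sketched above, with the hard parts stubbed.
import numpy as np

class Node:
    def __init__(self, x, rho=0.05):
        self.x, self.rho = np.asarray(x, float), rho
    def cost_to_go(self, x):
        # stand-in for the LQR cost to go (x - x0)^T S (x - x0), with S = I
        return float(np.sum((np.asarray(x, float) - self.x) ** 2))

def dircol(x_start, node):          # stub: pretend we planned a trajectory
    return (x_start, node.x)

def verify_rho(trajectory):         # stub: pretend SOS certified this funnel
    return 0.05

def lqr_tree(goal, sample, iters=500):
    tree = [Node(goal)]                             # LQR + funnel at the goal
    for _ in range(iters):
        x = sample()
        if any(n.cost_to_go(x) <= n.rho for n in tree):
            continue                                # already inside a funnel
        nearest = min(tree, key=lambda n: n.cost_to_go(x))
        traj = dircol(x, nearest)                   # plan back to the tree...
        tree.append(Node(x, verify_rho(traj)))      # ...stabilize and verify
    return tree

rng = np.random.default_rng(1)
tree = lqr_tree([0.0, 0.0], lambda: rng.uniform(-1, 1, size=2))
print(len(tree), "funnels")
```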
So it'd be nice to actually have the conversation about how these are related to flow tubes. Mm-hmm. It's pretty similar in some ways. But these are very effective to compute the basins of attraction, so I think it's relevant. Yeah. AUDIENCE: Could you factor in actuator limits into the Lyapunov function? PROFESSOR: Yes. So OK, actuator limits in the Lyapunov function are harder. So what you do is-- everything is based on a Taylor expansion of the dynamics around the nominal. So a hard limit, if I linearize and I don't see that limit, then I'm not going to know about it. So there's a couple things you could try. And actually, I recommended to Mike earlier today that he should do this for his final project-- the case with the actuator limits. So you could imagine making a soft limit, some sigmoidal limit, and having the gradients of that visible from your linearization point. Or you could imagine the LQR design that actually does both the quadratic cost and the bang-bang simultaneously. Haven't done it yet, but I think that's consistent. OK, so I told you a lot about local trajectory optimizers. And today we said there were at least three good ways, I think, to make those trajectory optimizers into a more feedback plan. So the first idea was real-time planning. And if it's fast enough, well, then we're all out of jobs, because we could just do that. The second idea was building these trees and doing multi-query, keeping your tree around and just finding your way to the closest point of the tree every time you execute. And that has the nice feature that every time I execute, my tree gets a little bit bigger, and I know a little bit more about my robot and myself, yeah. And the last one was this feedback motion planning, which-- there are only a handful of ideas out there, I think, about feedback motion planning that people use. Koditschek's funnels are definitely the most prominent. And actually, I think that the flow tubes should probably be more on my list, but I've never made a strong enough connection. We should make that a goal for the rest of the class, yeah. AUDIENCE: [INAUDIBLE] flow tube or-- PROFESSOR: So there's definitely differences, but we should really figure it out. So Brian Williams' group does planning with flow tubes that are, in spirit, similar to these funnels. Yeah. And so we should talk about whether you can design the flow tubes for the class of systems that I care about in the class and stuff. Mm-hmm. Excellent. OK, so you saw the email about the projects. If you have any questions about your projects, we could talk for a minute right now, or we could schedule a meeting before Thursday. There's a few ideas in the email we sent, in the PDF that we sent out. If you're looking for more ideas, I've got a list of other ideas that I'm happy to share. It's going to work best if you find a problem that you're passionate about, something that you got excited about in class or from your work, and you apply some idea from class. But the goal for Thursday is to say enough about it that I can give you some real feedback on your half-page write-up and try to help you with the scope and topic to make it a good project. OK? Let me know if there's any questions. See you Thursday. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_8_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: So welcome back. I thought we'd start today sort of with a little bit of reflection, since we have covered a lot of material. Even though we've kept it to simple systems, we've actually covered a lot of material, and we're about to blast off into some new material. So I thought, let's make sure everybody knows what we've done, roughly how it fits together, and where we're going, OK? So I've been trying to carry through the course two main threads. One of them is sort of the systems thread. We start with pendula. We're getting to acrobots, cart-poles. We're going to get more and more interesting systems. In parallel, I'm trying to design this optimal control thread which tells you the way I think you should be solving these. Along the way, I'm throwing in lots of puzzle pieces about partial feedback linearization, energy shaping, things like this. That's because this is a hard-thought decision. I mean, this is a research class, really. So I'm doing my very best to teach all this material to you as if there's a textbook on this. But in fact, there's no textbook. So what I've decided to do roughly is that I'm trying to give you a very clean line of thinking through the optimal control. But I do want to keep throwing in what other people do-- the domain-specific knowledge about acrobots and cart-poles and walking as we get to it. Because I think these puzzle pieces-- ultimately, there are things we can't do with optimal control yet. My guess is that ideas from partial feedback linearization are going to help. I think ideas from energy shaping-- these kinds of ideas are going to work together. So even though those aren't going to connect up perfectly in this class, what I'm hoping to do is give you all the pieces of a puzzle that nobody's actually solved yet, give you all the information I can give you about this class of problems so you can go off and write, well, final projects that are cited a hundred thousand times. OK. So let me just make sure that's happening. So that's a tall order, to sort of carry those threads. So let's make sure that's happening. Maybe I'll even use a whole board for it. So I think I've made no secret of the fact that I think optimal control is a sort of defining way to think about control for even these very complicated systems. All right. So we've got one thread that we'll continue-- we'll get deeper and deeper into optimal control methods. In particular, we've already talked about sort of two fundamentally different approaches to optimal control. We've talked about optimal control based on the Hamilton-Jacobi-Bellman equations, where we talked about the value function being described by a nonlinear partial differential equation. And then we also talked about Pontryagin's minimum principle, which you're probably all working on right now-- Pontryagin's minimum principle. These were two sort of analytical optimal control approaches, right? As such, unfortunately, we only really showed you solutions for linear systems, where we can actually solve those problems analytically. What were the big differences?
Maybe I-- just to motivate this and make sure everybody pays attention-- I should also say a few things about what I hope you get out of the class, and for instance, what will be on the midterm when it comes around. So I know that we're throwing lots of ideas out. If there's one thing that happens, one thing that you should be able to do off the top of your head is think about and talk about how these different tools that we're putting out relate to different problems. And for instance, if I were to give you a problem, you could make some reasonable guess at which methods that we've talked about might be most suitable for that problem. The details of how you do a partial feedback linearization, I wouldn't expect you to absorb every piece of that. I would guide you through that on a problem, but with the expectation that you've worked through it once on a problem set and sort of have some ability to do that. But if there's one thing I want you to come away from this class with, I want you to understand the suite of tools we're talking about and have a sense for what you'd apply to what problem. So the Hamilton-Jacobi-Bellman equation-- remember, that was this partial differential equation, with a hard nonlinearity, which describes the optimal solution. I should put stars on here to be careful here. The optimal cost-to-go is described by that equation. So one approach to these things is to directly try to compute solutions to this partial differential equation. We did it analytically for the quadratic regulator problems, the linear quadratic regulators. Right. I didn't actually do it for the minimum time problem, because for the minimum time problem, we know that the optimal solution in J is actually not-- its gradients are not well-defined across the entire-- for all x. That's the only reason I didn't do it for that. But for sort of smooth problems like the linear quadratic regulators, we could use these methods. Pontryagin is a little bit more general. This is the one with the adjoint equation. So you defined the Hamiltonian as some cost function plus a Lagrange variable, a Lagrange multiplier, times your dynamics. Pontryagin was more powerful. In some sense, we solved harder problems. We solved, for instance, the minimum time problem, for the double integrator, at least. The problem with it is just that it was too local. The Pontryagin ideas are based on a gradient statement of local optimality. So we said along some trajectory, I can verify that if I change my control action a little bit along this trajectory, my cost is only going to get worse. So that's a necessary condition for optimality. That's when we can do a lot of things with-- but it's only going to give me a local optimality statement. Are people sort of OK with those two methods we've been doing? OK. From the Hamilton-Jacobi way of thinking, we ended up with our first algorithm. Right. And we said that you could discretize the dynamics, and in the discrete dynamics solve whatever nonlinear problem you wanted, pretty much. Right. And the reason we could do that is because these Bellman equations have this nice recursive form. It was that J at some state is just the min over u-- maybe I should be careful when I write it in discrete form, because that's what we talked about: the min over discrete actions a of the one-step cost of taking action a from state s, plus J of s prime, where s prime is what I get for doing this. OK.
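For reference, the three objects on the board, in symbols-- my transcription, and exact forms vary with finite versus infinite horizon and with discounting:

```latex
0 = \min_u \Big[\, g(x,u) + \frac{\partial J^*}{\partial x}\, f(x,u) \Big]
\quad \text{(HJB, steady-state form)}

H(x,u,\lambda) = g(x,u) + \lambda^{\top} f(x,u), \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^* = \arg\min_u H
\quad \text{(Pontryagin)}

J^*(s) = \min_a \big[\, g(s,a) + J^*(s') \,\big], \qquad s' = f(s,a)
\quad \text{(discrete Bellman recursion)}
```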
So if you sort of look at where there's white space left on this board, you might be able to guess what we're going to do next, our next big piece of the puzzle. We're going to derive our first set of algorithms now from the sort of Pontryagin methods. And we're going to talk about some policy search methods, the most important class being trajectory optimization. OK. Along the way, we've been developing these systems, right? In the analytical optimal control, we mostly just thought about double integrators. But those things could have applied to any LTI system. So that's supposed to be in line with that. I'll do my best to not move the board too many times in this. We moved on to sort of pendulums, pendula. And we did the essential things. This is where I started to tell you about dynamics. We talked about nonlinear dynamics, basins of attraction, all these things. And then we did acrobots and cart-poles, where we talked about a lot of interesting ideas. We talked about controllability, we talked about partial feedback linearization, and we talked about energy shaping, even task-based. Lots of ideas in there. Those ideas are my attempt to extract the most general ones. But the main topics of the sort of acrobot/cart-pole world-- in acrobots and cart-poles, people talk about PFL heavily. They talk about energy shaping, they talk about these kinds of things. I only presented the ones that I think are going to be useful in general. But to some extent, right now you could think about these techniques as being orthogonal to our main line of thinking, OK? Now, like I said, this is a puzzle that nobody has the answer to yet. So actually, what I want you to be thinking here is, what can we do with all these things? So for instance, I told you that dynamic programming works well for low-dimensional systems. Nonlinear systems, no problem. Low-dimensional, I can discretize the space, I can just run my algorithm, bup-bup-bup-bup-bup, compute the optimal cost-to-go, optimal policy. OK. But so maybe there's obvious things to do. And actually, I think there are obvious things to think about-- research questions that you could be thinking about for your final projects, right? So for instance, you know, we used partial feedback linearization to at least linearize part of the dynamics of the system. So this is wild speculation here, but let's say I have a problem that I could then describe as x1 dot is A1 x1 plus A2 x2 plus Bu, and then x2 dot equals f of x1, x2, u, where, let's say, the dimension of x2 is much, much less than the dimension of the whole original system. That's the result I would get from doing a partial feedback linearization, where, let's say, for Little Dog, I have five degrees of freedom and four actuators, and I'd end up with sort of one very nonlinear thing and then a bunch of very linear dynamics after doing a partial feedback linearization. So fantastic research question-- could I use that trick and combine it with dynamic programming, let's say, to use dynamic programming to solve the hard part and do some sort of LQR to solve the easy part? I don't know. Probably you can. I bet you can. And I'd be excited to think about, with any of you, what you could do. But these are the reasons I'm saying things like PFL, right? So imagine doing PFL plus dynamic programming. I'd bet you could do higher-dimensional optimization if you could exploit tricks like that.
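Written out, the structure being described is (my transcription of the spoken equations):

```latex
\dot{x}_1 = A_1 x_1 + A_2 x_2 + B u, \qquad
\dot{x}_2 = f(x_1, x_2, u), \qquad
\dim(x_2) \ll \dim(x),
```

so the speculation is to let dynamic programming handle the small nonlinear block x_2 and something LQR-like handle the large linear block x_1.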
So that'd be a fantastic sort of research question to think about for your project. Are people sort of OK with how this is going? Yeah? AUDIENCE: Are any of those [INAUDIBLE] dimension that [INAUDIBLE] convergence or-- RUSS TEDRAKE: Good, OK. So they're not explicitly-- these are not explicitly targeted at solving the optimal control problem. So proof of convergence, we have to define what we mean. I mean, PFL, certifiably, provides this sort of dynamics. The energy shaping, under some conditions, we showed we could regulate the energy of our systems. So each of these has its own sort of proofs and task space. But most of them are not directly aimed at proving that you've obtained some optimal policy. AUDIENCE: If you obtain a kind of policy which would [INAUDIBLE] goal. If you have a specific state that you want to be in [INAUDIBLE] to get to that, maybe not optimally but-- RUSS TEDRAKE: Good. So in the energy shaping I talked about for the cart-pole-- the energy shaping plus PFL that I talked about for the cart-pole and swing-up-- there's a citation in the notes of a guy that proved that for a set of parameters-- well, I'm being a little flippant-- there's a regime where that's guaranteed to work. But the general proof that you'd like to have is not there. For the acrobot, I know about even less proof than that. There is one particular controller that we implemented-- that someone who took the class implemented before. And it does have a Lyapunov proof saying it'll get to the top. But it sort of works by going-- this is an acrobot. It's going ee-ee-ee-ee. It's like really unattractive. So it does something really stupid. You wouldn't want to run it on your real robot, probably, unless you're very patient. But it has a proof to get to the top. So these are the trade-offs that people make. OK. Right. So I hope that was worth doing. I just wanted to quickly make sure we're all calibrated. So let's talk a minute now about-- so I said that for the dynamic programming, we showed it working on the pendulum. I put a big asterisk saying that there's discretization errors present. So even for the pendulum, it'll solve it lightning fast. But it's solving the discretized system, not the continuous system. And the optimal policy you get out could be different than the optimal policy for the continuous system. OK. So can you do dynamic programming for the acrobot and cart-pole? That's an obvious question. So the answer is yes. People have done it. My code on the acrobot and cart-pole runs in a few seconds. It's not a problem of dimensionality. But the results of my code are not satisfying because of exactly the asterisks I put on the pendulum. So let's just sort of evaluate dynamic programming as we go forward. So absolutely, the acrobot and the cart-pole both have four-dimensional state space, one-dimensional action space. Easily discretized these days. That's still low-dimensional enough that I can actually bin up the space. That wasn't true when people were doing it in the '80s, but it's true today. OK. The only real problem with it is that there's discretization error. And essentially, the discretized dynamics can be a poor representation of the continuous dynamics. If you spend a lot of time with the acrobot, you find actually the acrobot's dynamics are sort of wicked in a lot of ways.
So actually, spending a lot more time with it recently-- I mean, even just sort of making sure that energy is conserved when you put no torque into your system-- this is the basic, absolute thing you do to make sure you got your equations of motion right-- the acrobot, you have to put your integration tolerances up the wazoo to make sure you conserve energy. I mean, sort of [INAUDIBLE] resolution, the relative tolerance in the ODE solver in Matlab had to be something like 10 to the negative ninth or something to make this thing integrate and look like it had a flat line in energy as it's just swinging around with zero torque. As a consequence, when you discretize it, and you run your optimal control, which converges nicely on the discretized system-- you can't discretize it with sort of 10 to the negative ninth precision. And you'll find that your discretized system doesn't conserve energy, for instance. So that's one of the major shortcomings. OK. And so for that reason, when we talk about the optimal control for the acrobot and the cart-pole, we're actually going to use some other methods. But I don't want to move on without seeding the idea that there are ways to fix this, potentially. So for instance-- I mean, I think there's lots of good work to be done in sort of the dynamic programming world. I think there are ideas from discrete mechanics and from finite element methods, where people have thought a lot about the consequences of discretizing PDEs and trying to, for instance, conserve quantities like energy. My guess is if someone had some excitement or experience with these sorts of methods, I'd bet the next time I teach the class I can say it works for the acrobot. So this would be a great final project, yeah? And publication. Just do a smarter discretization of the dynamics-- so the discrete mechanics philosophy is that if you've taken your Lagrangian, and you've turned it into x dot equals f of x, u, and then you discretize, then it's too late. You've already killed the beauty of the Lagrangian. And the discrete mechanics point of view is you should discretize the Lagrangian, do the discretization up here, and then carry that down to your equations of motion. And these sort of discrete mechanics principles tend to have much better properties, energy conservation and stuff like that. We might get to it in our trajectory optimization family, but there's a line of work now called discrete mechanics and optimal control that's been done by Marsden et al. at Caltech that I think is my best lead right now on how to fix these problems. OK. There's another idea out there for how to fix these problems. If you judge that your problem is just that your discretization is bad, another big idea is variable resolution methods, which says, let's stick to our guns-- discretization is going to work as long as I have enough resolution. And because of computational limitations, I'm just going to make sure I put the resolution in the right places. So if you discretize the pendulum or something like this, when you do your optimal control solution on this, you find out, for instance, that when you're transitioning from this point-- this way, energy is not conserved. Or the value function at these corners is very, very different.
If it looks like there's something more going on there, then let's just add more resolution there-- as much as necessary, sort of, to capture the dynamics. There's a nice line of work in this by Munos and Moore, the same people I listed for doing the barycentric interpolation. They talked about variable resolution DP methods. I think Woody thinks that this is the way to get value iteration to work on at least the minimum time problem for the pendulum and for the brick-- that if you use the right splitting criteria-- and that's the big question, I think, in this work: what's the right splitting criteria-- then you can actually maybe make serious progress on these problems. OK. So I've given you a line of thinking about one class of algorithms, dynamic programming. We showed how they could apply to the pendulums and decided to not show how they don't quite work beautifully for the acrobot/cart-pole. If I built an algorithm that took overnight to run, then I think it would work. But they don't run sort of in real time in the class, so let's leave it as future work to make better dynamic programming algorithms for pendula and acrobots. And I'm very, very serious about this. These are not killer problems. These are problems that-- I mean, these tools sort of that you see in the class, I think, just really haven't been put together before. I think that we're lining up all the tools to solve these basic problems. I wish they were all solved already. We've just got too many problems and not enough time. But I mean, you could solve this problem in your final project and make serious contributions to the field. This is the state that the field is in. OK, good. So change the world. Do that for your final projects, ideally. That's where we've come from. Let's start thinking about-- let me just bite off today the next big chunk, OK? You're going to finish this class with sort of a Chinese menu of tools that hopefully will help you solve all your problems. OK. So analytically and computationally, there are two major approaches to solving these optimal control problems-- let's even just say numerical optimal control. The first one, like I said, is you're trying to solve a PDE-- a Partial Differential Equation. The particular name is the Hamilton-Jacobi-Bellman equation. But there is a second approach. The second approach is policy search, direct policy search. It's very much still governed by the partial differential equations of optimality. But the solution technique is different. Here's the idea. Let's design not directly the-- I mean, we've designed our cost function. We have our cost function, we have our dynamics. Let's design a family of control systems with a bunch of parameters, OK? So the policy search methods define some class of feedback policies that you care about that are parameterized by some vector alpha. We define a class of feedback policies. And then using the same exact formulations we used before, where we used J to represent the long-term cost of starting with some initial condition at some time-- this is what I wrote down before. Now I'm going to be even more specific and say, let's make J a function of alpha, x0, and t. I'm going to say it's a function of the parameters. And I'm going to say that u is now the result of evaluating my control system with parameters alpha. So it's a little abstract right now, but let's make it concrete. So here's a couple of potential parameterizations, right?
So we talked about the linear family of feedback policies, linear feedback control with some big matrix K. Well, that's a perfectly acceptable policy parameterization. If I want to search over the class of feedback policies that are linear feedback policies, then I could call that pi of alpha, x of t. It just happens that that control policy is alpha 1, alpha 2, through alpha n times x. It's a perfectly reasonable class of control policies. Actually, just to throw it out there, it's probably a bad choice, actually. Because I think people know that even sort of LQR problems are not convex in this parameterization. I haven't told you how we're going to solve it yet, but let me just throw out the fact that I think most sort of serious control people wouldn't use this as a representation to search over, because it turns out the relationship to performance based on these parameters is complicated. Maybe unnecessarily so. A much-- a very common parameterization is sort of an open loop control tape, I'll call it, where in the simplest form, let's say u is just a tape-- is that a reasonable way to write it? And every time, I just-- this would be a zero-order hold. And at any time, I just find the closest sort of point in my control tape. And I'll put-- so I've got alpha at time 1, I've got alpha at time 2, alpha at time 3 just in the tape. And I just-- as I run my policy, I ignore state, and just play out an open-loop tape. That's a perfectly valid policy representation. Maybe a better one would be something based on splines. Or even just a smoother interpolation, maybe that could be better. People use things like neural networks as policy representations. There's a lot of work on things like radial basis functions. And in general, a lot of sort of kernel methods in machine learning you can use sort of-- the point of this line is you can use general machine learning function approximators. And those tend to be reasonable policy representations, where maybe the weights in the neural network, even if you don't know what these things are-- I'm actually going to do a little bit of an introduction to function approximators once we start using them heavily in class. But just from seeing the words, even if you've never used one, you probably have a sense that these things are sort of ways to represent functions with a lot of parameters. And those are perfectly good candidates. So the key idea here is, if we're willing to parameterize our control system with some finite set of parameters, then I can turn my optimal control problem into a simple parameter search. In general now, if I want to minimize-- the problem is to minimize over alpha, let's say, J alpha from the x0 I care about at time 0. I could describe J through those equations, through some Matlab function, let's say, and just say, find the minimum of this function. You can do it with sort of fmin or various-- any old tools from nonlinear optimization. Seem reasonable? It's important to make sure we understand why it's different, OK? So-- do I still have this up on the board? AUDIENCE: Are these [INAUDIBLE] approximators [INAUDIBLE] x as input and so y is output [INAUDIBLE]? RUSS TEDRAKE: Potentially x and time as an input and u as an output. You're trying to-- the function approximators represent this function, right? They're a mapping, which depends on parameters alpha, from x and t in the general case to u. In many ways, this is a very naive approach. The dynamic programming view of the world is very beautiful.
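To make those parameterizations concrete, here's a minimal Matlab sketch. All the numbers and names are illustrative assumptions, not code from the class; J_of_alpha stands in for a function that simulates the closed-loop system and returns the cost.

```matlab
% Three policy classes, each parameterized by a finite vector.
K = [10 3];                                % linear feedback: u = -K*x
pi_linear = @(x, t) -K*x;

alpha  = randn(1, 20);                     % an open-loop control tape
tknots = linspace(0, 5, numel(alpha));
pi_tape = @(x, t) alpha(find(tknots <= t, 1, 'last'));  % zero-order hold

centers = linspace(-pi, pi, 10);           % a smooth function approximator:
w = randn(10, 1);                          % radial basis functions
pi_rbf = @(x, t) exp(-(x - centers).^2/(2*0.5^2))*w;

% Policy search then reduces to a parameter search, e.g.:
% best = fminsearch(@(a) J_of_alpha(a, x0), alpha);
```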
We turned our complicated long-term optimization of this function into a recursive form, where at each step I only had to think about my instantaneous control action. I did a min over u for that one step, and-- if I could solve my value function, then that was enough. I could use my value function to turn my long-term optimization into a short-term optimization, min over u. Tell me if I need to say things differently. In many ways, this is the dumb approach. We're not-- we're throwing away the structure in the problem. We're just going to directly search over parameters. The saving grace is that I don't have to-- the value function can turn out to be a hard thing to represent, especially if-- with dynamic programming, I can't represent it in 10 dimensions, let's say. So this dumb approach can actually work in more complicated systems. The only problem is it doesn't guarantee global optimality. Like I said, in some ways it's a very naive approach. It tends to scale better-- it's not as sensitive explicitly to the dimensionality of the problem. There's another nice thing about it, which is there's no explicit need for discretization, which I told you was a big problem in the dynamic programming thing-- except for there's discretization potentially in the ODE sort of integrator. And that can make-- we do know how to make that arbitrarily accurate. The only real killer of these methods is that they're very subject to local minima. Yeah, please. AUDIENCE: This function approximation, isn't it like discretization of your state space? [INAUDIBLE] RUSS TEDRAKE: Not necessarily. AUDIENCE: I mean, but you essentially-- you don't have full control [INAUDIBLE] available [INAUDIBLE]. RUSS TEDRAKE: It's a good question. So take the linear feedback example. If I have a problem that I know the-- if I take an LQR problem and I solve it with policy search, and I know the feedback policy should exist in the class of linear [INAUDIBLE] things, then I haven't lost anything by doing an approximation. And in general, these things are-- so radial basis functions have a feeling similar to discretization. But some of them are much smoother and much more continuous than these hard discretizations. And the way that you evaluate them, which is what's essential, is you're still going to find-- so if I evaluate the system by literally taking my parameters of my neural network, radial basis function, whatever, running this function without any discretization, then it'll give me an accurate measurement of this function. Discretization comes into the-- doesn't come into the evaluation of the function. In dynamic programming, it's fundamental. Discretization is right there. We always operate directly on a discretized system. So I do think these things are much closer to being continuous solvers. You might say another disadvantage is that it doesn't exploit the recursion that we know to exist in the problem. So it sort of feels like we should be able to use that trick more generally. And a lot of times, these methods are going to be the very naive things which throw that away. AUDIENCE: You said DP requires a discretized space? RUSS TEDRAKE: Yep. That's what I said. Do you disagree? AUDIENCE: [INAUDIBLE] can be [INAUDIBLE]. RUSS TEDRAKE: Well, then I would call that an approximate dynamic programming method, which is-- these are the-- it depends where you draw the line. I'm going to talk about those in the reinforcement learning part of the course.
But for the thing that we talked about, which has guaranteed results for the discrete system, which is sort of really dynamic programming, discretization is exactly fundamental. So Alborz is pointing out that actually there are people that use function approximators in dynamic programming algorithms. And we're going to talk about those in the future. But they tend to be approximate. A lot of times, they have weaker guarantees of convergence. But we'll talk about those as they come up. OK, good. So now we have a very simple problem. We've taken our optimal control problem that we've thrown all kinds of work into. And we've talked about the recursion, we've talked about the Bellman equation. And now we just said, OK, might as well just think of it, that-- if I run my robot with three different parameters, I'm going to get three different scores. If I literally take my acrobot and I make the parameters all 1, then I'm going to get some score for running that policy. If I change my parameters so the third parameter is 2, I'll get a different score. And I'll get a different score if I run a different set of parameters. I'm just going to run my trial, let's say, for 10 seconds with different sets of parameters. And this is going to give me some landscape, which is J of alpha. Now typically in these problems, I'm going to be thinking about optimizing it from a particular initial condition. We can talk about later how-- if you want to get around that. But this is just some function J of alpha, right? So how do we optimize J of alpha? Well, there's lots of good ways, from nonlinear programming, from nonlinear optimization. What are some good ways to find the minimum of J of alpha? Guess a lot of alphas, that's one approach. Pick the smallest one. OK. AUDIENCE: You can form J in a way which is, you can take [INAUDIBLE] dynamics smooth then we can solve [INAUDIBLE]. RUSS TEDRAKE: OK, good. So let's say I have an initial guess at J, and I can actually compute the derivative of J. Which I can always do, because I could do it numerically if I wanted to, right? I can just evaluate it a bunch of times to do it if I had to. But if I can compute dJ d alpha, then that'll tell me that slope. And I could, for instance, do gradient descent on that slope. My second alpha that I'm going to try is going to be my first alpha minus some movement in the direction of the gradient. I could take this, estimate the gradient, and then take a motion that moves me down the gradient, make a new update, move down the gradient, make a new update. And eventually I'll get to the minimum where the gradient is equal to 0. How many people have used gradient descent before in something? OK, good. So nobody actually does that, I don't think, anymore. Because we have sort of-- I mean, that's absolutely the right way to think about things, and gradient methods are critical. But optimization theory has gotten pretty good. So there's another way to do it. Let's say I could not only compute the first derivative, but let's say I could compute the second derivative. My initial guess here, that could be the first derivative. And it could be the second derivative. Then what could I do? AUDIENCE: Fit a parabola to it? RUSS TEDRAKE: Fit a parabola to it? I didn't quite hear what you said. Is that what you said, too? AUDIENCE: Steepest descent. RUSS TEDRAKE: Absolutely. Let's fit a quadratic bowl to it, right?
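Here's a little sketch of that plain gradient descent loop, with a made-up quadratic J standing in for a real cost evaluation, and a central-difference estimate playing the role of dJ/d-alpha:

```matlab
% Gradient descent on J(alpha) using a numerical gradient estimate.
J = @(a) (a(1)-1)^2 + 5*(a(2)+2)^2;   % toy cost with minimum at [1; -2]
alpha = [0; 0];  eta = 0.05;  h = 1e-6;
for k = 1:200
    g = zeros(2,1);
    for i = 1:2
        e = zeros(2,1);  e(i) = h;
        g(i) = (J(alpha+e) - J(alpha-e))/(2*h);  % central difference
    end
    alpha = alpha - eta*g;            % step down the gradient
end
```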
And actually, the problem I did right now, probably the quadratic bowl is a pretty good match to the real optimization. And why not move directly to this point and then fix-- find a new quadratic bowl and move directly to that point. So this would be a second-order method. OK. And so Newton-- a lot of people call it the Newton method. Turns out it works just as well in high-dimensional systems. If I have a bunch of alphas, I can do these second-order methods. And doing this, in general, is what is called sequential quadratic programming. Yeah. And they tend to converge-- there's an additional cost, potentially, of computing that second derivative. But you can do it by remembering the past-- the same way you can estimate your gradient by remembering a couple of samples and just doing a numerical gradient, you can remember a couple of samples and estimate the second derivative. So I'd say probably the most common method used right now-- how could I say that? But one of the very common methods is to try to compute these analytically, because-- I'll show you a good way to compute those. And then these, which could potentially be more trouble to compute, we'll just use sort of a numerical secant method to collect our second-order terms, and then use sequential quadratic programming. The thing that makes sequential quadratic programming better than sort of the naive gradient descent is that it's faster. But the real thing is that optimization theory is just this beautiful thing. Now I can take constraints into account very simply. So let's say I have-- this is sort of a crash course in optimization theory. But I think you can say in a few minutes most of the key ideas. I mean, to be fair, most of the lectures we've had so far, you could take an entire course on each one of those lectures. So pick your favorite, take another course. But, you know, I think that's-- I think it's useful to have the courses that go over a lot of topics, and that's what this is. So what happens if I now have, if I want to minimize over alpha J alpha subject to some constraint, let's say-- I'll just call it-- I'm running out of letters here-- A of x equals 0. We know how to formulate those with Lagrange multipliers. But in general, finding equality, solving for equalities, that's just root finding. That's actually no more difficult than finding minima. I can use the same Newton method to find-- to do root finding. So if I have some constraint, let's say, alpha-- oh, that was a really bad choice. Let's call this something other than A. Let's just call it f of alpha, just so I keep my dimensions in the same direction here. OK. If I want to find-- I'd better make it go through 0. If I want to find the zeros of that function, I can use the same exact gradient updates, right? I can find a zero crossing if I have an initial guess at the system. I take the linearization, it's going to give me a new guess for the zero point. I take the derivative there, that'll get me close. That's the Newton method for root finding. OK. So by knowing the gradients, you could sort of simultaneously do minimization and root finding to satisfy constraints.
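A sketch of that Newton iteration for root finding, on a made-up scalar function; the same update with J' and J'' in place of f and f' is the second-order minimization step:

```matlab
% Newton's method: linearize at the current guess, jump to where the
% linearization crosses zero.
f  = @(a) a^3 - 2;          % toy function whose zero we want (2^(1/3))
df = @(a) 3*a^2;
a = 1;                      % initial guess
for k = 1:10
    a = a - f(a)/df(a);     % Newton update
end
```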
Long story short, if you have a problem that has the form minimize over alpha some function J of alpha-- it's potentially nonlinear, but you could take its gradients, let's say-- subject to linear constraints, equality constraints, or even inequality constraints, you can just hand that these days to some nice solver-- some sequential quadratic programming solver. The one we use these days in lab is called SNOPT-- Sparse Nonlinear OPTimizer. OK. So you could start solving optimal control problems by literally saying, OK, if I run this-- just telling it J, telling it the gradients of J if you can. That'll make it faster. Even if you didn't, you could just say, here's J, find me the minimum of J. You hand it to SNOPT, it'll go ahead and do a lot of work and come up with the best J, which is going to be some minimum of this cost function. There's no guarantee that it won't find this one. It's subject to local minima. But sequential quadratic programming methods tend to be better than gradient methods in avoiding local minima, because, for instance, if I'm here and I estimate the quadratic bowl, if I just take bigger steps, then I tend to jump over some small local minima that a gradient method might get caught in. So just experimentally, people know a lot about how it works on quadratic programs if they're actually-- if the system is actually quadratic. If it's a nonlinear system that you're approximating as quadratic programs, then they sort of wave their hands, but it works really well in practice. Yeah, OK? So we have a new way of solving optimal control problems. Just write the function down in a function that SNOPT can call. Give it a set of parameters alpha, it'll churn away and find the best alpha. All that's left for us to do in this class is figure out the best way to hand it to SNOPT. We want to make SNOPT's computation as effective as possible. And there's a lot of different ways to do it. So the first way is literally parameterize your control system, call SNOPT. But let's at least be smart about computing the gradients. Let's avoid asking our nonlinear solver to compute the gradients for us numerically, because we can give it those analytically, exploiting the structure in the additive equations, the additive cost optimal control equations. And as it turns out, it's a direct and clear descendent from the Pontryagin minimum principle, OK? So I'm going to show you lots of examples of these things working on Thursday. But I thought today, let's just make sure that the basic idea of what we're doing here comes through, this policy search, and show you how to compute those gradients. In fact, let me just tell you the result first. I think that works sometimes. OK, so given J-- I'll just leave off that end condition for now, the terminal condition. The goal is to compute partial J of x0, partial alpha. My claim is I can compute that very efficiently by integrating the system forward from 0 to T, then integrating backward from T to 0, and then I'll get my gradient. OK. You integrate the system forward, just like you would do it, run any old simulation. But while you do it, keep track of a few key variables-- similarly, g of x. Anybody recognize that equation? It's written in a little bit different form, but. AUDIENCE: Filter equation? RUSS TEDRAKE: It's not a filter. Well, it could be interpreted as a filter of something probably, but. It's an equation we've seen before. AUDIENCE: Adjoint. RUSS TEDRAKE: It's the adjoint equation from the Pontryagin. OK. OK.
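To pin down the recipe, here's a minimal sketch on a made-up scalar example; the system, policy, and cost are all illustrative assumptions, chosen so every partial derivative can be written inline:

```matlab
% xdot = -x + u, policy u = -a*x, running cost g = x^2 + u^2, so the
% closed loop is F(x) = -(1+a)*x with G(x) = (1+a^2)*x^2.
a = 0.7;  x0 = 1;  T = 5;

sol = ode45(@(t,x) -(1+a)*x, [0 T], x0);   % forward pass: store x(t)
xt  = @(t) deval(sol, t);                  % interpolate the stored trajectory

% Backward pass: z(1) is the adjoint y with ydot = -dG/dx - (dF/dx)*y and
% y(T) = 0; z(2) accumulates the gradient, integrating -(dG/da + y*dF/da)
% backward so that its value at t = 0 is dJ/da.
rhs = @(t,z) [ -2*(1+a^2)*xt(t) + (1+a)*z(1); ...
               -(2*a*xt(t)^2 - z(1)*xt(t)) ];
zb   = ode45(rhs, [T 0], [0; 0]);
z0   = deval(zb, 0);
dJda = z0(2);   % one forward pass plus one backward pass, any number of alphas
```

A finite-difference check on J(a) is a good sanity test for the signs here.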
Then the gradients-- OK. So I'm done writing for a second, let's talk. Do you remember the story from the Pontryagin? The derivation sketch I did, we said that we had some functional, right? It was that if we change our control actions, we want to make sure that changing our control actions at all doesn't increase the-- doesn't change the constrained minimization of J subject to the constraints of the dynamics. y, in that derivation, turned out to be the Lagrange multipliers that enforced the constraint. OK. What we did was-- by making sure that this equation was satisfied and this equation was satisfied, we made sure that we were at a stationary point, at a minimum of our functional, our constrained functional, of our Lagrange multiplier equation. OK. It's exactly the same reason we're doing it here. We now have a functional which depends on J. This functional J, the Lagrange multiplier functional. And the derivations are in the notes. I won't do it again. By going backwards, by going-- integrating forward, we ensure that this constraint is satisfied. By integrating backwards, we solve for the Lagrange multiplier. What we're left with is we can now, since the gradient with respect to the Lagrange multipliers is 0, and the gradient with respect to the state equations is 0, the only thing left is the gradient with respect to the parameters alpha. And it turns out to be this sort of very simple equation. So it's this beautiful thing, right, that actually-- I hope this is-- it's a lot to write on the board real quick, but it's actually a pretty straightforward algorithm for computing the gradients, efficiently computing the gradients partial J partial alpha. All I have to do is simulate the system forward, simulate this gradient equation backwards, and I'm left with a direct function of alpha, OK? How many people have worked with neural networks before? Yeah? OK. Well, this is the back propagation. This is the back propagation algorithm for neural networks. Turns out to be exactly the same. This is the continuous time form of it. People have worked on it as back prop through time for recurrent neural networks. But the exact way-- the reason the back propagation-- so there was this revolution in the mid '80s that basically suddenly, everybody said neural networks will solve any problem. Some people still say that today. The thing-- the only thing, really, that happened, I think, from my point of view, is that somebody came up with an efficient algorithm for computing the gradients of the neural network weights as a function of the input/output data. It's exactly this idea that you can march the system forward and then integrate backwards. In that case, through a big neural network, you had to integrate these equations backwards. Being able to compute those gradients faster was enough that the world started saying neural networks are going to match the computational intelligence of the brain and solve AI and all these things. So it's a little dry, maybe. But this is potentially very enabling to be able to compute gradients efficiently. It could change what problems you can solve. OK. Are people OK with the big picture of where things are? Yeah? Good. So on Thursday, I'm going to show you, now that we know how to compute the gradients efficiently, I'm going to show you this put to work, the intuition of sort of changing a policy, searching in policy space to solve problems like the acrobot/cart-pole, and some simpler examples.
And the dumb idea is, let's just make it a straight, nonlinear optimization problem over alpha. And I'll try to help you compare and contrast the way that works compared to the dynamic programming. See you then. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_18_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN W. ROBERTS: We're continuing our investigation of stochastic gradient descent. This time, we'll be talking about the stochastic policy interpretation. And I have some examples. Well, a very simple sort of system example. But a whole bunch of ways of looking at it and fooling with the policy parameters that hopefully give you some feel for how to actually go about using this, if you want to use it in your project or something like that. So here. OK. So I gave a little introduction to the idea of a stochastic policy last time. But this time, I'll give maybe a little review of the things and also a sketch of the algorithm. I think someone asked-- Joe asked for pseudocode-- to get a feel of the process you need to go through. So just in sort of words, this is the process. I'll just get right into it. Set. Alpha. And again, there's a bunch of ways of actually implementing this and going through the details of it. But this is a very simple way and easy to understand. And most of the other forms are pretty clearly derived from this. We can sort of tweak it maybe to get a little bit better performance or fill in the details in different ways. But set alpha to an initial guess. Wow. Sorry. So this is what you start your policy off as. You can start it as zero or a bunch of random numbers. And this can affect your results a lot actually. Good initial guesses will obviously converge faster. But also they can be in different basins of attraction for different local minima. So it also makes sense sometimes to run it several times from different initial guesses and just see where it goes. And if they all converge to the same thing, that means you're at least in a local minimum that has a large basin of attraction. While, if they go to very different things, that means that you sort of have a local minima problem. And you can take the minimum of all those runs or you can try to investigate exactly why you keep on getting stuck. So the initial guess isn't just like come up with the best thing you can. It also means sort of map out the space of policy parameters somewhat so that, when you get stuck in a local minimum, you have some confidence about how good that policy really is. So then this is again, you don't have to do this. But run a policy pi alpha. So this is the policy with your parameter set of your alpha. And store the cost as your baseline. So that gives you your first baseline. Then let's see. The Do While loop here. So do draw noise z. So if you remember before, maybe I should sketch this first one and fill both these boards with the sketch, z is the noise you add to your policy. So when you're searching, when you sample, that's the vector you add. I'll write out the details of it over there. So draw your noise z from some distribution. Now, yesterday we talked about the example of a Gaussian. That's a common one. But there's a number of other ways. And actually, today we'll talk about a different distribution and sort of why you'd want to use it. Run system with pi alpha plus z. So this is sort of when we're evaluating how well that policy does. And then we'll say we'll store this just in J. All right.
Now, we do the update. Remember? That we were talking about. So alpha gets alpha plus negative eta times, J minus b, times z. b is our baseline up here. So J minus b, times z. And then update your baseline. Now you can do this with a second run where you run your new policy. That's sort of the more expensive way. But you can have confidence in that baseline, as long as it's not too random when you evaluate the system. You could do a decaying average. You could have, for example, b gets 0.2 times your J plus 0.8 times your old baseline. Right? And so, this one is sort of an exponentially decaying average where I take my new one, smooth it out by averaging with the previous one. And then if you think about what this does, sort of every measurement gets sort of exponentially decayed away in its significance as you go through time. That's the update. And then you can do this while not converged. So many times, you can just do a for loop, too. And you can just run it for 100 iterations or several iterations and see whether or not it looks to have converged. That's what I often do. But this is really what I guess you're doing. It's just the for looping stuff is sometimes easier and prevents you from getting stuck where your convergence condition is never met and you have to Control C. So hopefully this is clear enough that, if you wanted to implement this, you can go do it and try it for your project if that's what you're interested in. All right. So-- STUDENT: [INAUDIBLE]. JOHN W. ROBERTS: I'm sorry. This is et cetera. There's different ways you could update your baseline. You could do it with a-- yeah. Sorry. I mean, these are two common ones. I mean, these are the two that I usually use. But you could have critics and stuff like that, too, where you have more complicated updates for having a more complicated baseline. All right. So when should you use this? We'll just say stochastic gradient descent. Well, it's important to remember, one, that it never does better than true gradient descent. So if it's cheap to evaluate the true gradient, you can follow that true gradient down. And you're in pretty good shape. Now the fact that it's random maybe gives you some robustness to very tiny local minima. But you could add that in your gradient descent, too, if you're just [INAUDIBLE] yourself around a bit. So stochastic gradient descent-- if you have an easy way to compute the gradients, you can do gradient descent using those. Or you can even do something like SNOPT or some higher-order method. So if it's easy to compute the gradients exactly, it doesn't necessarily make sense to do stochastic gradient descent. If those gradients are impossible to compute or extremely expensive to compute, then it can make sense. Then also, if you have a good parameterization, it can be quite efficient. Again, I guess this isn't a prerequisite, because it will work with high dimensions and just sort of naive parameterizations where you have loads of params. But it'll be really slow. And it can get stuck in local minima. So if you have a good parameterization, it can be much more reasonable to use this. An example is, in our lab, we have this glider we're trying to have perch. So we launch it out of this catapult. It flies through, executes a maneuver, and then tries to catch this line. Right? Now, in that case, if your parameterization were the kind of open loop tapes we've previously used, where it's just what elevator angle did you try to servo it to, that could be a bad parameterization.
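Here's the whole loop from that pseudocode in one Matlab sketch; the toy cost function stands in for actually running your system and measuring J:

```matlab
% Weight perturbation with a decaying-average baseline.
evalCost = @(a) sum((a - [1;2;3]).^2);   % stand-in for running a trial
alpha = zeros(3,1);                      % initial guess
eta   = 0.01;                            % learning rate
sigma = 0.1;                             % noise standard deviation
b     = evalCost(alpha);                 % first baseline from a nominal run

for k = 1:2000                           % "while not converged"
    z = sigma*randn(size(alpha));        % draw noise z
    J = evalCost(alpha + z);             % run system with pi_(alpha+z)
    alpha = alpha - eta*(J - b)*z;       % the update from the board
    b = 0.2*J + 0.8*b;                   % exponentially decaying baseline
end
```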
That kind of tape is a parameterization that's going to allow sort of rough things. And so, trying to do learning on that can be very slow. Especially because evaluation is so expensive. So if we can come up with a good parameterization though, suddenly it becomes reasonable to do this. If we can knock it down to five parameters that parameterize a very sort of nice class of policies, now maybe we don't need that much data. And we can actually get it just by launching it. So if we can come up with a good parameterization, doing this should work. Yeah? STUDENT: What if you had a good estimate of what your trajectory should be? Do you parameterize [INAUDIBLE]? JOHN W. ROBERTS: Well, I mean, you could start with a good initial guess, which maybe means that you converge reasonably quickly. But the thing is that the learning is going to struggle to get a lot of improvement. It will get some improvement. I'll show in these examples, if you have ugly parameterizations, it'll still learn a bit. But the problem is that, if you parameterize this as an open loop tape-- let's say we have some really nice smooth trajectory like this. And we parameterize it. Is this all clear? Is this sort of off on a tangent? But this is just sort of about bad parameterizations. So do you want me to go through all this first? And I can talk about the points in detail? STUDENT: Do you need a microphone? JOHN W. ROBERTS: I believe I have a microphone. [INTERPOSING VOICES] Thank you. All right. So are we all on board here? Or are we-- yeah. OK. Good. Yeah. This is sort of disorganized. So if you parameterize by setting all these numbers, you have a nice smooth trajectory. That's the kind of stuff you want. Now if you parameterize by saying each one of these numbers, stochastic gradient descent when it bounces around is going to be like, OK. Send this guy up a bit. Send this guy down a bit. These two guys up. This guy down, this guy down. Up. This guy back in the same place. You know? Something like that. And you get some policy like this. Right? Now your physical system is going to smooth that out. It's going to filter it. Right? You're not going to execute something like that necessarily. But it's very unlikely that that's the right kind of policy. Right? I mean, this very sort of fine-grained tick-tick-tick doesn't really make sense. So if you assume you are going to have something smooth, it's nice to do something that's going to be sort of always relatively smooth. If you were to use fewer points, parameterized as a spline, that could be a win. So that's what I mean by a good parameterization-- it's something that ideally captures the kind of behaviors you want in your system. So yeah. I think the main point is don't-- well, this is not the main point. This is a big point, a mistake that I think a lot of people tend to make, and I can be guilty of it myself, is don't discard back prop through time, RTRL, with SNOPT. Many times, because you can see how simple this update is, it's very tempting just to be like, OK, well, I'll put this loop in and run it. And I'll be in good shape. I mean, because it's trivial to write that code. I mean, you can see it from the pseudocode here. So when you're working on your project, if you're like oh well, I want to optimize. I'll just throw this on there. Bam. I mean, it'll take you 30 minutes to get that code ready. But maybe you think back prop through time is more confusing, you have to deal with all these adjoint equations and stuff like that. But these give you the true gradient. If your gradients are cheap, doing this can be a win.
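To make the tape-versus-spline point concrete, here's a sketch of two ways to turn the same parameter vector into u of t; the numbers are illustrative:

```matlab
% Same parameters, two interpolation schemes.
alpha  = randn(1, 8);                 % illustrative parameter vector
tknots = linspace(0, 10, numel(alpha));
u_tape   = @(t) interp1(tknots, alpha, t, 'previous');  % zero-order hold tape
u_spline = @(t) interp1(tknots, alpha, t, 'spline');    % smooth interpolation
```

The spline version keeps every perturbation of alpha producing a smooth u of t, which is closer to the class of policies you actually expect to be good.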
And then you can get a better policy. You can solve it a lot faster. You can check richer classes of parameterizations because it will solve them so quickly. So when you're thinking about, OK, we're trying to solve this problem and you have your project, don't choose this without being very conscious that it has a lot of limitations. And there's a lot of alternatives which could be better. So first think: are there very solid sort of things you can do-- do the gradients with SNOPT converge nicely? If you can't, if you don't have a good model, if your system is too complicated, stochastic gradient descent maybe is what you want. All right. So that's sort of a discussion coming back from Tuesday on when to do these things, and also, hopefully, code that-- now that you have it in your notes-- you'll be able to implement pretty easily. So now getting onto new stuff for today, we're talking about stochastic policies. And in particular, we are talking about a certain class of these kinds of algorithms. REINFORCE algorithms. All right. And I think that these were introduced by Williams-- he's at Northeastern-- in '92 or something. So once again, trying to get across the idea of the stochastic policy. This interpretation is very different. We're not going to do a linearization. We're not going to assume we have this nominal policy, we sample and try something else. We're going to assume that we have some distribution as our policy. Our policy is a distribution. So we can think about it as, we parameterize our distribution with alpha. Our policy then has some stochasticity to it, which is going to give us what used to be our alpha plus z. Right? So now this is a sample. This is the output of our policy. This is how we represent our policy. Not as the nominal action, but as some parameterization of the behavior. Right. So this would be, for example, if I were to play a game where I was flipping a coin and I bet on something, it could be like, OK. My parameter here is the percentage of times I'm going to bet money on it coming up heads or something like that. I mean, that doesn't make sense because your [INAUDIBLE] is zero or net zero if you're playing a fair game. But the point is that it's something like, this is the probability of doing this. And so here, we're going to think about this as the mean of a bunch of Gaussians. So our policy is do all these actions with the probabilities described by a multivariate Gaussian with means described by this vector. All right. OK. [INTERPOSING VOICES] STUDENT: So you just have alpha plus z, is the policy just to add noise to your parameters? JOHN W. ROBERTS: I mean, these are the two different interpretations. Right? So I mean, it is effectively doing that. I mean over here, we have our nominal thing. And then we add this noise. Right? Then we have our nominal policy. We add noise with a distribution like this. But this one, what we can have is-- this is parameterizing a distribution. And our policy is to do these things with this probability. Right? STUDENT: And you said that alpha is the mean-- JOHN W. ROBERTS: For us, it's the mean of these Gaussians. But you could parameterize it any way you wanted. I mean, you could have a policy parameterized by, if you have discrete actions, it could be the probability of all these actions and force them to sum up to one. It could be some parameterization of a PDF or something like that. So we're going to talk about it in a limited case.
And I think that thinking about it that way, at least for me, is easier for thinking about where we're going. But remember, that's a lot more general. This is just parameterizing some distribution of actions. And so, it could be something very general. But if you're having trouble putting your head around what all that means, you can think about it as just like, this is describing the means of a bunch of Gaussians. And then our policy is do these actions with the probabilities described by that. Does that make sense? I think that this is sort of-- I feel like I'm running around in circles here. All right. I just want to be clear. So in that case-- this goes through our system and produces cost J. But now J is a random variable. All right. So it no longer makes sense to say, what is J for this policy? Because running that policy is going to produce all sorts of different Js. Right? So when you evaluate the policy, I mean you could have a question of, what is the right way? Should you do the min of J? Should you do, what is the worst J it's going to produce? Or what is the maximum J it's going to produce? Well, the thing that we generally choose in this world-- well, not Earth, but our discipline-- is you want to look at the expected value of J. And so, this is a philosophical decision to some degree. I think Russ may have talked about this a bit. But when you're building an airliner, you probably want to look at the worst-case J. When you're building a gambling robot, maybe you do want the expected value of J. So you can make a lot of money. Right? And so, this is a philosophical decision. But it's analytically tractable. And it makes sense in many cases. And again, like Russ says, animals aren't doing robust control. I mean, if they were, we wouldn't be sprinting around and jumping, doing gymnastics. Well, some people would be doing gymnastics. I'm probably not doing them anyway. But my expected value of trying to do gymnastics would not be high. I can guarantee that much. All right. So what we're going to look at is our stochastic gradient descent now. Since we can't just do dJ d alpha, we're going to have to look at d expected value of J, d alpha. Right? So this is now our metric. And so we have to look at the gradient of that expected value. Now, this then is the definition of expected value. Right? I'm going to call this sort of like the policy parameters that you're running under on this trial. I'm going to call it beta in an attempt to be as unoriginal as possible. So we're going to call this beta. So for the expected value of J, we want to integrate over all beta. And your expected value is going to be-- I'm sorry. I've got an error in my notes. So J of beta, times the probability of beta. So this is just the definition of expected value. Now, if you push the d, d alpha through-- yeah, sorry. If you push the d, d alpha through for a given beta, what you're going to find is that J of beta is not a function of alpha. We're saying beta, written up here, is a function of alpha. I mean, that's how we've derived it. But remember that beta is just the actions you've chosen from a distribution. So if you look at running your system with these parameters, that's not a random variable. That is just your number. Right? So the thing that varies though is the probability of getting beta. Right? This is the function, this distribution. So this is what depends on alpha. All right. Now, this is where a bit of cleverness comes in. You can write this then as an integral over beta of J of beta, P of beta--
so you keep the distribution of beta in there. Then d, d alpha of ln of P of beta, d beta. Right? So this is just saying, when you take the derivative of your natural log, you're going to cancel out this one. Right? You'll get the derivative of the one inside. That makes sense? When you do a chain rule on this, you're going to get one over P beta, d P beta, d alpha. So this is sort of the trick that you need. So you should see this. All right. I can erase the pseudocode. So once you do that, if you look at the expression over there, what you can see is that it's the same as the expected value of J beta, d ln P beta, d alpha. Right? Because we're integrating over all beta, probability of beta. Right? So what's the expected value of this? That means that, if we can make this our update, if we can make effectively this our update, then the expected value of our update is equal to the derivative of the expected value of our performance with respect to our policy parameterization. Do you see? That's sort of cool. Because this seems like sort of a complicated thing, the derivative of your expected value with respect to your policy's parameters. It's this big random thing over a very complicated J. Like a general J. We haven't assumed anything about J. We didn't do a linearization. We didn't do anything like that. Right? So J is still however general you want it to be. It's not local anything. The derivative of this with respect to alpha-- we can make this our update. We follow that derivative. So that's pretty cool. Right? So the question is, now what does this update look like in practice? This is kind of an ugly term. So what happens when we actually try to do an update like this? Well, we can write this as-- our update in the direction of the gradient is going to be delta alpha again. Now we want a learning rate because this is again sort of a gradient following thing. So generally, in there, you want some sort of learning rate. And you're just going to do a gradient descent. You get this J of beta, which again we can now write as J of alpha plus z. All right. And then something to represent that d ln, d alpha, P beta. So we'll call that E. And that's called the eligibility. I think that term may come from neural networks. But you can think of it as-- this eligibility captures how sensitive each parameter should be to an update. So let's say we have 10 alphas. And the eligibility for one of those alphas is very high. That means that, if we do well, that parameter should move much more in that direction. The eligibility is sort of just capturing the weights of how much you think each parameter is responsible for affecting that output. Right? So a big eligibility on a parameter means that that parameter is going to move a lot. If the eligibility is small, it's not going to move much at all. So does that make sense? So that's the way to think about the eligibility. All right. So eligibility is just capturing what we expect the significance to be. Yeah? STUDENT: Is it the quantity inside the expected value brackets? Or is it the expected value? JOHN W. ROBERTS: It's this. It's just this quantity. It's not the expected value. We want our update to be this inside the expected value. Then the expected value of our update is the gradient. So it's just the quantity inside. Right. So I think I remember, when I first learned this, Russ was talking about the eligibility. And I had no idea how to interpret it.
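Collecting that board math in one place-- writing p_alpha for the distribution over beta that the parameters alpha define-- the derivation and the resulting update are:

```latex
\frac{\partial}{\partial\alpha} E[J]
  = \frac{\partial}{\partial\alpha}\int J(\beta)\, p_\alpha(\beta)\, d\beta
  = \int J(\beta)\, p_\alpha(\beta)\,
        \frac{\partial \ln p_\alpha(\beta)}{\partial\alpha}\, d\beta
  = E\big[\, J(\beta)\, e \,\big],
\qquad
e = \frac{\partial \ln p_\alpha(\beta)}{\partial\alpha},
\qquad
\Delta\alpha = -\eta\, J(\beta)\, e .
```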
And if you still are confused about it, I can try to describe it in more detail maybe with a picture or something like that if you'd like. Do people think they have a good idea of what the eligibility means intuitively? OK. Great. STUDENT: [INAUDIBLE]? JOHN W. ROBERTS: Pardon? STUDENT: I know eligibility [INAUDIBLE]? JOHN W. ROBERTS: I'm going to go through how you calculate it right now. Because E, you see, is going to be this. It's going to depend on what our distribution is. So I said, if we parameterize our policy by the means of a bunch of Gaussians, the eligibility is going to be one thing. If you parameterize it as a bunch of Bernoulli variables or something like that, it's going to be different. So it depends on the kind of distribution your policy uses. All right. So-- yeah? STUDENT: If alpha isn't the means of a bunch of Gaussians, then what does alpha plus z mean? JOHN W. ROBERTS: z, alpha plus z is always the action you take. Sort of the-- we actually had some notes on this. If you think about it, let's say that the way you represented your controller-- there are some subtle differences here. But yeah. I think it's worth going through. So let's say I parameterize my controller by three numbers. Like gain on position, gain on the speed, and an integral term. So this is a PID controller. Right? So this is something that most of you I think are probably familiar with. So this is a very simple parameterization of a policy. Right? Now, this is how we control it. Now the thing is that this is sort of what the actions are. So this would be-- a beta would be one of these. An alpha-- and this terminology isn't necessarily standard. I think the alpha and z are. My beta just came up when I was trying to make these notes so I could have a simple term to write down all these expressions. But so this is your beta. This is the thing you're running your system under. Your alpha is the same size as that. But what it is is a parameterization for some distribution of how you select these. All right? So in a Gaussian, I mean I give you a simple one. A Bernoulli distribution, this could be-- let's say KP was either one or five, then my alpha one could be the probability of picking one. And then the probability of picking five is one minus alpha one. STUDENT: So in this case, it's a specific evaluation of your stochastic policy? JOHN W. ROBERTS: This is a specific evaluation. This is the parameterization that defines the distribution. So once you tell me alpha, I can tell you the probability of getting any of these. Right? But when I run it, I have these numbers in there. So this is sort of like a sample from a distribution. And this is what I actually get the cost of. Yeah. All right. And so, if you look at this minus this, that's the noise we talk about in the other interpretation. STUDENT: And the system you're trying to solve is not, [INAUDIBLE] how do you decide [INAUDIBLE]? JOHN W. ROBERTS: Yeah. So we're thinking about this. This is another thing I have a note on. We're going about this in the context of trials. So you run the system under a trial. Right? And then you evaluate the cost. What if your system is just running constantly? Right. Well, you can think about a short period of time as a trial-- you could run it for a while. You can look at discounted cost. And then there's also the average cost or average reward formulation where you can look at what reward you expect to get running it off to infinity. Like, what sort of reward rate.
I think about it right now in the context of, let's just say we can do a trial of something. And if you want to worry about what happens if this is constantly running, there are ways you can still do a trial. And then you can keep track of an eligibility trace, which is different than this eligibility. So that's a confusing thing. What you can do is you can be running constantly. You can just sort of keep iterating and take into account the sort of coupling or [INAUDIBLE]. You can imagine it's sort of like, if I study for my test tonight, I don't have a good night. But I do well on my test tomorrow. So I'm happy. And so, it's sort of like, then my action didn't give me an immediate reward. It gave me a reward later. That's sort of the delayed reward problem, which is very common. And there are ways of dealing with this here where you can still just look at your accruing reward right now. And you can still sort of give yourself a reward for doing good things in the past. You can still handle that. So I mean, that is a good question. And there's a number of ways to deal with this. But right now, just think about executing a trial. Right? So then our E here-- again, to make it look like what we want it to look like, to make it-- well, that tells us exactly what our E is. So we can write it as this vector. d, d alpha one, of ln probability of beta. And this beta is still this vector. But the probability of that is a scalar. Right? So this element of the probability is just a scalar. And that's our entire thing here. Right? So then let's say again, this is where the distribution you choose comes into play. Let's say that we chose our distribution to be a bunch of Gaussians. All the different parameters i.i.d., the noise i.i.d. But we'll just think about it as independent with the same sigma. So the probability of getting a beta, because they're all independent, is the product over all the different parameters. 1 over square root of 2 pi sigma squared. This is going to cramp it if I do that. So I think I have the opposite problem of anyone else. I write way too big. Video people will be happy. So let me just squish it in here. So probability of beta is equal to the product over all my different parameters of 1 over-- well, you know the Gaussian distribution. It's like this. Right. So if you give me a beta and I have the alpha, I can tell you the probability of us picking that beta. All right. So our ln, the natural log of this, will then allow you to turn this product into a sum. And you'll get the sum over i equals 1 to n. Now we can take this derivative. This is a constant. That won't show up. And we'll just have the derivative of-- yeah. Yes. So the derivative-- now we'll get rid of this. This one is simple to do now. It's just a product. So the derivative of ln P beta, d alpha-- I did this again-- equals beta i minus alpha i, over sigma squared. Now if you remember how we talked about it over there, it ends up being [INAUDIBLE]. So here is our eligibility. z over sigma squared. Now what does that mean our update is? So hopefully, this looks familiar. My voice is struggling today. I'm going to come in next time talking like this. It'll be great. So this is our update. Does this look familiar? [INTERPOSING VOICES] Yeah. Actually, let me-- yeah. I guess so. Does that look familiar? Is this no, it does not look at all familiar? Or is this yes, it looks so familiar, I don't waste my time and embarrass myself by saying it does? It does. Right? It's the exact same update. Right?
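Written out, the board math for the independent-Gaussian case is:

```latex
p(\beta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}
           \exp\!\left(-\frac{(\beta_i-\alpha_i)^2}{2\sigma^2}\right),
\qquad
\ln p(\beta) = \sum_{i=1}^{n} -\frac{(\beta_i-\alpha_i)^2}{2\sigma^2} + \text{const},
\qquad
e_i = \frac{\partial \ln p(\beta)}{\partial \alpha_i}
    = \frac{\beta_i-\alpha_i}{\sigma^2} = \frac{z_i}{\sigma^2},
\qquad
\Delta\alpha = -\eta\, J(\beta)\, \frac{z}{\sigma^2}.
```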
Now this one, we don't have a baseline. And we have sigma squared. But that sigma squared is just a scalar. And we already said, baseline or no baseline, whatever you want, it doesn't affect it. Right? Now, is that true for this case? It didn't affect the other one. For this one, it also turns out not to. If you think about your reward as one everywhere, then your derivative-- let's see. Where is this exactly? OK. All right. So if this is constant, then the derivative of our expected value is zero. That means the expected value of delta alpha over here has to be zero because it's equal to that expected value. That means the expected value of our eligibility is zero. Right? So the expected value of the eligibility is always zero. So that means that some constant in there is not going to affect it. Right? When we do this expected value, when we put in our baseline, that's going to turn into zero. Because the expected value of E is zero. The exact same thing we had previously. Right? And you can see it's the case for this specific example of z. The expected value of z is zero again. So it's not going to affect the direction of our update in expectation. So yes, you can do anything like this with eligibilities. You can still use a baseline. Right? So this is really the exact same update that we already found. All right. So I think that's pretty cool. The update that we originally interpreted as locally following the gradient is also following the true gradient-- not some sort of local thing, but the true gradient of the expected value of the performance of that policy. All right? So if you have some really ugly value function, you throw this distribution on top of it and you move it around. It's going to follow the direction of true improvement there. And a little test case. I remember when I was trying to think about interpreting these things back when we were originally doing some of this analysis on the signal to noise ratio-- if you think of a little cusp, if you have a value function that has a little plateau on it. I'll write this on a fixed board, because otherwise it's not going to stay. If you have a value function that's sitting around here-- my drawing isn't going to be too clear. But here, we'll do it in 1D. If you have a value function like this, you sit here and follow the local true gradient. It is very small. The other one says that you'll follow that gradient, no matter how big it is. So what it's sort of doing when you have this Gaussian is it smooths out the kinks and stuff like this in your value function. Right? Because your local analysis says, OK, well, I'll just get local enough that these little kinks in my value function aren't a problem anymore. But here, we didn't say it was local. We said it was global. And so, it's following this gradient, even if we have ugly stuff in our value function. The reason is because this distribution sort of smooths that stuff out. And now it's able to follow it. So that gradient is still well-defined. It takes the stuff and smooths it into something. Does that make sense? I think it's just the big sort of difference in interpretation right here. There's the one where it's this local analysis and follows the local true gradient, but only locally. And then we have the other one which-- it doesn't require a small sigma or anything like that anymore. It'll follow the true gradient of the expected value of your reward. There's a little numerical sketch of that smoothing below.
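Here's that sketch; the kinked cost J of a equals the absolute value of a is made up for illustration:

```matlab
% |a| has a kink at 0, but its expected value under Gaussian noise is
% smooth, and the likelihood-ratio gradient estimate is well defined there.
J = @(a) abs(a);
sigma = 0.5;  a = 0;
z = sigma*randn(1e5, 1);
Ehat = mean(J(a + z));                 % Monte Carlo estimate of E[J]
gradEhat = mean(J(a + z).*z)/sigma^2;  % estimate of dE[J]/da, near 0 here
```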
And it's too big of a mouthful for it to be clear just by saying it. But these are the two different interpretations. All right? Good. STUDENT: That function is basically the policy. JOHN W. ROBERTS: Oh yes, I'll draw this better. So here is sort of our alpha. This is our reward. Here we have our [INAUDIBLE] function. Our policy is taking actions with this probability. All right? That's our policy. So you can see that this-- even though this has a kink in it, as it moves that over a bit, this is varying smoothly. So even though this is a little disconnected like that, this is going to slide over still smoothly. Right? So the gradient is always going to be nicely well defined and everything. And this also goes out to infinity. So even if there's some ugly thing way out there, you're never going to suddenly go and start seeing stuff you never saw before. You always sort of saw it. Right? So you'll be following the sort of smooth one all the time. And that's nice, because maybe you worry about what you're following when you get really broad. One interpretation says, well, we're not in the linear regime anymore. So we're not following that local true gradient. This is sort of the issue that brought it up. Let's say this one says it's the linear analysis. And it says, OK. If my sigma is small enough, so I sample close enough so it looks approximately linear, I'm going to follow that gradient. But instead, when it breaks down, you get this really broad Gaussian. So what does that mean? Well, this one is still following that gradient. So that means that you're seeing stuff all over. And it's following that gradient instead of this local one. Right? Now the problem is that, if you follow that gradient, the stochastic policy isn't necessarily actually the optimal one to follow. When you're actually running your policy in the end-- in certain systems you do. But in many mechanical systems, you don't actually want these random actions. Right? There's the action that actually does minimize cost. And so, the sort of local interpretation has some value because you don't necessarily want to stick with the stochastic policy forever. That's why you sort of reduce your sigma. And you get tighter and tighter and tighter. And you follow the gradient so it's more and more biased to the right spot. Because you can imagine, if you have a value function that's got a global min here, and then say this is all extremely high. And this goes down and it's like a low plateau. Now, if you have a really broad Gaussian, it may say, OK, well, even though this is the best spot to execute, I'm going to slide way over here. Because I have a sort of lower expected cost. Right? That puts you in the wrong spot. If it was really tight, then I'd say, OK. Sit right here. So do you see that sort of distinction? That this broad guy could push you in the wrong direction if it's really broad. And you may not, in the end, want to execute a stochastic policy for this exact reason. Your stochastic policy could have a much higher expected cost than a small sigma stochastic policy. Down to where, if sigma is zero, that's when you actually have the minimum expected cost. Yeah? STUDENT: So when is this used? It's used when you're starting and you don't know about systems? JOHN W. ROBERTS: When you don't have a model, it's used in the same context as the other one. Are you saying stochastic gradient descent as a whole? Or-- STUDENT: I mean stochastic policies. [INTERPOSING VOICES] JOHN W. ROBERTS: It's used for exploration. Right?
Because our other one, where we had this nominal policy and we add a noise, that doesn't look any different, except from this sort of interpretation, than this. Right? So-- STUDENT: So for the [INAUDIBLE] and learning about systems-- JOHN W. ROBERTS: Yeah. It's for exploration. And you need it to get more data. Right? Because if I run the same policy all the time, I don't know how sensitive things are. So my stochastic policy gives me this information to where I can learn. Right? So in the end many times, yes, you'll want a deterministic policy. But for the development of that policy, you'll want to execute all sorts of things that could be suboptimal to give you information. And so, that is the interpretation here. It's for that learning. And if your system is evolving and stuff like that, if it's not constant in time, you may want to always run a stochastic policy. Because you'll be able to constantly learn, if that makes sense. So you may never want to converge. But I think it's-- yes? STUDENT: How do you obtain the parameters of that stochastic policy with that probability? It's very sensitive to sigma, right? JOHN W. ROBERTS: How do I get the parameters of what? STUDENT: Of that distribution. JOHN W. ROBERTS: Distribution? Well, I mean that's what I've set. Right? I said I'm going to act with the probability described by a certain distribution. And I describe the distribution here. So I know these probabilities. Because this, I set. That's from my controller. I designed that myself. What I don't know is this, this value function aspect. STUDENT: So that it should be based on intuition? JOHN W. ROBERTS: No. No. This up here? Oh, how do you decide how broad this is? Intuition has something to do with it. Also, you can imagine sampling. I think I mentioned this on Tuesday. You could sample. You can imagine I sample a few points and I'd look at them here. And they look straight to me. And I'm like, OK. Well, what if I sample bigger? And I get these points. And it looks relatively rough. Then I go, OK. Well, I want to be sampling probably on the regime where we're going to see this roughness to some degree. I don't want to be stuck where it's going to take me forever to go anywhere because I'm in the linear regime. So you can do some sampling and get sort of the coarseness of your value function. And so, I mean that's sort of-- before you run it, you do some of those things. And you figure out how big these parameters are. Because the alternative would be you set it, you see how it does, and you fool around with them all the time. That's another way to do it. But this way, you sort of have a more direct process for trying to be like, OK. I want to be sampling where I actually get some interesting information. And so, you want to be sampling where you're probably getting distributions that cover-- linear, it's fine if it's linear. But you want to actually sample to where you're going to move around to these different features. You want your updates when you change your policy to be on that scale, too. Because if it was really small, it could take forever to get down to the minimum. Or if I was over here and I had small sigma, I may never sample over here. So I'd never know to move that way. Right? And so, if you have a bigger sigma, you might be able to get out of this local min. So I sample over here and see it's better. And move in that direction. Despite the fact that, locally, it was bad. So there's a number of considerations.
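That probing idea is easy to mock up; here evalCost is a stand-in for running a trial on a made-up rough landscape:

```matlab
% Sample J around the nominal parameters at a few scales to pick sigma.
evalCost = @(a) abs(a(1)) + 0.1*sin(20*a(2));   % illustrative rough cost
alpha = [0.5; 0.5];
for s = [0.01 0.1 1]
    Js = zeros(20, 1);
    for i = 1:20
        Js(i) = evalCost(alpha + s*randn(2, 1));
    end
    fprintf('sigma = %.2f: spread in J = %.3g\n', s, std(Js));
end
```

If the spread barely changes with s, you're in the linear regime; when it starts picking up the roughness, that's roughly the scale worth sampling at.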
But yeah, doing some sampling and stuff like that combined with some intuition. That's probably the best I can give you as to how to set these parameters. I mean, that's a tricky thing. That's something that's nice about SNOPT. There's no eta to set. This one also has a sigma to set. So please-- Russ emphasized that I was to make all of you understand this stuff really well. That's why I had two days. And if you do it on your project and you apply it to the wrong system, I'm going to be blamed. [LAUGHTER] Yeah. So if you don't have a model and you don't have anything like that, try that back prop first if you can. Try SNOPT first if you can, I think. If you can't, this maybe won't do as-- this can solve the problem, too, I guess. But sort of appreciate its limitations. STUDENT: So with the stochastic gradient and stochastic policy, you also have to be careful about how you parameterize everything. Right? JOHN W. ROBERTS: Oh, very much. STUDENT: Because, I mean, you are taking the expectation over these betas, which are in fact instantiations of your parameters. JOHN W. ROBERTS: That is a very good point. STUDENT: Are you still having that problem regardless of which two you use? JOHN W. ROBERTS: What do you mean, which two? Oh, whether you're using back prop or this or something like that? STUDENT: Well, no. Between the stochastic policy and the stochastic gradient descent. JOHN W. ROBERTS: OK. I really want to emphasize something right now. They're not different. OK? They're not, they're the same thing. All right? You're correct that your parameterization is going to affect things. Because you're going to get to a minimum, with respect to your parameterization. STUDENT: Right. JOHN W. ROBERTS: You're going to be sampling over your parameters. And so, the cost function is going to be a function of your parameters. So if you imagine, if you have different parameters, the cost function will look completely different. Does that make sense? If I were to parameterize with an open loop tape or a feedback policy, the value function, as a function of those parameters, would be completely different. So it's very important. That's true. It's over these instantiated parameters. So picking that affects everything. And picking a sensible one can be a hard problem. STUDENT: So there is almost like another-- instead of just having the sigma and the rate, you also have to have essentially these alphas and [INAUDIBLE]. JOHN W. ROBERTS: You have it for both, right? STUDENT: Right. Exactly. It's like-- JOHN W. ROBERTS: Yeah. STUDENT: It's almost like you're over-parameterizing everything just so you can get an answer. JOHN W. ROBERTS: Well, I mean you need to parameterize your policy either way. I mean, whether you choose to parameterize it as an open loop tape where you have 250 parameters or something like that, you've still chosen a parameterization. I mean, you still picked a lot of them. But you need to represent it some way. And so, I mean, whether you do back prop or anything, you're still going to have that exact same problem. I mean, you could do back prop and make it more efficient if you had five parameters, too. But it also can handle these big open loop tapes and stuff like that. So there's less of a pressure to find concise and sort of compact representations. But yeah. But I want to emphasize that these are different ways of looking at it. But the update's the same. The algorithm's the same. The only difference is, am I following this local true gradient? Or am I following the gradient of the expected value of my cost?
So a toy example here to show some of the practical issues of some of these things. And please, if you have any more questions about any of these things, even if it seems tangential, just let me know. OK. So here, we have what we just talked about implemented. You can see how simple it is right here. I'm doing a for loop instead of that while-not-converged. But we have a loop through some number of iterations. I get my sigma and eta. So this is so I can make it dependent on the number of iterations or something like that. So I can make them decrease. I generate the noise according to a Gaussian distribution here. I get my reward. And the rest of this is just sort of plotting stuff and everything like that. And then here. Here's my update. And that's where I've got my baseline. Right? So you can see that this is just sort of that pseudocode right over there. There's nothing more. All these functions are just so that I can switch between different things really quickly. But it's very easy to implement. STUDENT: So you do the baseline. And then the equations, [INAUDIBLE]? JOHN W. ROBERTS: Right. I mean, this is what pops out when you look at that expectation. But as we said, the expected value of the noise is 0. So if I have a constant in there, if I subtract off that constant the same way, it's not going to affect the expected direction of my update at all. So the equations say you don't need a baseline. But a baseline has performance benefits. Right? So that's sort of another-- if you go these different directions, this one says, oh, you don't need a baseline. You can put one in and it helps. The other one's more natural to think about starting with a baseline. Then you can say, oh well, you actually don't need it. So again, they both have the same effect there. Maybe I should have been more clear. But yeah. So the problem that we're looking at is simply, we have a spline. So I make a spline. And then we're trying to figure out a curve that gets as close to that spline as possible. So it's really just sort of we're trying to fit a curve to this random spline. Right? Now, obviously we don't need to do anything fancy for this. But it's very visual. And so, I think it's probably a good example here. So we'll start. The cost function we're using is the square distance between our sort of guessed curve and the actual curve. And we'll try running it. All right. So on the left, this is our cost. Well, we have it formulated as our reward here. So it's negative. And then over here, the blue thing is the thing we want. The red is our nominal. And the green is our noise. You see our noise is pretty small. I started it at all zeros. You can see it climbing here. Right? And it is getting better. It's bouncing around a bit. Yeah. And so, this is actually parameterized as a spline. So it could be exactly correct if we let it. I mean, it's capable of representing the curve exactly. So the cost could be zero, effectively. Right? I think actually, if I let it run, it gets stuck in a local minimum, which is a good cautionary tale. Even if a problem is simple like this, you can still get stuck in local minima. So always be careful of that. But see, it's actually doing a reasonably good job. Right? Now you might be like, oh well, it has an ideal parameterization. It's the minimum number of parameters you need to have a spline that can represent it. And this was the same form of spline. So that's a pretty solid parameterization. But we can try a different one. This computer is showing its age.
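As a concrete version of the loop being described, here is a minimal sketch in Python (the course demo was MATLAB); the function names, shapes, and constants are all illustrative guesses, not the actual course code:

```python
import numpy as np

def stochastic_gradient_ascent(reward, alpha, n_iters=300, sigma=0.1, eta=0.01):
    """Weight-perturbation update on a black-box reward, with a baseline."""
    for k in range(n_iters):
        # sigma and eta could be made functions of k so they shrink over time
        z = sigma * np.random.randn(*alpha.shape)  # Gaussian exploration noise
        baseline = reward(alpha)                   # second evaluation as the baseline
        # move alpha along the noise direction, weighted by improvement over baseline
        alpha = alpha + eta * (reward(alpha + z) - baseline) * z
    return alpha
```

Swapping `reward(alpha)` for a running average of recent rewards gives the average baseline discussed later, which costs no extra evaluations per iteration.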
When I ran these tests on my desktop, it went pfft. And so, I had my number of iterations, like several thousand. And then when I realized it was going to do this, I was like oh. All right. So here. What parameterization do you want? Linear interpolated, nearest neighbor? A polynomial? All right. OK. [INAUDIBLE] There we go. Sorry about this. It's not done. Here we go. Here, you see it's capable of learning. It has this obviously inferior parameterization. It can't represent the right answer. But you can see it is getting closer here. It's not doing a bad job. Right? And the optimum here is not going to look just like the curve, because it has to sort of balance these competing things of going over and going under. So this is a good example of where your policy parameterization doesn't capture the true optimum. And so, when this thing converges to something, it maybe is the best. But it's only the best with respect to that way of parameterizing a policy. Right? So if you guessed that nearest neighbor is rich enough and you put it in here, you're going to see that it's not optimal. Right? You've limited yourself. And sometimes-- I mean, the other one with the spline, you put yourself in the regime where you have a very good parameterization. You get very close to the right answer. Here, we have a poor parameterization. And so, it's still going to learn. It's still going to converge. But it's not going to be as good. So with lin interp, we could probably do better. But here. Now, we'll go back to-- well, we'll keep it in nearest neighbor for now. You can see lin interp. All right. Now I said previously that this is partially nice because it's robust to noise. So here we're going to look at-- we're going to add Gaussian noise with a standard deviation of 20. Let's see. Here, it's not doing well. I probably have to reduce my sigma and eta. Let's see if we can get back to where it started. Yeah. STUDENT: I think that's the same thing as [INAUDIBLE]. JOHN W. ROBERTS: You multiply by sigma. The variance is squared. So this is not doing a great job. You see noise can break it. If we start with a bigger error though-- it's effectively the standard deviation. Also maybe our eta could have been too big. Because we're getting big measurements just from this noise. So with a big eta-- let's try reducing our eta. So it didn't get worse. Learning is going to be slow though, obviously. Because it has such a small signal. But at least we prevented it from doing that terrible thing. That's because it was getting these giant measurements of error and updating based on them. But the expected value will still go where you want it to go. If you did a bunch of updates, you'd be fine. But that just drags everything out. If we reduce that down to, let's say-- learning will be really slow. So let me turn off plotting. I'm sorry about this. I don't know what just happened. OK. So let's see how this does. Ah, this is smoothed. I ran a boxcar filter on it. But it's not necessarily learning quickly. But with that noise, it still learns. Now if we're going to be doing this, we might as well kick ourselves back to 20 and see if we can do anything. Yeah. So there, it's got enough noise that it's sort of been walking a bit much. Maybe. If you make your eta smaller and stuff like that, maybe make your sigma bigger so it gets a bigger signal when it measures the change, you might be able to be OK. But I mean, it still suffers from these things, definitely. Now, another point here.
I talked about cost functions a bit yesterday. Here, we have one where it's the square distance. Let's say we-- I think I'm going to give away whether it's going to do better or worse by having named it poor grad. But with this one, we're going to get a reward of one for each point that's within 0.05 of the desired point. So if you're far away, it's zero, and if you're close enough, it's one. Right? STUDENT: [INAUDIBLE]. JOHN W. ROBERTS: Right. Where did my-- Oh. So you see? This is a much shallower performance. It still looks like it's learning. And it is. Because I mean, even with the bad cost functions, it can learn. But now I'm going to show you what that looks like when you run it and actually look at the plotting. You see? Now we can give it our-- we're still giving it that linear interp. Let's see if we give it the better parameterization and see how that does. You can see it's not learning nearly as quickly as the other cost function did. Right? Let's get all our things back to make eta a bit bigger. See? So here the only thing that changed between our runs that looked pretty nice and this run right now is the cost function. You see, over the course of-- that's 300 something iterations-- it is getting up there. But if you remember the other one, it was fitting a lot nicer. Right? And the thing is that, if you look at the cost of the other one, even using this cost function, once you solve it, the other one would probably be lower as well. Do you see what I mean? If you solved it with the other one and then evaluated this cost function on the solution, that would be better than this after some number of iterations. And it's because learning based on this is hard. Right? You can see that this is very coarse. It's very easy just to lose one or gain one if you don't have good gradient information. And so, this is a bad cost function for learning. Right? The same task-- if this were the task you really cared about, formulating it with the other cost function and solving it, you'd actually probably do better than solving with this directly. Right? And that's because the gradients in this one are poor. It's not easy for it to find the direction it's supposed to move in. And it's even worse. I was talking before about how you could be in regions where you're getting zero cost or zero reward for a while or there's no gradient at all. So if we start ourselves with an error here, it's not getting any measurement. You see? Again, it's not changing at all because it can't measure anything. So if we go to-- I said the way to solve that, if you were stuck with it, could be to use a more violent kind of sampling. And there, you see we actually are able to get some reward. And it takes a while. But you see that the more violent sampling is at least getting out of that region where we don't get information. Right? Still not the best. But if you're stuck with that problem, that's one way to deal with it. But you can see that, with the other one where it doesn't have any regions like that-- if you go back to our nice grad-- sorry, I need to switch my eta. Yeah. Whoops. We can go back to-- See how much nicer that is? So there you go. So here you can see the parameterization can affect it a lot. The cost function can affect it a lot. The eta and sigma can affect it a lot. There's a lot of concerns. Right? And that's why that should make SNOPT and stuff more appealing to you. Because if you can compute those gradients using back prop, there's no guesswork in that. Right?
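For concreteness, the two reward functions being compared might look like this; the 0.05 tolerance is from the lecture, everything else is an illustrative guess:

```python
import numpy as np

def smooth_reward(y_hat, y_target):
    # Dense gradient signal everywhere: negative squared distance.
    return -np.sum((y_hat - y_target) ** 2)

def poor_grad_reward(y_hat, y_target, tol=0.05):
    # Sparse, plateau-heavy signal: one point per sample within tolerance.
    return np.sum(np.abs(y_hat - y_target) < tol)
```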
You give it to SNOPT, there's no parameters to set. I mean, I guess there's some convergence criteria and stuff. But you don't have to play these games. There's a lot of ways to get this wrong. Right? So-- STUDENT: So the reason that we introduce the [INAUDIBLE] policy was a way to explore [INAUDIBLE]? JOHN W. ROBERTS: For here, I mean they provide you the exploration. Yeah. There's certain situations-- we talk about gambling and stuff like that-- where, if there's an adversarial kind of component, stochasticity could be necessary for optimality sometimes. Right? But in these kind of cases where we have a dynamical system, I don't see how stochasticity in a deterministic system could make you more optimal than a deterministic policy. Right? That's because the system is deterministic and not trying to guess what you're doing and stuff. So that's pretty nice convergence right there. And you see, if we evaluated this solution with the poor-gradient cost function, I mean this is going to score a lot nicer. Right? And that's even though we didn't solve for it explicitly. It's just because the gradients of this one are a lot nicer. So you can solve it a lot more nicely. So remember that-- cost functions, even if it's not explicitly what you maybe most care about-- for example, we care about catching the perch. Right? But if you want to catch the perch, it could be that, by learning with something with a distance in it, maybe we'd learn better and actually end up catching the perch more than if our reward was just a one when we catch the perch. Right? So cost function design can matter a lot. Policy parameterization matters a lot. So hopefully you all feel really comfortable using stochastic gradient descent now in the situations where it's warranted. STUDENT: Can you talk a little bit about the baseline? How do you get the baseline? JOHN W. ROBERTS: Yeah. My baseline here. I think I have two implementations, let's see what we can do. So my baseline here, right now I'm using a second evaluation. Right? We can do an average baseline. I screwed something up. Yeah. I probably-- so that's gone crazy. But we'll try a smaller eta. There. So this is an average baseline. See, it's still improving. It's not doing as nicely. And it's also because we have to turn down the eta and stuff. I don't know what that error was. I'm not going to try to debug it right now. But you can see it's still learning. Now the real question is, what if we have no baseline? Right? I made a really small eta there because now it's not getting rid of anything. Yeah. You really have to turn it down even more. But that's a good example of why a good baseline is so critical. Right? I mean, you can see that all we did was change the baseline. The average baseline was working fine. And it didn't cost us any more-- the extra evaluations, we just got rid of. We put in the zero baseline. And you see how much it struggled there. So I mean, if you just do an average baseline, you get it for free. And it's still a huge advantage over what we're trying to do without any baseline at all. STUDENT: But we did-- I mean, it will eventually work out? Without the baseline it would just be a lot slower? JOHN W. ROBERTS: Yeah. I mean, it should. I mean, the thing is that, in practice, it can randomly walk around and get stuck in places. Like right here, you see this is that. It's a big run of improvement there. And it did better than it originally did. You see it's just walking all around. And so, yes.
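A sketch of the average-baseline variant being demoed, under the same illustrative assumptions as the earlier snippet (`reward_history` and the window of 20 are invented names, not the course code):

```python
# Running-average baseline: free, since it reuses rewards already measured.
baseline = np.mean(reward_history[-20:]) if reward_history else 0.0
r = reward(alpha + z)
alpha = alpha + eta * (r - baseline) * z
reward_history.append(r)
```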
I mean there's still this bias towards improvement. And it will work. And if you were to calculate a whole bunch of delta alphas and then add them all together, then you probably wouldn't walk in the wrong direction as often. But here's the thing. You have to remember that it's sort of always moving in the direction of the noise. Right? It's just moving more or less, depending on how good it is. And so, yes. This should still work. But in practice, a good baseline helps a lot, as you've seen. So hopefully, all those examples-- STUDENT: Is it also the case that that baseline will not hurt you? JOHN W. ROBERTS: Well, how bad? I mean, this could be considered bad. Right? A zero baseline did that. [INTERPOSING VOICES] STUDENT: Or can you actually have a baseline that gives you worse results than no baseline? JOHN W. ROBERTS: You probably could. Because you could just have it be a bigger error. Right? Just more in the wrong direction. STUDENT: Right. JOHN W. ROBERTS: So, yeah. But I mean that's what I'm saying. If you average over the past several trials like we did and then reduce the rate of your learning so that those averages are relatively representative and you don't move too much on your own-- STUDENT: [INAUDIBLE]. JOHN W. ROBERTS: Yeah. But I'm saying yeah. I mean, an average baseline is free. There's no extra policy evaluations. And it still helps a lot. So an average baseline makes sense. And I mean, in something like this where it's so cheap, if you want those big jumps and stuff like that, the average baseline can struggle when you take a big jump. Because the cost is so different once you've updated. Right? So your average isn't going to make a lot of sense anymore. So two evaluations are sometimes going to be better, rather than reducing your updates. Like when we're really far away from the right policy and we have a long way to move and we could move in all sorts of ways, that'd still be good. The average baseline-- if we turned everything down so the average baseline still works nicely, that probably wouldn't be as fast as doing two policy evaluations and doing big jumps. Right? Where every time, you just re-evaluate it. STUDENT: It just resets it. JOHN W. ROBERTS: And so, yeah. To move it smoothly with an average baseline, we may need more than twice as many iterations as you do with an ideal baseline. That's something that I've seen in practice. When you're doing these big updates, a real evaluation can mean fewer evaluations in the end. So anything else, or anything else you'd like to see run on the spline fitting problem right here? OK. No one has any questions? We're all good? I'm going to tell Russ that you understand stochastic gradient descent perfectly. And it's going to be extremely useful in your research in the problems to which it is well suited. All right. Great. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_20_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK. So welcome back. We talked last time about the problem of policy evaluation, which was, given I'm executing some policy pi, estimate the cost to go, right? And we showed that it was sort of trivial to do if you have a model. And if you don't have a model, you can still do it with the temporal difference learning class of algorithms, which is TD, which is in the title there, OK? And the temporal difference learning-- TD lambda, in particular-- was nice because it encapsulated all the algorithms we talked about last time. TD(0), if lambda is 0, was essentially the one-step bootstrapping idea, we said, where you use your current cost plus your expected value of all future costs as your new estimate of this. And the other limiting case was TD(1), which resulted in just Monte Carlo evaluation. And what I said quickly at the end of class is that-- we spent all our time last time on doing temporal difference learning just on the representation of solving a Markov chain, right? We did it for discrete state, discrete action, discrete time. And it's known that these algorithms converge to the correct estimate if you run enough times and your Markov chain is ergodic, if you keep visiting all the states, OK? But that's pretty limiting because in order to do a Markov chain analysis of even an acrobot or something, you have to discretize four state variables-- theta 1, theta 2, theta 1 dot, theta 2 dot. So you do 25-- I mean, things get big fast. It'd be 25 to the 4th power if you put 25 bins in each dimension, and that's not very many. So today, we want to make the tools more relevant, more powerful, by breaking this assumption that we're in a Markov chain, where we've discretized everything, and try to do more general-- try to do temporal difference learning on a more general class of functions that are more natively continuous, OK? So we're going to do it with function approximation. Now, to get there-- John did a little bit of function approximation in the reinforce lectures, so I want to basically pick up on the kind of examples he was showing you, and do a quick crash course on least squares function approximation, just to make sure that people are comfortable with that, and we'll build on that quickly. OK, so let's talk a little bit about least squares function approximation. OK, so the canonical example, which is the same thing that John showed you in class, is we want to approximate, or we want to estimate, some unknown function that takes input x and spits out output y. And you're given input/output data, which we can write something like-- you're given pairs of data, samples of the input and output. So you're allowed to query the function. Given this input, what output do you get? In the basic case, you're doing this passively. Someone just gives you a data set, and you're supposed to then do your best job at reconstructing F, OK? There are interesting cases that people look at where, if you're allowed to choose this, then how do you actively interrogate the system? How do you pick the x's to get the most information out?
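In symbols, the two limiting cases being reviewed look something like this, writing g for the one-step cost, gamma for the discount factor, and eta for the learning rate (notation assumed; the board is not in the transcript):

```latex
% TD(0): one-step bootstrapping
\hat{J}(s_n) \leftarrow \hat{J}(s_n) + \eta \left[ g(s_n) + \gamma \hat{J}(s_{n+1}) - \hat{J}(s_n) \right]
% TD(1): Monte Carlo evaluation, with the full sampled return as the target
\hat{J}(s_n) \leftarrow \hat{J}(s_n) + \eta \left[ \sum_{k \ge 0} \gamma^{k} g(s_{n+k}) - \hat{J}(s_n) \right]
```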
In the simple case, let's just say you're given a collection of input/output data, and you want to estimate F. So the standard way to do this is to write down-- there are many cost functions you could use, but the one most people use is a least squares cost function, where we're going to try to find a model where y hat is some F hat of x that depends on a parameter vector alpha. Just like in the policy gradient kind of case, I'm going to write down a set of functions where the actual function depends on both alpha and x, but we can think of it as taking x as input and generating an estimate y hat, OK? And you can formulate the problem with the least squares metric, where you're going to minimize, over alpha, the squared error between my data and my estimator. I actually have the estimator here. No big deal, but just to keep it consistent with my chicken scratch notes. OK? So if we can write down a class of functions we want to search over, then we can turn it into the standard optimization we've been doing throughout, where we just try to find the parameter vector alpha, which makes our estimates as close as possible, in the least squares sense, to the actual data, yeah? Where y hat here is F hat alpha of xi. OK, so why do we care about doing that? The methods used-- you already know optimization methods. We know a lot of ways that we could potentially try to solve this. So for instance, we could take the gradient of this with respect to alpha and do gradient descent, which is exactly what John was doing in the reinforce case, except the gradient was not calculated explicitly. It was estimated by sampling in the reinforce case, right? And in general, there are some cases-- there are some classes of functions that we can choose, and I'll make this more graphical and explicit in a second, where you can just solve it analytically, right? Same way in the optimal control derivations, there were some u's that we could just derive analytically and some that we had to do gradient descent for. So let's dig in now and get more specific and say, what kind of models do we want to use as candidates for these functions we're trying to fit? How would we write down an F hat, which depends on both alpha and x? The literature is just filled with people's different models for doing this. One of the most popular function classes of, let's say, 20 years ago, which is actually still popular today, is neural networks. So a lot of people believe that a good way to write down an arbitrary function is to take your inputs and try to do something that models a neuron. It adds things up. Adds up the different inputs, and then goes through a sigmoidal function and gives you an output y. Maybe just because that's what the brain sort of does, maybe because there's a lot of success stories from it, but a lot of people do this, right? And moreover, this is sort of the single layer neural network if you have lots of different input functions. And they potentially add up to y, for instance. And the parameters now are the weights of this connection. So your function might be some squashing function, this nonlinear function, applied to a weighted combination of the inputs, potentially plus some baseline or something like this, where this is tanh or some function like that that does a sigmoid. OK. And if you do this, then you can stack things up and make multiple-layer neural networks.
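Written out, that formulation is: given samples (x_i, y_i), find the parameters minimizing the squared error,

```latex
\min_{\alpha} \; \sum_i \left( y_i - \hat{F}_{\alpha}(x_i) \right)^2 .
```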
And people believe, and people certainly in the '80s very strongly believed, that this is a representative class of functions that maybe looked a little bit like what was going on in the brain. And people know that it's a general function approximator. If I have enough of these neurons and enough layers, then I can represent any function arbitrarily finely. So it's kind of a cool result to state. Using this thing that looks like the brain, if I stack up these elements, then I can represent any function with enough neurons. And if I want to solve a least squares optimization problem to make the input and output of this neural network work the same, then I can just solve this optimization problem with gradient descent, and that's roughly what everybody did in the '80s. I guess that's not quite fair, but-- right? And it works, actually. People still use these today to do face recognizers, right? They use it to-- Yann LeCun down at NYU has got this text recognizer that they use actually to scan-- to read checks at the bank, right? That's still neural network-based, OK? A lot of people got a lot of things to work with these neural networks. Gerry Tesauro, in the optimal control sense, got TD-Gammon-- the reinforcement learning system to play backgammon-- to work, where the board configurations were put in as inputs to the neural network, and the output was what move you should take, right? And actually, there was a value function that came out. He would even explicitly estimate the value function. But today, this is a strawman because I don't like neural networks. They're not a particularly nice least squares thing to optimize. We can do better these days. I just wanted to put it up there as an important element of the class, but not one we'll tend to use, OK? If you care about least squares function approximation, there are a lot of other choices you can have for your function f, which you can parameterize by alpha, which maps x to y. A lot of choices, OK? The ones that people in reinforcement learning tend to use most effectively are the linear function approximators. Linear here, meaning that they're linear in the parameters, but not necessarily linear in the inputs. So I want yi hat to be some linear combination of potentially non-linear basis functions of xi. OK. So the neural networks had the problem that they were rich because they have nonlinearities in them, and if you cascade nonlinearities, you can get arbitrarily rich things. But the parameters that change the function are buried deep inside the nonlinearities, OK? It turns out, if you come up with policy and function classes that can represent your problem nicely, where the parameters are all at the output, then life gets a lot better, OK? Why does life get a lot better? Well, because now I can take this least squares metric and solve it explicitly for alpha, OK? There's two things. I can represent arbitrary non-linear functions if I have the right basis set. Also, compute alpha explicitly. Don't have to do gradient descent. I can just find the optimum. Just to see how that goes-- you can probably see it, but it's so simple, we'll write it out here. If I want to minimize that squared error, I minimize over alpha, sum over i of yi minus alpha transpose phi of xi, squared. I'll write the vector form of that.
I could write this in even more vector form: if I put these y's into a big vector Y, and big phi is in the other direction, then I can write this guy as min over alpha of y minus phi alpha, norm squared. If you take the gradient with respect to alpha, what do you get? You get negative phi transpose times y minus phi alpha equals 0. Assuming that this thing is well-behaved, then you can just say alpha is phi transpose phi, inverse, times phi transpose y. Very straightforward least squares estimation with linear approximators, but let me now convince you just how rich that class of functions is, OK? OK. Here's the game. There's some function that exists. This is my actual f, OK? I don't know it. My algorithm that I'm about to do doesn't know it, but that's the original f I'm trying to estimate, OK? And let's say I don't get to experience f, but I get to sample from f, and every time I sample from f, I get some noise, right? Then maybe my input/output data might look something like this. This is just-- I took a bunch of random x's, I evaluated what that function was, and I added a little bit of random noise to it, OK? Now the question is, if I just have those red data points as input/output data, can I come up with an estimate, f hat, which reproduces the original f? And that's not many data points, right? OK. I could do it with a neural network, but without telling you all the details, it's not quite as elegant. We did it with reinforce. It took a little while to converge. If we do it with a linear function approximator, we can do it just in one shot, just like that. The first trick, though, is we have to pick our basis set, phi, OK? You've got lots of choices with phi. Some of the common ones-- let me do-- one common one is radial basis functions, where you assume phi of x is just some Gaussian, right? phi i of x is just a Gaussian function. The normalization doesn't matter. e to the minus, x minus m i squared, over sigma i. It's just a collection of Gaussians where the means are different for every basis function, OK? The variances could be different too. That doesn't matter. So that might look like this. If I made 10 different basis functions for this problem, and I centered them evenly across the space that I sampled from, then that would look like a reasonable basis set for function approximation. OK. And then the question is, can I take a linear combination of those 10 Gaussians and turn it into a pretty good estimate of the original function just from looking at the red points? OK? If I plug this equation in, then I can get a weighting on each of those individual phis. Alpha i is the weight on phi i, right? I'll do that with another click here. And it turns out it said, in order to represent this function, that one that was centered around 0 had better have some negative value. The one that was centered around 1 has got a pretty big positive value. This one's got a pretty big positive value and so on. So you can see all the same Gaussians are there. They're just weighted by a different amount, OK? And if you sum all those up to estimate y, then what do you get? Pretty good, right? A pretty sparse set of data points and a pretty sparse set of basis functions gets a really nice, accurate fit-- in one shot. No gradient descent or anything. This is consistent with what John and I have been saying: don't do reinforce unless you have to. Because if you know the function class and you can sample from the function, you can just explicitly get it. OK.
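Here is a minimal one-shot version of that radial basis function demo in Python. The test function, number of bases, and noise level are guesses at what the demo used, not the actual code:

```python
import numpy as np

def rbf_features(x, means, sigma=0.3):
    # phi_i(x) = exp(-(x - m_i)^2 / (2 sigma^2)), one column per basis center.
    return np.exp(-(x[:, None] - means[None, :]) ** 2 / (2 * sigma ** 2))

f = lambda x: np.sin(3 * x) * np.exp(-x)          # stand-in for the unknown f
x = np.random.uniform(0, 2, 30)
y = f(x) + 0.1 * np.random.randn(30)              # noisy input/output samples
means = np.linspace(0, 2, 10)                     # 10 evenly spaced basis centers

Phi = rbf_features(x, means)
alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # solves min ||y - Phi alpha||^2
y_hat = rbf_features(np.linspace(0, 2, 100), means) @ alpha  # the fitted curve
```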
The barycentric interpolators that we talked about before-- remember, we interpolated between elements of the grid with barycentric interpolators. Those are linear function approximators, where it turns out, you can think of that as having nonlinear basis functions, which look something like this, the whole thing rectified. Essentially, if I plot it in 2D here, then-- if I turn my radial basis functions off and do my barycentric interpolators, and I run that same demo, the barycentric interpolators are going to look like that, OK? If I want to linearly interpolate between all these neighboring points, one way to view it is I take my distance between the points, I interpolate. Another way to view that is actually, those are basis functions that look like tents. And the same thing is true in 2D, they're 2D tents, OK? It's the opposite way to think about it, but those barycentric interpolators we've been using the whole time are exactly linear function approximators. And again, I can sum these guys up, and I can make an approximation of the original non-linear function. This one, of course, has got to be piecewise linear, but that's OK. It did a pretty good job, considering it's piecewise linear and it's not a piecewise linear thing it's estimating. OK, you could do basis functions based on Fourier decompositions, for instance. You could do polynomial basis functions, where phi i of x is x to the i minus 1, let's say. Something like that. All these things work. I'll do the Fourier one here. Same function, noisy data. There's a Fourier basis over-- a spatial Fourier basis. I can add those up. I get very large coefficients. These tend to cancel themselves out a lot, but still, I get a pretty nice representation. OK. So this idea of using linear function approximators and then an exact least squares solution is a very powerful idea. You can represent very complicated functions potentially with this. This was not the way people tended to do things 15, 20 years ago. It really tends to be the way people do things now. In fact, machine learning got on this kick in statistical learning theory. People talk about kernel methods, and you might know these. I mean, the essential idea is there's two problems. I didn't say it, but sometimes people are trying to estimate scalar-- continuous-valued outputs. Sometimes, people are trying to do Boolean outputs. They would call that a classification problem. There's different forms of this problem. But in machine learning, people realized that you can often take a fairly low-dimensional problem, blow it up into a very high-dimensional space using lots of basis functions, for instance, lots of kernel functions, and then do linear least squares in the high-dimensional space, and that works a lot better than doing some sort of nonlinear gradient descent-based estimation in the low-dimensional space, OK? So that's really a trend that happened in machine learning. In reinforcement learning, it's the norm because essentially, when we have linear function approximators, we have some proofs that our optimal control algorithms are going to converge; otherwise, we have almost nothing. In fact, in the more nonlinear function approximator case, we have lots of examples where things like TD on a function approximator just don't converge. There are simple examples where they do exactly the wrong thing. Good.
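The tent bases can be written in the same template as the Gaussians. This small sketch assumes evenly spaced knots; combined with the least squares solve above, it reproduces linear interpolation exactly:

```python
def tent_features(x, knots):
    # phi_i(x) peaks at 1 over knot i and falls linearly to 0 at its neighbors,
    # so alpha @ phi(x) linearly interpolates the values stored at the knots.
    h = knots[1] - knots[0]                       # assumes evenly spaced knots
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)
```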
So just to convince you of one more basis function that is more relevant to this class-- that was just the crash course for people who haven't seen function approximators-- let me give you another example from system identification. Let's do nonlinear system identification as a least squares problem with linear function approximation. So let's say we've got our equations of motion, which come from this. It's been a little while since I wrote these equations. Jeez. Let's say it's for the pendulum or for the acrobot, you name it. And let's say we know the form of the equation, but we don't know the parameters. We don't know how much the masses are, how long the link lengths are. Normally you can measure those. But inertias, things like that, these can be harder to get right. So we can, for instance, run a bunch of trials with some dumb open loop controller. Just pick u randomly, let's say, and make the acrobot flail around a little bit and collect data that looks like this, right? q, q dot, even q double dot, and u at every instant in time. And this is going to be exactly like our input/output data in our least squares formulation, OK? And here's the very amazing observation. This one really surprised me when I first got it. The manipulator equations for random robot arms, they're actually linear in their parameters. Very non-linear functions. They do very complicated dynamics. But they tend to be, for most robot manipulators-- robotic arms on the factory floor, walking robots, things like that-- those equations actually turn out to be linear in the parameters. They're not linear in q, but they're linear in the parameters. All right. Take the simple pendulum. I can rewrite these dynamics as alpha transpose times theta double dot, theta dot, sine of theta equals u, where alpha is I, b, and mgl. OK, after you think about it a little bit, it turns out to be not all that surprising. In all our robot manipulators, you see sine thetas everywhere. You see sine squared thetas. You see sine cosines. You see all these things. You never see sine of l times theta or something like that, OK? It turns out, in these problems, that the parameters don't end up inside your nonlinearities. Yeah. AUDIENCE: Isn't it still nonlinear because the parameters are multiplied together? RUSS TEDRAKE: OK, good. So it's not linear in every individual parameter, but I can recast this as a linear optimization. So that's exactly right. So sometimes, you have to settle for groups of parameters. But those groups of parameters are always enough to rewrite your initial dynamics, OK? OK, so that actually makes it sound like sys ID is easy. If we have a complicated robot-- yeah, says Michael, who's been doing it for the last three months, six months, maybe. I don't know. (LAUGHS) You just shot daggers at me there. Yeah, so sys ID should be easy for robots that are well-behaved. It turns out, if you have saturations in your motor and stuff like that, it gets more complicated. Michael could tell you all about it. But if I have a simulation of a pendulum, let's say, then it should be trivial, and it is trivial. So let me just show you that. It turns out, it's exactly the same linear function approximation. These are my basis functions, right? These are my phi's. One is theta double dot, one is theta dot, one is sine theta. These are my coefficients, alpha. And how am I going to do it here? Where is it? Sys ID, yeah.
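In code, the pendulum version is a three-column least squares problem. This is a sketch assuming logged arrays `theta`, `thetad`, `thetadd`, and `u` from the random-torque trials; the names are hypothetical:

```python
import numpy as np

# I*thetadd + b*thetad + m*g*l*sin(theta) = u, stacked one row per sample.
Phi = np.column_stack([thetadd, thetad, np.sin(theta)])
alpha, *_ = np.linalg.lstsq(Phi, u, rcond=None)   # alpha = [I, b, m*g*l]
I_hat, b_hat, mgl_hat = alpha
```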
Just a few lines there of the Matlab code, and that includes running the tests, OK? So most of this is just setting up my simulator. I'm going to pick some small random actions, a random tape of torques to play out. I'm going to pick some random initial condition. I'm going to run my control system with just making that tape-- a zero-order hold on that tape, OK? And I'm going to collect the time x torque and x dot that came out of that simulation, and do exactly the math I showed you over here, where alpha is i, b, mgl-- in this case, it's the location of the center of mass-- and do my optimization like that and watch what happens. What's it called? OK, so I started from random initial conditions. I play a small random tape for 10 seconds out, OK? The actual parameters that I started with were this. The ones that are estimated after 10 seconds of running my simulation and doing least squares are that. It's pretty good, right? And that was corrupted by noise. Not a lot of noise, I guess, but noise. It's easy. System identification's easy, right? It's actually a very, very powerful observation that you can do system identification for these really hard systems just as a linear function approximation task. One shot. That's what makes adaptive control tick on a lot of these manipulators, right? It's a very fundamental observation. AUDIENCE: What would happen in the case you assigned theta to a theta? RUSS TEDRAKE: OK, it's not going to work as well, but let's-- it's running some random initial tape here. It's not catastrophically bad, actually. AUDIENCE: We have this small angle. RUSS TEDRAKE: Exactly. That one happened to be a small angle, right? Let's see if I get-- come on. That's a bigger one. That's worse, yeah? There's another way I can break it, just to anticipate here. Let me put it back before I forget. This was sine here, right? What if I don't put enough control torque in, OK? I put a note to myself, if I make a 0.1 here, then suddenly, I'm not putting in very much control torque. And why could that be a problem? AUDIENCE: Unprotected. RUSS TEDRAKE: What's that? AUDIENCE: You're unprotected. [INAUDIBLE] RUSS TEDRAKE: Yeah, I'm sort of not-- exactly. I'm not simulating the system beyond the noise that I've added, and that can break things. Let's see. So now it's pretty much just falling almost as a passive pendulum, and that breaks things more. Although this is a pretty easy problem. That doesn't horribly break anything. OK, that same code could have run for the acrobot, right? It couldn't have run for one of our airplanes because the aerodynamics tends to not be linear in the parameters. But rigid body dynamics tend to be linear in the parameters, right? Doing it on the acrobot's a little bit harder because you have to be more clever in stimulating all the dynamics with your one actuator. So there are a lot of good problems left in system identification. Designing sufficiently rich inputs to stimulate all your dynamics is one of the big ones. But function approximation and least squares is basically what you need to do to do system identification. AUDIENCE: So if you have, for example, [INAUDIBLE]---- RUSS TEDRAKE: Yeah. AUDIENCE: And then presumably, you have sine theta minus as one of the parts in your equations. So if you want to get something good, you, should put that as one of the features, right? You have sine alpha minus gamma there, and then [INAUDIBLE].. 
RUSS TEDRAKE: There's always a step where you have to look at your equations and pull out the proper basis function to describe that class of systems. Absolutely. So if there's a sine of theta 1 minus theta 2 floating around in there, it should be in one of your basis-- as one of your basis elements. AUDIENCE: If you want to do this with this system but you don't have the knowledge-- you think that it's linear, but you don't know the equation for it. RUSS TEDRAKE: Good. AUDIENCE: Then? RUSS TEDRAKE: Then maybe you should do radial basis functions or polynomials or some other basis set. I think a more common thing would be, imagine there's something I'm not modeling well in my pendulum. Let's say there's some nonlinear friction in the joints or something like that. A common thing that people would do would be to add in here some slop terms-- let's say radial basis functions or something, just in my standard linear function approximator-- and do that, OK? And then, now you've just added more representational power to this-- you've given it the basis functions which you believe to be true, but you also add in some slop terms, right? And this is-- I mean, so Slotine certainly teaches this and does this in his robots, right? For his adaptive controller, he puts those in to capture the terms that he didn't expect to show up. Yeah. AUDIENCE: How well does this work for things that aren't really smooth? Sometimes stick-slip can-- it seems like if you plot it, it's not really a continuous function. RUSS TEDRAKE: Continuous doesn't matter, right? If you were trying to fit a continuous basis set to a discontinuous function, then it'll only do as well as it can in the least squares sense. But you can represent arbitrary static functions of-- if it's a function of x, then it should be all right. I think the more common problem, maybe in a stick-slip kind of model, is that there's hidden state-- AUDIENCE: OK, yeah. RUSS TEDRAKE: --for instance, right? Maybe you don't know exactly what the state of the system is because there's some other-- and then you'd have to estimate that to put it into your basis set. OK, people feel OK with least squares estimation? Yeah? Good. Now let's see if we can put it to use to do what we promised at the beginning, which is temporal difference learning. It's gone, isn't it? OK. So if you remember-- and I hope the impression came across when I was talking about temporal difference learning-- we made a pretty complicated update, which was this weighted sum of one-step, two-step, three-step, n-step returns, and that's really probably the right way to think about it. But through some algebraic trick, you could do it with a very simple algorithm. So let me just remind you that that algorithm was-- OK. So if you remember, the picture we had last time was that I've got some Markov chain that I'm moving around. As I'm walking around this Markov chain, I'm trying to-- every time I visit a state, I want to update my estimate of the cost to go from being in that state, right? And I can do it with this very simple algorithm which keeps a decaying eligibility trace on each node. Every time I visit this node, it goes up. And then it starts decaying until the next time I visit and it goes up. The rate at which it decays is given by the discount factor in my optimal control problem and the lambda, which is the amount that I want to weight Monte Carlo-- long-term evaluations-- versus short-term evaluations, right?
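That tabular algorithm, in symbols (delta is the one-step temporal difference error; the indicator notation is assumed, not copied from the board):

```latex
\delta_n = g(s_n) + \gamma \hat{J}(s_{n+1}) - \hat{J}(s_n), \qquad
e_i \leftarrow \gamma \lambda\, e_i + \mathbf{1}[s_n = i], \qquad
\hat{J}(i) \leftarrow \hat{J}(i) + \eta\, \delta_n\, e_i \quad \text{for all } i .
```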
If lambda is 1, I'm going to let that settle much more slowly, and it's going to be making very long-term updates, and if lambda is 0, then it's just going to make an update based on the one-step prediction, OK? If I carry around this eligibility trace-- this would be ei as a function of time for every i-- and I do this very simple update, then there's this interpretation that j hat is doing something between Monte Carlo and one-step bootstrapping, depending on what lambda you pick. Right? OK, so let's say we don't have a discrete state and action system, but we now have, let's say, our barycentric interpolators on a pendulum or something like that, right? And let's say I take a trajectory through here, which I-- that was a bad trajectory for a phase space, but I take some trajectory through my state space, OK? Let's say I'm willing to discretize the system in time still. And let's say my value estimate is a linear function approximator here, which is this barycentric grid, OK? So at every point here, I'm going to say j hat is just a weighted combination of the four neighboring points weighted by the distance, right? Like I said, that actually is exactly of the form where I've got a scalar output, and I've just got-- you can think of this as being a tent, this being a basis function which has sort of a tent of a region of applicability right here, added with this one that has a tent of applicability here. And at every point, there's a-- at every cross-hair, there's a basis function that looks like a tent. Did you get that out of the previous one-- the previous picture? Linearly interpolating between those four points is the same as saying I've got four basis functions that are active, and each of them contributes in a way that diminishes away from its peak. OK. So now I've got a linear function approximator trying to estimate j hat. So how could I possibly do temporal difference learning-- yeah, John. AUDIENCE: Is that volumetric interpolation? Barycentric has smaller-- RUSS TEDRAKE: OK, good. So you can do barycentric in three or four, actually. AUDIENCE: And here, you have-- this would be-- you'd have three points, though, right, in this problem? RUSS TEDRAKE: This would still be called barycentric, but it's true the barycentric we did before was just, you take the neighboring three points. It's true. So actually, you can do barycentric with the three neighboring points or the four neighboring points or whatever. The volumetric is actually also called-- is also a barycentric interpolator. But you're right. I should have been more careful. The way we did barycentric before is we took the three closest points, not the four, right? Taking the four is also a good interpolator. It also is called a barycentric interpolator, actually, right? And the question is just how it grows with the state. Most people use the three closest points because then, in higher dimensions, it's always n plus 1 points instead of 2 to the n. Good, thank you. OK. So I guess, then, the domain of attraction is more like that or something, right? For every cross-hair, there's a-- if you're doing the triangle interpolation, then it's more like that. Yeah? So what do you think? So how can we make an analogy between going through discrete states and increasing eligibility? This eligibility is really-- I just need to remember that that state was relevant in my history of costs. Can you think of an analogy of how this function approximator might play into those equations?
What I want to get to with function approximation-- I want to get an update for alpha that has the same sort of form. This j, remember, was-- in the Markov chain case, that's a vector j, where each element of the vector was the cost to go estimate for the i-th node, right? Now my function approximator is-- again, I'm trying to estimate a vector alpha, but each alpha could potentially affect my estimate in multiple states. The power of function approximators is you don't have to consider every point individually. You get some generalization. One parameter affects multiple outputs. OK? So what could possibly make this tick? How would you do temporal difference learning with function approximators? Yeah. AUDIENCE: [INAUDIBLE] Before, we were basically doing it through basis functions that were deltas at every-- RUSS TEDRAKE: Good. AUDIENCE: --point. So you can break j hat up into something that is actually based on these other basis functions. RUSS TEDRAKE: So this should be-- whatever we come up with, this should hopefully be the limiting case where, if my basis functions were delta functions, you'd like to get back to that? I think that's, unfortunately, going to be a-- AUDIENCE: OK. RUSS TEDRAKE: No, no. Well, it just happens I'm going to be taking gradients, so-- but yeah, that's very-- that's a very nice observation, actually. What's that? AUDIENCE: I think you can't erase because that [INAUDIBLE] Kronecker delta, so it's-- RUSS TEDRAKE: Exactly, yeah. AUDIENCE: By delta function, he meant infinity or 1? RUSS TEDRAKE: That's what John was pointing out. So he thinks it's going to be a Kronecker delta because it's going to have-- I guess it could be 1, yeah? AUDIENCE: So if it is 1, it's actually-- it would be mapping from a tabular representation to the actual function approximation. RUSS TEDRAKE: Yeah. AUDIENCE: But having features that are just 1, and you say-- RUSS TEDRAKE: No, I think that's-- I think it's a very nice observation. If we think of each feature as having height 1 here and domain nothing, right, then that should be the limiting case where we get our Markov chain. I think that's right. AUDIENCE: So if you just changed to taking, at each step, a weighted sum of nodes-- RUSS TEDRAKE: Yeah. AUDIENCE: --then mapping that weighted sum, [INAUDIBLE]. RUSS TEDRAKE: Yeah, I mean, that's effectively right. So the way that you can do it turns out to be a little bit-- again, a little bit nice and magical, OK? So it turns out-- so there's a couple of ways to think about this. One of them is when I'm going through the Markov chain, every time I get here, I'm going to think forward about-- I'm going to do a one-step prediction, I'm going to do a two-step prediction, I'm going to do a three-step prediction. And what happens is that these eligibility traces are just this trick which makes all that work. If I just remember where I've been, then I can make an update, which is the same as looking forward. Instead, I'm looking back in time with my eligibility traces. In the function approximator case, doing exactly what you said turns out to be equivalent to remembering how important that parameter alpha i was in your recent history. OK? Does that make sense? If I remember the contribution that alpha-- let's say one of the elements of alpha, alpha i, alpha j-- made in my recent history, then I can update alpha with the same sort of trick that this eligibility trace worked for. AUDIENCE: Would you do some kind of decaying exponential? RUSS TEDRAKE: Yeah, yeah.
AUDIENCE: That's kind of what it's doing there. RUSS TEDRAKE: This is exactly what it's doing here. And here, we had the special case where, when I visited the state, I got a 1, and when I didn't visit it, I got nothing. It just decayed. And what we're going to get now is an eligibility trace on alpha. Do you have it? Yeah? AUDIENCE: Well, I mean, does this thing in brackets need to be changed at all? RUSS TEDRAKE: Excellent. AUDIENCE: It seems like delta-- RUSS TEDRAKE: The eligibility trace changes. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Perfect, right? This is gamma. It's going to do a decaying exponential still. You want to forget things. This thing is now-- the new eligibility trace here is the same size as alpha, not the number of nodes in the system. And the amount that I'm going to credit each alpha with is the gradient of my estimate. Does that sort of make sense? Yeah? In the case of linear function approximators, the gradient of this is just phi of x. Partial j, partial alpha. Just gets rid of that alpha. OK? So I remember the relative contribution of each of my alphas in the recent past, and then, based on this e, I make the same update that I did before. Just copy this down. But my e is now the eligibility on alpha. AUDIENCE: [INAUDIBLE] based on that one to the one that gets visited and zero to the others. RUSS TEDRAKE: Which is what? What's that? Yeah? OK. So that is an intuitively correct algorithm, I think. So it seems pretty natural, using the eligibility argument, that this could work. Proving that it works turns out to be a pain. It's not actually an update like we would normally have. There's a special case. So in one case, when lambda equals 1, then you can actually show that this TD update with linear function approximation is doing gradient descent on the squared difference between my j lambda-- what I call j lambda-- and my other j. OK? So in the lambda equals 1 case, it actually is doing gradient descent on the Monte Carlo error. In every other setting of lambda, the algorithm is not a standard optimization framework of going down in some steepest descent kind of approach. But it tends-- in some cases, it works faster because it uses previous information in a heuristic sort of way, but it does it very effectively. And people have done the work. In fact, the work was done upstairs by Tsitsiklis and Van Roy in '97 or something like this to prove that, for all lambdas between 0 and 1 with linear function approximators, this update will go to the true value estimate as your system runs. OK? AUDIENCE: Did they mention that the [INAUDIBLE]? RUSS TEDRAKE: They should. AUDIENCE: [INAUDIBLE] I guess different algorithms can reach a different result, and lambda equals 1 only converges to the actual thing that you're looking for. As you start varying lambda, you converge toward different things, but it's still converging [INAUDIBLE]. RUSS TEDRAKE: That's possible, but-- AUDIENCE: Different lambdas? RUSS TEDRAKE: I mean-- AUDIENCE: Once your [INAUDIBLE] is correct, doesn't lambda not matter anymore? Because it's going to project to the future-- RUSS TEDRAKE: Yeah, the only stationary point should be the true cost to go function. So if it converges-- I'd be surprised if that's true. AUDIENCE: Your lambda [INAUDIBLE]-- RUSS TEDRAKE: I mean, Alborz has done the stuff, so you might know it better than I do. But I was under the impression it converges to the true value estimate.
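Putting those pieces together, here is a minimal sketch of TD(lambda) with a linear approximator; every name and constant is an illustrative guess:

```python
import numpy as np

def td_lambda_linear(features, episode, alpha, gamma=0.99, lam=0.7, eta=0.01):
    """TD(lambda) for J_hat(x) = alpha . phi(x); `episode` yields (x, cost, x_next)."""
    e = np.zeros_like(alpha)                 # eligibility on alpha, not on states
    for x, g, x_next in episode:
        phi, phi_next = features(x), features(x_next)
        delta = g + gamma * alpha @ phi_next - alpha @ phi  # TD error
        e = gamma * lam * e + phi            # grad of J_hat w.r.t. alpha is phi(x)
        alpha = alpha + eta * delta * e
    return alpha
```

With Kronecker-delta features (a 1 on the visited state, 0 elsewhere), this reduces to the tabular update, which is the limiting case raised in the exchange above.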
I mean, it's all based on these contraction mappings, and I think the stationary point of the contraction is the true value. If you find out differently, then tell us next class, definitely. OK. So what just happened? So I made a trivial change to the algorithm. In fact, in some ways, it looks almost easier. It's one less line. And it now suddenly works with linear function approximators. So I don't have to discretize my state space. I can cover my state space with radial basis functions. That might be as painful, by the way, as discretizing the state space if you have to put a lot of radial basis functions down. But I could also do it with more complicated-- I could do it with polynomials, I could do it with Fourier bases, and potentially, things that work with fewer basis functions over a very large domain. And now suddenly, the tools scale up to a little bit higher dimensional systems, as high as you can imagine. As creative as your basis set allows you to be. The complaint about these algorithms is that they're inefficient in their use of data. So if you think-- certainly when we're shooting planes and they break, or if we're using a walking robot and it falls down a lot, then every piece of data should be treated with reverence, right? This is hard-earned data. These algorithms, as written like this, basically, every time they visit the data, they make some small incremental update, then throw it away and keep moving. And so, by no means are they efficient in data. They are efficient only in the sense that they use previous estimates of j hat to bootstrap, but there's no sense of remembering all your data and reusing it. So some people have thought about replaying trajectories. So if you store all your trajectories-- let's say I ran my plane 10 times, well, I'll remember all that data and I'll just run the TD update over and over again over those same 10 trajectories. That's a reasonable thing to do. But again, with linear function approximators, you can do better, right? You can do LSTD, least squares temporal difference learning. The argument basically goes like this. So in learning, people like to distinguish between online versus batch updates. So the online-- this is an online update. I took my piece of data in. I immediately changed alpha and then I spat it out, right? You could imagine a "batch" update, which collected a bunch of these trajectories. Let's say it ran a whole trajectory with the plane. Stop. Now process that trajectory, make a change in alpha, and then repeat. So instead of doing it at every single time step, let's collect a little bit of data and then make an update, OK? So let's just write down the batch update for this system. If I ran a trajectory and then made an update, that update would look like this, just by applying that rule a bunch of times but without changing alpha in between, right? So another way to say it is I'm going to hold alpha fixed, collect up all the changes I want to make to alpha, and then, at the end of the run, make one change to alpha, OK? Well, then that change, if you do it in the batch mode, is just going to be a sum of all these guys. That's a scalar times a vector. So I can reorder it without any pain. Oops. I got to put that inside. Let me write out the intermediate step here. j alpha at k plus 1 minus j alpha at k. Agree with that? That's the batch update of that.
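[As a sketch of the batch variant just described, hold alpha fixed over the whole run, accumulate every per-step change, and apply them once at the end. Same illustrative trajectory format and placeholder names as the earlier snippet.]

```python
import numpy as np

def batch_td_update(phi, trajectory, alpha, eta=0.01, gamma=0.95, lam=0.7):
    """'Batch' TD(lambda): alpha is frozen while the per-step changes are
    summed over the trajectory, then applied in one step at the end."""
    e = np.zeros_like(alpha)
    total_change = np.zeros_like(alpha)
    for x, g, x_next in trajectory:
        e = gamma * lam * e + phi(x)
        delta = g + gamma * phi(x_next) @ alpha - phi(x) @ alpha  # alpha fixed
        total_change += eta * delta * e                           # collect it up
    return alpha + total_change
```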
I just collect them all up, I sum them over k here, all my time steps. That's what my update's going to look like. If I write this a little bit-- if I break into my function approximator there, I can write it like this, the sum over k. Oh, boy, I forgot my ek over here. Sorry about that. There's an ek there too, right? j is just phi times alpha, so I'm going to break into that. If I collect those terms, then I get something we can think about again. Sorry, a times alpha, where this is b and this guy here is a. In other words, the long-term behavior of this system, if I collect the updates and then make them like this, well, this looks like a low-pass filter, really, that's going to this solution. Yeah. The steady state is alpha equals a inverse b, where you do that inverse carefully. SVD or something. OK. So that observation led to this least squares temporal difference learning algorithm, which said instead of chewing up every data point and spitting it out, remember everything that you have experienced in the past. And instead of doing this sort of incremental step that goes epsilon towards the steady state, you can solve for that steady state every time. If you collect b and a, you can just collect that with a bunch of data, go ahead and jump right to the solution. So LSTD, what I think a lot of people would consider the state of the art if you just want to do policy evaluation, builds up b and a as you run. Compute alpha as a inverse b whenever you need a new estimate of alpha, OK? Why do you want to do that? It's more efficient with your data. You remember and replay your past data seamlessly. You don't have to guess some silly learning rate. And it doesn't even depend on your initial conditions anymore, your initial guess at alpha. OK? So suppose I were given a robot that's currently performing some feedback controller. Let's say it's stochastic. It's a stochastic system. There's noise and everything like that. And I wanted to just evaluate how well it performed on this cost function, some cost function that someone gave me. If [INAUDIBLE] says, I got this robot, I wanted it to optimize this cost function. How is it doing? Tell me how it's doing. I would look at the state space. I'd try to come up with a linear tiling of radial basis functions, or some linear function approximator over that state space, and I'd start running trials and I'd keep those trials and store up these matrices, and do a least squares temporal difference update. And this result basically tells me that it's going to do the same thing as the TD. It's going to do it potentially more effectively because it's more efficient with data, and it's going to do it without having to guess an initial vector or having some learning rate.
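[A minimal LSTD sketch, under the same illustrative assumptions as the earlier snippets (linear features phi, a list of (state, cost, next state) transitions). The names A and b follow the board; numpy.linalg.pinv plays the role of "doing that inverse carefully," since it is an SVD-based pseudoinverse.]

```python
import numpy as np

def lstd(phi, transitions, n_features, gamma=0.95, lam=0.7):
    """Least squares temporal difference learning with linear features.

    Accumulates A and b from data, then jumps straight to the steady state
    alpha = A^{-1} b instead of stepping incrementally toward it.
    """
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    e = np.zeros(n_features)                  # eligibility trace over features
    for x, g, x_next in transitions:
        e = gamma * lam * e + phi(x)
        A += np.outer(e, phi(x) - gamma * phi(x_next))
        b += e * g
    # pinv is an SVD-based pseudoinverse: the "careful" inverse.
    return np.linalg.pinv(A) @ b
```

[A and b can keep accumulating across trials, which is why past data is reused "seamlessly" and why no learning rate or initial guess at alpha is needed.]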
And you can look at Alborz's paper to figure out how to do that, which is an approximation of the true LSTD, but a pretty good one in your experiments, right? And much more efficient, right? Yeah, or there's a lot of time-- I mean, I think, in walking, it turns out I would do it probably once per step or something. There's natural discretizations. But there's nothing to say, if you have the computational horsepower, that you couldn't just do this every step too. It's just more expensive than doing an incremental version. OK? So function approximation's very powerful. This is what's going to take our tabular-based ideas and our Markov chain ideas and make them scale to the real world. This is the first year I did function approximation first, in the temporal difference learning case, but of course it's relevant for the policy gradient world too, right? John showed you different function approximators that were doing REINFORCE. Instead of parameterizing my value function as a function approximator, I could have also just directly parameterized my feedback controller as a function approximator, and done gradient descent if I had a model, or REINFORCE if I didn't have a model. Function approximation is supposed to be the savior of reinforcement learning. The problem is there's limited results. I mean, linear function approximation is really the only case we have strong results for, for most of these settings. So I'm going to talk about doing the policy stuff with function approximation on Thursday, and then it culminates in actor-critic, where you do both function approximation in the policy and the value function simultaneously. That'll happen sometime next week. Excellent. I'll see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_11_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, welcome back. At the end of last time, we said we'd built up a pretty good arsenal of optimal control tools. Now, we're going to start pushing them in exciting directions, OK? So many of you know I work on walking robots. It's one of my passions. I think it's one of clear unsolved problems in robotics, right? We have so many things we can do with robotic arms. Relatively, we have almost nothing that we can do with walking robots. Walking down the street is still a challenge, OK? So in my mind, one of the essential problems in robotics, it also happens to be the thing that animals do just incredibly well. And even the really dumb animals, you know, locomote very, very, very well. So some people might even argue it's a predecessor to higher cognitive function. We'll see. OK, so there's lots of good reasons why walking robots are tough. There's mechanical problems, of course. Suddenly, if you're a walking robot, you have to start carrying your actuators. So you didn't have to do that as a robotic arm. And that's created lots of interesting work in actuator technologies. How do you carry an actuator that can produce torque accurately, let's say? If you have to carry its own weight, that's been a challenging problem. There's lots of interesting work there. But I will stand by. I think that the reason we don't have walking robots today is not actuators. It's not power density. These are problems, but I think the fundamental challenge is in control. OK. And we are now well-suited to make progress on that control problem. OK. So based on what we've learned so far, walking robots aren't that different than the Acrobot and the CartPole. There's two things that are different dynamically speaking about a walking robot. The first is we have to think about limit cycles instead of just trajectories. And we're going to talk for some time about that. The first thing was limit cycles. The second thing is impacts. Walking robots, every time their feet hit the ground, we have to start dealing with impact. So dealing with periodic motions and dealing with impacts are the two big technical hurdles. And they're not even that big necessarily, but they're the technical hurdles of taking the techniques we've already developed and applying them to walking robots. And impacts more generally might be called sort of hybrid dynamics, where you have a mixing of discrete time and continuous time dynamics. OK. So before I launch into some of the models of walking, which we'll get to pretty quick, let's do a toy example to start thinking about limit cycles and limit cycle stability. OK. So you can get your head around limit cycle stability with some very simple oscillator type circuits. My favorite simple oscillator is the Van der Pol oscillator. OK. So it's a second order system. These are the standard equations for Van der Pol oscillator where mu is some parameter you can choose. You can look at it pretty quickly and start thinking about it almost like a very simple spring mass damper system. The only complicated part is it's got state dependent nonlinear damping. 
So think about it almost like a spring mass system, but it's got nonlinear damping, state dependent damping. And in particular, if you look at it, you can think of it as having, if the magnitude of x is less than 1, then is that positive damping or negative damping? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: There's a sign change in the word damping somewhere in there, too. It's going to add energy to the system if x is less than 1, which I think is negative damping typically. And if x is greater than 1, it's going to remove energy. It's going to dissipate energy, right? So it's not too hard to believe that you're going to end up with something that oscillates. If it's adding energy for small x's and removing energy for large X's. It's not too surprising maybe that you get this thing going. And it starts oscillating. I might as well show you the real trajectories instead of drawing them badly. So if I do a phase plot of this system, which is two-dimensional-- it's got a walking robot in the middle. Sorry-- close figures. There we go. OK. And yeah, that's good. That's excellent. OK. So these are three plots, the three trajectories from three different initial conditions, one that started off with some large negative initial conditions in both theta and theta dot-- or x and x dot, sorry. And damping took the energy out and started going to this stable oscillation. From another very large trajectory, it did roughly the same thing, went to that same basic oscillation. And then even from very small initial conditions near the origin, it'll swing out and start going to these same characteristic oscillation in phase space. So this is exactly the picture you want to have in your head of a stable limit cycle behavior. It's a periodic motion. When I'm away from that motion, I return to it, right? This is exactly what we want to capture by saying a stable limit cycle. It turns out it's not sort of trivial to write down what you mean by a stability in the limit cycle context. We'll see more about that in a second. But first, if you just sort of-- let's just use our imagine and talk about some asymptotic stability to this orbit. Does anybody know it from this system whether this system is globally stable? Why is it not globally stable? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Good. I think it's true that it's globally stable except for a set of measures 0 is the way you say it, right? Because, darn it, if you start at the origin, it's not going to get there. It'll never start moving from the origin. Any other initial conditions will find their way back to that stable limit cycle. So why is it not trivial to say what I mean by a stable limit cycle? Well, let's plot that same system in the time domain, same initial conditions. Everything is the same. Here we go. Same exact trajectories, you can see they all go to their characteristic oscillation pretty quickly, in fact, right? But you can see it's crystal clear here that the trajectories never synchronize themselves in time. And that's not too surprising. So you're on the limit cycle. If there's two trajectories that are a different place in phase on the limit cycle, there's nothing that's forcing them to come together. So they'll continue to be apart in time. Such a simple thing, but actually that complicates our ability to define stability. We can't talk about trajectories asymptotically converging anymore because they don't. They asymptotically converge to some manifold, but that's harder to describe. 
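[For reference, the standard Van der Pol equation being simulated here is x_ddot = mu*(1 - x^2)*x_dot - x, which is exactly the sign-changing damping just described. A minimal sketch reproducing the three-initial-condition phase plot, with mu = 1 chosen arbitrarily:]

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

MU = 1.0  # the free parameter; any mu > 0 gives a stable limit cycle

def van_der_pol(t, s):
    # x_ddot = mu*(1 - x^2)*x_dot - x: negative damping (energy pumped in)
    # for |x| < 1, positive damping (energy removed) for |x| > 1.
    x, xdot = s
    return [xdot, MU * (1.0 - x**2) * xdot - x]

# Three initial conditions: large negative, large positive, near the origin.
for s0 in [(-3.0, -3.0), (3.0, 3.0), (0.1, 0.0)]:
    sol = solve_ivp(van_der_pol, (0.0, 30.0), s0, max_step=0.01)
    plt.plot(sol.y[0], sol.y[1])        # phase plot: all three spiral onto
plt.xlabel("x"); plt.ylabel("x dot")    # the same closed orbit
plt.show()
```

[Plotting sol.y[0] against sol.t instead shows the point made next: the trajectories reach the same orbit but never synchronize themselves in time.]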
As such, we actually have relatively few good tools for describing limit cycle stability. The standard tool-- well, actually let me say a few things more about limit cycles before I tell you the standard tool. But there's actually a few things that are interesting, just good general knowledge, about limit cycles. It turns out, if you have a closed region in a two-dimensional state space, meaning trajectories that are inside this region never leave this region, and if that region of state space has no fixed points, then it's got a closed orbit. There's got to be something. It doesn't have to be a stable limit cycle, but it has to repeat itself. If you stay inside a region forever and you don't go to a fixed point, then you have to have a closed orbit. That's the Poincaré-Bendixson theorem. And it's also interesting to note that, if you have a dynamical system described by a potential function, like any system that's described by a Lyapunov function, that cannot have a limit cycle. Anything that's uniformly going down a gradient can't have a limit cycle dynamic. So a lot of Lyapunov theory doesn't apply directly. And another thing is that people don't call every closed orbit a limit cycle-- these are just random factoids about limit cycles before we go on, but they're good culture here. If you have a closed orbit in state space, if it's stable, people call it a limit cycle. If it's unstable, people call it a limit cycle. If it's marginally stable, people don't really use the word limit cycle. So for instance, on the undamped pendulum, the marginally stable orbits, there's no stability properties on these. People would typically not use the word limit cycle to describe those closed orbits in the phase space. But anything that has the sort of converging or diverging property around a single trajectory-- so what's another good way to say it? There's some wording that I liked: the trajectories have to be isolated in space, basically. If these closed orbits are not isolated in space, if there's not something sort of separating between them, then people don't use the word limit cycle to describe it. OK, so we can't talk about asymptotic convergence of trajectories on limit cycles. We can't use vanilla Lyapunov functions to talk about stability of limit cycles. So what do people do, right? There's a standard trick that most people play. And that's the idea of Poincaré sections and Poincaré maps. How many people have seen Poincaré maps before? OK. Here's the idea. So I've got some orbit in state space. This is x, x dot of my Van der Pol oscillator. Instead of talking about convergence to some orbit, I'm going to define a particular surface in this state space. Let's say I define-- that's a bad choice. I've got 40 colors here. I had to pick the one that was-- that's not a very good choice either. Green's got to work for me, yeah? They're all fairly toned down. OK, let's pick a particular section in state space. Instead of talking about the convergence of this continuous dynamics to this orbit, let's just look at the dynamics of that limit cycle every time it crosses this section. OK. This is called the surface of section, often referred to as just some big S. And a surface of section, if it's the case that all trajectories that leave the surface of section eventually return to the surface of section-- I'm going to write this down-- then defining a surface of section allows you to convert the continuous time stability of a limit cycle into the discrete time stability of a fixed point.
So we know a lot about defining stability for fixed points. The big idea is we're going to turn limit cycle stability into fixed point stability. And the name of the technique is the Poincaré map technique, let's say. OK. So I've got some continuous dynamical system. Let's make it simple. Let's say there's no control to speak of, no time dependence. Let's just keep the simplest sort of analysis. I've got some continuous time dynamical system. Let me call tc the time of crossing the surface of section. So I'll say tcn is n-th crossing of S. That allows me to define a discrete time system, which I'm going to call xp of n. xp of n is x at tc of n. I'm just trying to be very careful about my notation. I'm going to use p for any time the vector x I'm talking about lives on the Poincaré map. I'm trying to be careful about the transition from continuous time to discrete time. And the result is a discrete time map, which I typically call P for Poincaré map. It can be linear or non-linear, right? In general, it's non-linear for us. Defining that surface of section defines for me a map which says, if I cross the surface of section at the n-th time at state xp, then the next time I cross it's going to be xp n plus 1, right? So if I'm on the Van der Pol oscillator idea, let's say xpn is something here. I've defined my surface of section, in this case, to be the place where x equals 0. x dot is greater than 0 just to define only half of that line there. It's a line segment. This thing we saw is going to go around like this. It's going to get a little closer to the nominal trajectory. It's going to come back here. So this mapping is what I'm capturing at p. If I go around again, I'll come here, yeah? The key idea is that, if that map exists for all states for any time on this map-- I go around, and I can get back to the map-- then knowing that you have stability on this map that points in xp, if xp as it evolves in time goes to a fixed point, then that must also imply that you go to a limit cycle fixed point in continuous domain. Nothing else is allowed to happen. If I'm on this state in my continuous time world, then there's no time dependent functions here. There's no other state that I'm not looking at. If I'm on this trajectory, I'm going to stay on that trajectory. Of course, a disturbance or something could knock me off, but we're just looking at the nominal system. So fixed point stability on the map implies limit cycle stability in continuous time, OK? Awesome. Yeah, I don't really need that yet. If I work out the map for the Van der Pol oscillator, let me draw this map now as xp at n-th step going to xp at the n plus 1-th step. At 0, it doesn't go anywhere. So that's an easy point to draw. I know it crosses 0. I'm being a little loose here. So what I really care about is, since I know the position is 0, really what I'm drawing is sort of the velocity at that, let me call it, xp dot. Should I? Well, xpn has two elements the way I've defined it here. Only one of them has anything interesting going on. I'm not going to write a dot on top of my discrete time system just because I don't like that notation. But I'm going to draw the only interesting variable of the two-dimensional object xn. If I look at my velocity at the n-th crossing going to the velocity of the n plus 1 crossing, it turns out, if it's in here, I know from simulation there that it gets bigger every time. Getting bigger on this discrete map means that it's greater than the line of slope 1. Let me draw the line of slope 1 here. 
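[Numerically, the map P is usually computed by event detection: integrate the continuous dynamics and record the state each time it pierces the surface of section in one direction. A sketch using scipy's event mechanism, with the section x = 0, x_dot > 0 as in the lecture; the helper names are mine.]

```python
import numpy as np
from scipy.integrate import solve_ivp

def section(t, s):
    return s[0]            # zero when x = 0
section.direction = 1      # count only crossings with x_dot > 0
section.terminal = False

def poincare_samples(f, s0, t_max=200.0):
    """Integrate sdot = f(t, s) and return the states x_p[n] at the
    successive crossings of the surface of section."""
    sol = solve_ivp(f, (0.0, t_max), s0, events=section, max_step=0.01)
    return sol.y_events[0]          # one row per crossing

# e.g. xp = poincare_samples(van_der_pol, (0.1, 0.0))
# Since x = 0 on the section, xp[:, 1] (x dot at each crossing) is the only
# interesting coordinate; plotting xp[1:, 1] against xp[:-1, 1] gives the map P.
```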
On the Van der Pol oscillator, it starts off getting bigger, bigger, bigger. And then at some point, it crosses here and goes off like this. I think I can even tell it to generate that, so you believe me, Poincaré. And I did it on top of my other plot again. So that's the Poincaré map now of this system. The blue line, this is xpk versus xpk plus 1. And that red line is the line of slope 1. This is a luxury of the sort of low-dimensional systems here. If I have a map from one variable to another variable, then just like the flows on a line, this is a discrete time map. But I can actually analyze everything graphically. So what do I know? I already told you by writing that equation that, if I look at my Poincaré map there, the place where it crosses the line of slope 1 is a special point. What is it? AUDIENCE: Fixed point. RUSS TEDRAKE: It's a fixed point, yeah. I've actually got another fixed point here at this, the origin. Just by inspection, I can tell you about the stability of that fixed point. So when I'm over here, I'm getting bigger. When I'm over here, I'm getting smaller. So I can actually, just by inspection, tell you that that's going to be a stable fixed point. We have to be a little careful about it. I'm going to show you. In fact, the right way to think about it is the staircase technique. OK. So if I want to follow the dynamics of this iterated map, how many people see the staircase pictures? Yeah. OK, so let's say I start in my system, and xp is some small number here. And the right way to think about that is the next time I'm going to go to this new value I can sort of graphically copy that down to my next xp by drawing a line to the line of slope 1. And then I can, from there, evaluate where the next xp is going to be. And now it goes right into the fixed point. That should have been a little flatter than I ended up making it. And then on this side, similarly, I can got to here right into the fixed point. Yes? AUDIENCE: Could you kind of emulate a flow diagram that plotted xpn plus 1 minus xpn? And then instead of reaching the line of slope of 1 when it crosses the-- RUSS TEDRAKE: Absolutely, sure. Sure, you could subtract out xpn from there and get it in terms of 0s. Yeah. The thing that might be misleading about that, you have to be careful. Because discrete time systems can do something that continuous time systems can't do-- is they can jump, right? So I could actually define something that then looked a little benign, but is actually unstable. If I had something that looks like this, what does that do? You go here, here. And you know, well, I guess it goes down to here. That one might actually catch-- shoot, I didn't make my point. You could limit cycle here like this or something, right? I did a more dramatic plot to make my point. It's pretty funny that it's all unitless. So the fact that I gave myself that problem is ridiculous. You can imagine starting off here, getting bigger and bigger, and off you go, right? OK, so what are the conditions for stability, local stability on that fixed point? You all know what they are, but just now you can maybe see them graphically by my exaggerated example. What are the conditions on the local stability? What does the slope of that line have to be? AUDIENCE: It has to be less than 1? RUSS TEDRAKE: The magnitude has to be less than 1, right? It could be sloped like this, sloped like this. What matters is that it's not sloped past the line. 
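[The staircase construction and the slope condition are both easy to check numerically for a scalar return map. A sketch, where P is whatever scalar map you have, analytic or sampled from simulation:]

```python
def iterate_map(P, x0, n=30):
    """The staircase, numerically: iterate x[n+1] = P(x[n]) from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(P(xs[-1]))
    return xs

def map_slope(P, x_star, h=1e-6):
    """Finite-difference slope of the return map at a fixed point x_star.
    Local stability requires |slope| < 1; in higher dimensions, all
    eigenvalues of dP/dx must lie inside the unit circle."""
    return (P(x_star + h) - P(x_star - h)) / (2.0 * h)
```

[Iterating from nearby initial conditions either funnels into the fixed point or walks away from it, exactly as the staircase picture shows.]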
The other, the simpler thing to draw for instability would be if my plot had gone like this, that it's clearly unstable, right? Because then I'd go like this, this, this. Then off I go to infinity. But it also happens if I'm unstable like this because I oscillate myself off to infinity. If I look at the gradient of p with respect to x at some fixed point, the eigenvalues of that had better have magnitude less than 1 for it to be stable. Yeah. AUDIENCE: So if you're linearized, then you're basically at that point. RUSS TEDRAKE: So that's a statement about local stability. One of the cool things about these maps, actually, is people have stronger-- certainly, graphically I did sort of a stronger thing. I said I could just look at this and know that it was stable. People actually know quite a bit about the properties of these maps. Just again for culture, it turns out, if you have a unimodal map like this, if the gradient doesn't change sign and if it's locally stable, then you can infer global stability. There's a citation in the notes where you can see the careful result of that. Some of Koditschek's hopping work exploited that result. So people do know some things, more strong global properties of these maps. But the trivial one to think about is this local stability. OK, so we know a bit now about how to talk about whether a limit cycle system is stable. It's a little more subtle than fixed point stability, but not too bad. The hard part comes if you can't find a mapping where it always returns to itself. And then you have problems, right? But in this case, you can say strong things about the stability. OK. Now, we're set up to talk about walking, so my favorite topic here. Let me actually say historically, I mean, people have obviously been interested in walking for a long, long time. Some people say that the first sort of serious study of legged locomotion was by a photographer. What's his name? AUDIENCE: Muybridge. RUSS TEDRAKE: Muybridge, good. Yeah. So a photographer had a sort of a fast shutter speed camera. And he had animals run in front of graph paper, basically. And he tried to win a bet, I think, was the story. Because people weren't sure, when horses were doing a certain gait-- I think it was a gallop. They asked, when the horse gallops, does he ever actually get all four feet off the ground? And the guy who sort of proved that was a photographer, who took a picture of all four feet coming off the ground. And you can get his books, which are these fantastic books of all sorts of animals, just pictures and pictures and pictures of all sorts of animals, sort of almost strobe photography of these animals walking in front of graph paper. You get elephants, and camels, and humans, and babies, and horses, and dogs. You name it. So if you ever sort of are curious what the gait of a certain animal looks like, you can go to Muybridge. And he also sort of started defining this language for describing the gaits of different animals, like which feet are on the ground at which time and stuff like that. So this was at the turn of the century, the late 1800s, early 1900s. This photographer sort of started our field. More recently, though, the first sort of serious modeling, whose impact sort of remains today, I think, was McMahon at Harvard. He was a biomechanist.
And he observed that EMG in your leg, if you look at the muscle activity, electromyographic activity in your leg, so just the electrical signal sent to your muscle, that your stance leg has very little EMG-- sorry, a lot of EMG. And your swing leg has very little EMG, with the exception of, right when you begin swing, there's a lot of EMG. And right towards the end of swing, there's a lot of EMG activity. So your muscles are doing seemingly a lot of work at the beginning and end of swing, and your stance leg is doing a lot of work. But the surprising thing was that inside of swing phase your muscles almost turn off. So McMahon produced for the world the ballistic walker model. He was at Harvard. And he basically did some modeling of a three-link pendulum, something like that, and showed that, even with a straight leg on your stance leg, if you let your swing leg just go, you can get very natural gaits coming out, very reasonable kinematic descriptions of what human walking looks like, hence the name ballistic walker. This is a passive phase. So one of the dominant ideas that came out of that work was the sort of notion that people talk about now as walking by vaulting. If you want a first order approximation of what walking is, they said think about your stance leg as a rod. And you just vault over your stance leg. And in fact, people's centers of mass do tend to go up over the course of a stance and then back down. So the dominant theory really was that you could think about this as a stiff leg and then an almost passive swing phase. Now, I remember last year's dynamic walking meeting. Basically, half the talks were saying walking is not vaulting anymore. It's just in the last year or so people are really, really changing their mind about this, including you guys in the Media Lab, who are talking a lot about the role of compliance in the stance leg and energy storage during walking, which was left out of these initial models. But I think, if you want a 0th order model of walking, thinking about your stance leg as straight is not a bad way to start. And the other thing that happened because of this work was this idea of passive dynamic walking. So this guy Tad McGeer-- so McMahon's model was just a model of the swing phase. It didn't talk about stability. It didn't even talk about ground contact. It wasn't sort of a periodic motion. It was saying something about the kinematics of swing phase. And this guy Tad McGeer, who's actually an aero engineer, he works on UAVs now, came in and turned the robotics world on its head by building a bunch of machines that had no motor, no controller, and walked down a small ramp basically using these ballistic walking kind of ideas. So Tad's a good guy. Let him describe. I've got a video of him talking about passive walking here. [VIDEO PLAYBACK] - This familiar toy is a passive dynamic walker. [INAUDIBLE] all the way. And right at the start, it settles into a steady walking cycle sustained by an entirely passive interaction of gravity and inertia. This machine is also a passive dynamic walker. For that matter, I may be a passive dynamic walker. As you'll see, our gaits are quite similar. [INAUDIBLE] with the analysis, and by intuition [INAUDIBLE] exclusively [INAUDIBLE]. [LAUGHTER] God. Once you establish [INAUDIBLE], the reason the machine failed was a problem called [INAUDIBLE]. So for later experiments, we went to a set of mechanical patches. And the next sequence shows our first trial with those. I guess that worked. [END VIDEO PLAYBACK] RUSS TEDRAKE: That's a big deal.
That started a lot of research. AUDIENCE: Is that 1990? RUSS TEDRAKE: 1990, exactly. [VIDEO PLAYBACK] - This familiar toy-- RUSS TEDRAKE: Whoops, sorry. - [INAUDIBLE] today is that we have all four debouncers [INAUDIBLE]. RUSS TEDRAKE: The knee latches the last time are the debouncers of today. - [INAUDIBLE] RUSS TEDRAKE: No motor, no controller, that's just falling down a ramp. That's beautiful. I love this video, because it shows you sort of the honest capabilities of these things. There's a few outtakes. [LAUGHTER] That's what it's like working with them. That was pretty benign. It's a pretty robust phenomenon, right? So this is-- [LAUGHTER] --he's changing the ramp angle quite a bit and getting these really graceful motions out. This guy walks sideways for some reason. It's really good, yeah? [END VIDEO PLAYBACK] You got to realize this was at a time when the walking robots of the world were going like this, moving like that. This guy says, I got a robot with no motor, no controller. Look what it can do, yeah? Really amazing-- and he was an aerospace engineer, go figure. This is stealing the show. That's the next bit. So these passive walking ideas have really had a big impact on the world of walking robots. They had it in two waves, I guess. Everybody knew that a purely passive walker could walk down a small slope. And then people have been working for-- the second part of that video was a video. Finally, someone had put motors on it to make it walk on the flat because it took a surprisingly long time to make that leap. So now, people, I think, don't just think they're a nice party trick, but they actually may be useful for designer of real walking machines. AUDIENCE: Why [INAUDIBLE]? RUSS TEDRAKE: Well, because they don't know how to underactuate a control. I think, quite honestly, that's my answer. Yeah. We don't have any good control ideas. I mean, what you want to do here is just push a little bit of energy into the system in the right way. We don't know how to formulate those control problems when the systems are so non-linear and discontinuous and everything. Plus, I think the people that are good at building those things maybe are not the same people that are going at control. It takes a special talent to turn enough screws. So you should know that the way that the knee stayed attached-- how do you think the knee state attached in that? Why did the knee not bend during the stance phase? AUDIENCE: Magnetic or something? RUSS TEDRAKE: It wasn't magnetic, but that's close. You can imagine, maybe, the curvature of the foot doing it, but that's not it. They're suction cups, yeah? It goes. And the suction cups don't just stick. You poke a hole in the back of the suction cup. And you tune the leakage of the suction, right? So that at the right time, it pops off and goes like this. This is not for the weak of heart. There's a lot of tuning that goes into these things. So our most dramatic failure, I think, was we were doing a point foot version of McGeer's walker. And for the point foot to work and get this nice swing without the curved feet, we had to put a lot of mass on the upper leg, not a lot on the lower leg. So we had these little fiberglass lower legs and big heavy upper legs. And our failure was not just sort of falling down. It was exploding the lower leg into pieces-- [LAUGHTER] --right, and completely crashing. John, you remember that? Vanessa's walker just kind of-- I'm the one that pushed it, too. I had a reputation in lab for a little while. 
I had just broken Rick's airplane. And I went around and turned around and broke Vanessa's walker. And they're all on tape. [LAUGHTER] So McGeer, apart from building these beautiful machines, gave us this beautiful model to think about the essential dynamics of walking. The ballistic walker told us about the swing phase, but that's actually sort of secondary. What really matters is this passive interaction between inertia and gravity, as Tad said. And it turns out you can understand that if you just think about this little system called the rimless wheel. So I'm going to put it on a slope of angle gamma. The model is going to be a simple pendulum with a mass at the end, but a massless leg. We'll define theta to be this, the angle from vertical. Now, I sort of despise the fact that my thetas-- I'm not using the right-hand rule here. But otherwise, you end up walking backwards with negative velocity. So this is the way it is. So it's almost like the simple pendulum except my coordinate system's from the top and theta's sort of reversed. And in addition to this simple pendulum, I've got a few extra legs that are all massless. And they're separated by 2 alpha, an angle of 2 alpha. There's actually reasonably good pictures of this, cartoons of this, in the notes. So you can think about this as a bicycle wheel where somebody took the rim off and just left the spokes. So what happens if you take this bicycle wheel and you put it at the top of a small ramp and you give it a push? AUDIENCE: It falls over. RUSS TEDRAKE: It falls over. OK. Let's say it's constrained to be in the plane. Yeah, that was a fair answer. So we made an additional assumption that it stays upright. Then what happens? AUDIENCE: It starts rolling down. RUSS TEDRAKE: Starts rolling down-- OK. Is it going to roll faster and faster forever? AUDIENCE: No. RUSS TEDRAKE: At some point, losses are going to catch up with it. In the bicycle wheel, you might think of it as rolling friction. In this model, we're going to do it by just a bit of impact, which are very real. But that's actually the only thing we need. We're not going to model any damping in the system. So we can really think about this system as-- let me list all my assumptions carefully here. We're going to assume that the foot, so to speak, the stance foot is a pin joint. So I'm going to artificially assume that that foot doesn't come off the ground. As soon as it catches the ground, it turns into a pin joint on the ground. Instantaneous transfer for support-- which means, as soon as this leg comes around, hits the ground, it becomes the pin joint. And this guy is free. I only think about one ground contact at a time. I assume that is an instantaneous transfer support. The limit of standing on two legs is just model that's switching back and forth really fast between those two. And it works out OK. And then I'm going to assume that the collisions with the ground are inelastic and impulsive. Impulsive means instantaneous. Inelastic means that all energy going into the ground is lost to the ground. There's no bouncing. And I need that, so to make sure that there's always exactly one foot on the ground in this model. As soon as the thing bounces, we built rimless wheels with two wheels. That way they don't fall down in practice, just like McGeer's biped had four legs. They call it a four-legged biped. [LAUGHTER] But that way you don't have to worry about falling over sideways. And on the real rimless wheel, you put it on a ramp. It bounces like crazy. 
So this is just a model, but it actually captures the essential dynamics. OK. So now, you see why it was good to know everything we know about pendula. How is this thing going to work? I'll show you the answer real quick. So here's my rimless wheel. That was a simulation. So I'm going to start it. Every time I hit Go, it's going to start it with some random initial velocity forward uphill or downhill. You name it. That's going up hill, just enough energy, turned around and came back down the hill. Start speeding up a little bit-- but actually quickly settles into a stable forward rolling speed where the losses from collision exactly balance the energy that's gained from converting potential to kinetic as you go down the hill. That is the essential idea in walking, in certainly passive walking, the conversion of potential energy and the balance with dissipation in the ground. So this is starting with a slow initial speed. And it speeds up a little bit, goes to that rolling speed. You started really fast. It actually slows down pretty quick and goes to that same rolling speed. Can you see where I'm going with this? We've got a stable limit cycle. It's periodic, clearly. It's not going to be stable in the trajectory sense, but it's going to be stable in the limit cycle since. Just I want to see the one other thing that can happen. The cool thing about simulating rimless wheels is you get all these aliasing effects and stuff, right? So if it's going uphill too fast, it looks like it's going downhill. There you go, uphill, uphill. OK, see what happens. Yeah, OK. So that one just ran out of energy and stayed still. That's still my instantaneous transfer of support. The simulation is going da, da, da, da, da, da, changing ground contacts infinitely fast. There are serious simulation people that say that's the right way to handle even-- some say that you should simulate your rigid body-- you should simulate a pin joint by constantly modeling impacts at the joints. This guy Brian Mirtich, who was at MERL for a while, did some pretty convincing work in that world. So that's the answer we're going to get. Now, can we see how that works on the phase portrait? If I look at theta and theta dot, my robot here, there's a couple key angles. Theta is only going to live between-- so it turns out to be gamma minus alpha and gamma plus alpha. See if you can work that out for yourself. So this angle is 2 alpha. When this leg hits the ground, that angle has to be alpha plus gamma. And in the other direction, it's gamma minus alpha. Those are the conditions where this leg is about to change into a different coordinate system. And I forgot to say, when this foot changes and it hits the ground, I quickly redefine theta to be the theta around the new pin joint. I just reset my coordinate system. So there's a couple of key places in this plot. Somewhere there's some small gamma, and that bisects two very important lines here, which are theta equals gamma minus alpha and theta equals gamma plus alpha. My coordinate system is 0 around the unstable fixed point. That's what happened when I flipped my coordinate system to the top. So if I want to superimpose the phase portrait of the pendulum, I said 0 is going to be the unstable fixed point. So I get the i is going out this way. That's the homoclinic orbits if you remember. And the smaller orbits come in and do this. And the ones up here do this. Does anybody have the foresight to sort of jump ahead and tell me where my limit cycle is going to live on my picture here? 
AUDIENCE: Above. RUSS TEDRAKE: Above, OK. What's it going to do? AUDIENCE: It's going to withdraw from the same place in each [INAUDIBLE]. RUSS TEDRAKE: Awesome, very good. Yeah, very good. OK. So here's everything we just saw in the rimless wheel. If I'm rolling fast, if I start with some initial velocity rolling fast, say, I'm high up on this line, then what happens? I'm going to ride this guy over here. It turns out the collision model, I'll tell you the collision model in a second. But the inelastic collision at the base essentially takes out a fraction of your velocity at every step. It's a linear fraction of your velocity at every step. So I take my coordinates over to here. And then it's going to reset to a velocity over here, but with some fraction of that velocity dissipated. And I get a new trajectory like this. It comes over to something like this. And at some point, that fraction exactly balances. So because this thing is not symmetric, when I start here, I end up a little bit higher with a little bit more velocity at the end than when I started. And I lose a little bit of velocity in my impact. And at some point, those things exactly balance. And I end up with a stable limit cycle that looks like that, right? What else can happen? If I start with too little energy, the whole thing runs out of energy and stands still. What's the critical energy level? AUDIENCE: Homoclinic orbit? RUSS TEDRAKE: The homoclinic orbit, right? If I'm just above the homoclinic orbit, it actually speeds up and goes to that same nominal limit cycle. But if I'm just below the homoclinic orbit, then I never actually make it across to this thing. I just end up going, bonk, over to here. And then it actually comes in-- if you have to think about it, trust me-- like this bonk, like this, and just ends up running out of energy and standing still. AUDIENCE: Could you start just above it, but then the collision knocks it down [INAUDIBLE]?? RUSS TEDRAKE: It depends on the parameters. But in the parameter regime I'm in, which I think is the standard one you want to think about, in order for it to have some stable-- so basically, what you're talking about is a parameter regime where you have no stable fixed point, no stable rolling limit cycle. AUDIENCE: Assuming your velocity-- RUSS TEDRAKE: If I'm just epsilon above here and I go down, then I'm only going to stand still. AUDIENCE: If your collision model was that you always lose a particular fraction of your velocity? RUSS TEDRAKE: Yes. AUDIENCE: OK. RUSS TEDRAKE: So in the case where we have a rolling fixed point, it has to be that epsilon above here actually gets you the other thing. The one maybe you're thinking about, too, is that there's a place over here where, if I start with this energy, I actually come out with a little less energy. And I can actually then go and get caught in either this standing fixed point, or I can transition over into this fixed point and go up and get caught. OK. There's an easier way to see all this. I want you to see it in the phase portrait because we thought a lot about it. The easier way to see what all this stuff is by defining my surface of section and looking at the Poincaré map. So let's do that. Now, in the rimless wheel, because it can be going in either direction, I'm going to define a little bit of odd surface of section. I'm going to define it-- it sure feels like I got a lot of colors, but, I don't know, some sort of red and purple mix. Let me just make it really thick here. Good lord. 
Let me define this to be my surface of section and this also to be part of my surface of section. So any time I have an impact with the ground, I'm going to use that as my return map. AUDIENCE: So is it just before or just after? RUSS TEDRAKE: I do just after, but it doesn't matter. As long as you're consistent, it doesn't matter. That's a little bit better. Here's the analytical return map of the rimless wheel. I should say the rimless wheel is the one walking system that we actually understand completely. And I mean, when I talk about these Poincaré maps, I can write them down analytically. Because I can tell you, if I'm in this velocity, where am I going to end up because I just have to do a first integral on the pendulum dynamics. It's energy conserving. We're all happy. So I can tell you everything about the rimless wheel. I could tell you about where it's fixed points lie. The gray is actually the basins of attraction of the fixed points. I can tell you everything. There is no other walking model that I know of that I can say that about. This is what you should aspire to get with all of your walking machines. So the red line is the line of slope 1. The blue line is the analytical solution to the first return map with the surface of section defined the way I did. This is post-collision, all right? So it turns out my lines just got thick enough that it's a little hard to see, but there's a blue line here. It says, if I'm going too fast, I'm slowing down. And it'll do its little staircase right down to this black fixed point just above this green line, which is the rolling fixed point on the Poincaré map. This is theta dot at the n-th step versus theta dot at the n plus 1-th step. So there's a rolling fixed point. And it's actually stable. The blue line goes just above the red line before it goes down. There's also a standing fixed point with a velocity at 0, which is the standing on two legs oscillating back and forth. That one's stable, too. And then when you're rolling backwards uphill, it always says I'm speeding up. Every time when I'm rolling up the hill, this is negative initial velocities. Every time, I'm going to have a little bit more forward velocity on every step. But because this thing jumps discontinuously in that staircase fashion, it's actually a little non-trivial about whether it's going to end up in the rolling fixed point or the standing fixed point. Those gray stripes are actually the basin of attraction of the rolling fixed point. So it turns out, if you roll uphill and you stop with just enough energy-- you don't quite get to the top, and then you roll back down-- then you go into the other fixed point. But if you end up at the top of the hill pretty much losing energy like this, then you're going to stop. But I can tell you everything about this return map. What's the green dashed lines? The green dashed lines are places where my return map are ill-defined. What could that possibly be? AUDIENCE: You need balance. RUSS TEDRAKE: Yes, very good, right? There is some initial velocity for which I never return to my return map. What's that? It's if I'm on the homoclinic orbit, right? It never happens practice, no big deal. But there is a set of measure 0, if you start it with just the wrong velocity, it'll go up and sit like this and never come back. So those are ill-defined points in the Poincaré map, but everything else we completely know. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Sorry. What's that? 
AUDIENCE: We said, like, if you [INAUDIBLE] a velocity of 0, then it would stand still there. RUSS TEDRAKE: Yes? AUDIENCE: Why would there be that line? RUSS TEDRAKE: Oh. Why is there a green line on 0? AUDIENCE: It's on one stripe. AUDIENCE: It's a green line at theta 0, not theta dot 0. RUSS TEDRAKE: It's theta dot 0. So why do I have to draw a green line there? I don't know why I have to draw a green line there. I don't remember that green line being there. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah, but I don't think I should have drawn-- I think that's actually just a MATLAB typo. There's no reason why that should be there. I don't know why that's there. It's not in the notes. There's no reason why that should be there. I don't know why that's there. Yeah, John? AUDIENCE: Homoclinic orbit doesn't separate the rolling ones on the right side of the upper right and the base plot. Your homoclinic orbit is the boundary on the left side, but not on the right side, right? This is a-- RUSS TEDRAKE: That's not true. AUDIENCE: The ones on the right side roll when they go back. But they're inside, but they bounce across to the left. [INAUDIBLE], right? RUSS TEDRAKE: OK, good. So there's actually green lines every fraction of the way. Is that what you're asking about? AUDIENCE: I was actually-- RUSS TEDRAKE: Even the first one, this one? AUDIENCE: Because the homoclinic orbit is not the boundary or what marks the stable rolling gates in the [INAUDIBLE] plot, right? It is on left side maybe, but on the right side you can be inside the homoclinic orbit. RUSS TEDRAKE: Oh, that's correct. That's correct. So I can end up here with enough energy to transfer over. That's right. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Only here is at the-- AUDIENCE: Yeah. RUSS TEDRAKE: That's good. Yes, that's true. This is the only place where it's the defining part. The transition across from over here is more complicated, which is why we end up with these trajectories, which get started up going the other way. Yeah. And it's also the case that there's actually stripes of these green lines going up. At every boundary between the gray and the white, there's a place where I could have rolled 10 steps and ended up balanced perfectly and then go back. So I skipped over the impact dynamics. The impact dynamics are good to know, but we can avoid them. But just to give you the intuition, you can define the energy loss at impact by just looking at this rimless wheel immediately before the impact here. And it's got some momentum that it's moving along this pin joint. So it's got to have some momentum like that. Immediately after the collision, it's going to have a momentum that's orthogonal to this new pin joint. So any energy that's going into the ground is lost. So if you model an angular momentum conservation around this point, the angular momentum around this, which is the component of this, here remains. And the other part is lost. It turns out that, in the rimless wheel, you get that as a simple mapping saying, theta dot after a collision, if I use that notation, turns out to be cosine of 2 alpha times theta dot just before the collision. It turns out to be a trivial mapping. So you now know the essential dynamics of walking. This is actually most of what you need to know if you think about a walking robot. The dominant source of energy loss is not friction in the joints. So that's pretty small. I mean, you could make a robot with friction in the joints. 
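[Putting the collision rule just derived together with energy conservation during the lossless stance phase gives the whole step-to-step map. This is a sketch in the lecture's coordinates (theta measured from vertical, stance from gamma - alpha to gamma + alpha, assuming gamma < alpha so the wheel must vault over vertical); the parameter values and function names are mine.]

```python
import numpy as np

def rimless_wheel_map(w, alpha=np.pi / 8, gamma=0.08, g=9.81, l=1.0):
    """Post-collision stance-leg rate at step n+1, given that at step n.

    w     : theta_dot just after the previous collision (w > 0 rolls downhill)
    alpha : half the inter-leg angle (spokes are 2*alpha apart)
    gamma : ramp slope; assumes gamma < alpha
    Returns None if the wheel doesn't make it over the vertical.
    """
    # Energy needed to vault over theta = 0, starting from theta = gamma - alpha:
    barrier = 2.0 * g / l * (1.0 - np.cos(gamma - alpha))
    if w**2 < barrier:
        return None  # runs out of energy and ends up standing still
    # Lossless stance phase from theta = gamma - alpha to theta = gamma + alpha:
    w_minus_sq = w**2 + 2.0 * g / l * (np.cos(gamma - alpha) - np.cos(gamma + alpha))
    # Inelastic, impulsive collision: theta_dot+ = cos(2*alpha) * theta_dot-.
    return np.cos(2.0 * alpha) * np.sqrt(w_minus_sq)
```

[Iterating this map from different initial speeds reproduces the simulations shown earlier: fast starts slow down, slow starts with enough energy speed up, and everything funnels to the rolling fixed point where the collision loss exactly balances the energy gained going downhill.]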
But the dominant dynamics of our walking are dominated by impact at the ground even for human walking, right? So most people think that the places you spend the most energy to recover from the ground collision are push off with the toe. And then you're mostly passive through. Ernesto, correct me if the theories have changed. You're mostly passive through. And then there's one other time where you spend a lot of energy, not a lot, but some energy. We actually decelerate your leg before knee strike maybe just to protect your kneecaps or maybe just to not walk like this or something. I don't know. But you actually spend a little bit of energy doing negative work to slow down that swing leg before it collides with the kneecap. And that's it. And the collision dominates so much to the point where, if you want to make an energy efficient robot, Andy Ruina says, you can make it four times more efficient if you put this push off just before the collision. Because if you're coming down and you suddenly use your push off to redirect your energy just before that collision, you can reduce the velocity at collision. And he says that makes a four time improvement in the energy efficiency of your robot. So they've been trying to build walking robots that sense the ground just before by having a little limit switch sticking down in front of the foot and push off right before ground contact. That collision dynamics, what the rimless wheel tells us about, is the dominant essential dynamics of walking. It's the only loss in the system, but it's enough to get us to stable limit cycle out. The mathematicians like this. Phil Holmes like this because he was surprised initially that and that these piecewise-- this is a energy conserving system inside each cycle. It's a piecewise holonomic system they call it. And it can have stable limit cycles. So people have gone further than this now. We can do a little bit better than a rimless wheel. If you understand the rimless wheel and now you want to make it a walking robot, all you have to do is take away the rigid joined at the hip, make it a pin joint. To model the swing leg dynamics, you can add some smaller masses in the leg. So it's now exactly the Acrobot dynamics. You can imagine putting a torque at the hip. And that's called the compass gait model for obvious reasons. And wouldn't you know it? If you take a compass gait, put it on top of a small hill, give it a push, only losses come from collisions with the ground. Otherwise, it's a, even if I have no torque at the hip, passive stance phase. This thing walks stably down a small hill. AUDIENCE: Are you wanting the legs to swing free between [INAUDIBLE]?? RUSS TEDRAKE: Everything is just 0 torque. The only trick I'm playing is that I'm ignoring the foot scuff here, which, if you want to build the robot, you've got to build a little retracting toe to make your leg a little bit shorter when it goes through. But, otherwise, you can do this. And we built these. Now, this guy has the rimless wheel actually. I showed you the basins of attraction. The two fixed points, the rolling and the standing together, are globally attractive except for those sets of measure 0 where you end up standing on your toes. That's not at all true of the compass gait. The compass gait's pretty frail. You have to find initial conditions that put it into this walking gait. I can show you many simulations of it falling down catastrophically. But within some initial conditions, you get a nice stable limit cycle out. 
And not surprisingly, you can go farther. Put one more set of links in there. This is point feet walking with knees. We modeled a collision at the kneecap and with the ground, but the dynamics are the same. The limit cycles that you see in these guys look a little bit more appealing. AUDIENCE: [INAUDIBLE] of the knee [INAUDIBLE]. RUSS TEDRAKE: It does not work if the knees have-- you need something that keeps it a stance leg. I mean, you might get a totally different gait out. But that could be-- AUDIENCE: Well, I meant just during the swing. But I guess you have to-- RUSS TEDRAKE: Somehow you have to end up with a straight leg. So if it went past and came back, if you were exceptionally lucky in that regard, you might do it. But kneecaps are a good thing, yeah? You'd feel a little silly walking down the street without kneecaps. So the dynamics of this, if you sort of project it onto a single plane, you get this nice long limit cycle of the stance leg. You get an impact, which instantaneously decreases your velocity. Then you get your stance leg cycle, which is actually basically a snapshot of your rimless wheel cycle. And then you get another discrete impact which loses energy on the other side. And you get this nice limit cycle for the compass gait. This is the theta of leg one. And it's the stance leg here, and it's the swing leg here. And trajectories that neighbor this converge to it just like you'd expect. The kneed walker has another collision that happens somewhere in the middle of your swing, but it all works out. OK, so I've given you enough, I think, to understand the basic limit cycle dynamics and the basic stability property of these things. Next time, we'll talk a bit about how you'd actually apply some of our previous methods, for instance, to this, now that we've got a limit cycle and you've got hybrid dynamics with this switching, if you want to do things like compute the gradients to do optimization. And so you have to be a little careful about those impacts and how they affect things, but it's not too hard. And we'll tell you a bit about the state of the art in walking control. They're all sort of based on this model. And then Thursday's the midterm. I promise to have a practice exam for you. It might not be tonight, unfortunately. But it'll be very soon, tomorrow or early Saturday morning, I promise. Yeah. OK. And feel free to ask me any questions. Some of you have been finding me to run project ideas by. That's a great idea. If you want to talk about your project, let me know. Let John know. See you next time. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_6_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: Welcome back. Today we're going to break the mold of these one degree of freedom systems in a major way. We're going to go to two degree of freedom systems. There are two sort of canonical underactuated systems. In fact, I'd say that the field of underactuated robotics spends most of its time thinking about a few canonical underactuated problems. The two we're going to talk about today are the Acrobot and the cart-pole systems. The Acrobot is a two-link robotic arm. The only thing special about it-- it's operating in the vertical plane. The only thing that makes it special is that somebody forgot to put a motor at the shoulder. So you've got a motor at the elbow and no motor at the shoulder. It's called the Acrobot because you can-- it's a little bit like an acrobat on a high bar that has to spin around and do tricks, even though they can't produce much torque with their wrist and they have to do it all with their waist. The other one, the cart-pole system, you've probably seen it in an intro controls course. In 6.003, our Signals and Systems course, at the end of the year, they bring in a cart-pole and do a little demonstration. It's a cart with-- I made it a simple pendulum, since we thought a lot about the simple pendulums, on the cart. You're allowed to push the cart sideways. But there's only a pin joint here holding the pendulum up. So you have to balance the pendulum with the cart by moving the cart. Now, in our intro controls courses, they do linear control. At the top, they do some simple modeling, some pole placement in 6.003. So they start it near the top. It stays near the top. They actually-- they put a wine glass or something on the top. They put a Christmas tree on the top. They do all these things, but they never start it from here, because then the nonlinearity kicks in. And that's why we need this course. We've got to-- that's a harder problem. So we're going to do the full cart-pole starting today. And we'll finish it on Thursday. So the equations of motion of both of those are quite easy to derive. They take half a page, and they're in the notes. Both of them are nicely described by the manipulator equations that I introduced in the first lecture. So let's work in the manipulator equation form. So if you remember, I said that most of the systems we care about can be described by these equations. It happens that both of these systems are sort of trivially underactuated. They've got one actuator and one passive joint, so B turns out to be-- well, I've actually got it to be 0 and 1 for the Acrobot and 1, 0 for the cart-pole. But they're both sort of the standard form. So the first thing I think that everybody does with these systems, and that we have to discuss, is, let's see if we can make it balance at the top. The task in both of these cases is to take the system from some arbitrary initial condition and get it up to the top, and balance. It turns out that, even though the systems are underactuated, you can do that. Just to show you the-- to help your intuition here, this is some Acrobot video from the web.
They have a belt drive going down to their elbow motor and a big motor up here, like a very big motor up here. [VIDEO PLAYBACK] And if you're willing to sort of pump energy up through the second link, then you can-- - Oh! RUSS TEDRAKE: They were very excited. [LAUGHTER] Then you can add energy to the system, get it to swing around. And then the cool thing is, you can stabilize it at the top, which is actually pretty surprising. So this one took however many pumps, got itself to the top, balanced, and they said, oh! So in this class, we're going to do better. So this is just a teaser. But here, if you were to take your optimal control tools and try to solve the same problem, the exact same system. Now it's in MATLAB. But we're going to do a lot better by thinking about sort of minimum time or even LQR solutions to get to the top. I think that's pretty elegant. So single pump, and then it's up. And of course, it will depend on your torque limits and all these things, but I think we can do very elegant control on these kinds of problems. OK, so I want to start by thinking about balancing at the top. So it's not obvious, if I've got a system with a motor at the elbow and no motor at the shoulder, that I could balance at the top. You'd think that, if I fall with my elbow motor, I've got nothing immediately to correct it. So how the heck do I stabilize that fixed point at the top? Well, it turns out you can. And let's look at that. You've probably seen linearization before. It turns out, if you're linearizing things from the manipulator equations, it's pretty elegant. So I'm going to do that quickly here. So our system that we're working with here is just x dot equals f of x, u, in state-space form. But it happens, because it's from the manipulator equations, that if we choose x to be q, q dot, like we always do, then f of x, u has this sort of block matrix, a block vector form here, which is-- q double dot is now H inverse times Bu minus Cq dot minus G. I just solve that for q double dot. We know that H is-- an inertial matrix, it's always uniformly positive definite. So for all q's, it's positive definite. So I can take its inverse, and I get that solution for q double dot. So the derivative of x is-- this is q dot, sorry. It looks like a line, but it's a dot. And now we want to think about linearizing that system around a fixed point. The way we're going to do that, of course, is taking a Taylor expansion. I'm going to take my x dot and say it's approximately equal to f of x star, u star, plus partial f, partial x, with x evaluated at the fixed point, u evaluated at the fixed point, times x minus x star, plus partial f, partial u, evaluated at the fixed point, times u minus u star. And it turns out for these equations, at a fixed point, that's really not a hard thing to compute. So can we do that quickly? So what's this term, first of all? AUDIENCE: 0. RUSS TEDRAKE: It's going to be 0, right? If we're at a fixed point, then the derivative at that fixed point better be 0. So this guy just disappears. Partial f, partial x we'll look at real quick. Partial f, partial u turns out to be not too hard. Let's do partial f, partial u first. It's even easier. So this is a vector, right? So I'm going to end up with, in general, a matrix here, which contains the terms partial q dot, roughly, partial u, and partial q double dot, partial u. So what's partial q dot, partial u? AUDIENCE: 0. RUSS TEDRAKE: 0, right?
Partial q double dot, partial u, well, that turns out to be-- even though there's a lot of matrices flying around here with possibly nonlinear things inside, everything's linear in u. So that's actually pretty easy to write too. That turns out to be H inverse B. And this whole thing is going to be evaluated at our-- u doesn't matter in this case-- but evaluated at our fixed point. OK, what about partial f, partial x? So now x is a bigger thing. So I'm going to have partial q dot, partial q here. This is sort of a block matrix form I'm using here-- partial q dot, partial q dot here, and then partial q double dot, partial q, partial q double dot, partial q dot. What's this? AUDIENCE: 0. RUSS TEDRAKE: 0. What's this one? AUDIENCE: 1. RUSS TEDRAKE: 1, or more generally, I, yeah. Partial q double dot, partial q-- we need to use our chain rule. This is something that depends on q times something else that depends on q. So it's going to be partial H inverse, partial q, times Bu minus Cq dot minus G, plus H inverse times the partial of that whole inside. And H is potentially a little messy. H inverse is probably more messy. But it turns out this is going to be very simple, again, for us. So I claim that this term at the fixed point has also got to be 0. Do you buy that? Yep. This thing has got to equal H times q double dot. H is positive definite. So for q double dot to be 0, this had better be 0. So that whole thing goes to 0. So this term-- or we don't have to do partial H inverse, partial q-- great. And this one, again, is actually pretty simple. In the very first lecture, I did use B as potentially a function of q. But for these examples, it certainly isn't. It's just a constant. So this doesn't have any dependence on q. C does depend on q. G does depend on q. But at the fixed point, q dot had better be 0, if it's a fixed point. So the only term that actually survives out of this whole potentially scary thing is-- yeah, good-- is negative H inverse, partial G, partial q. Take the derivative with respect to q dot. Again, this term is 0-- partial H inverse, partial q dot is 0, so it doesn't matter. And then what in here depends on q dot? Well, C depends on q dot, both directly and internally. So we end up with partial-- so H inverse, partial C, partial q dot, times q dot, plus-- I'll do the whole thing as minus here-- plus C. But again, q dot is 0 at the fixed point, so it ends up-- this whole scary thing reduces to H inverse C. AUDIENCE: So G is [INAUDIBLE] of [? theta? ?] RUSS TEDRAKE: That's correct, yep. Those are our gravitational terms, yeah. OK, so this whole potentially scary thing works out to be 0, I, negative H inverse, partial G, partial q, negative H inverse, C. OK? There's a lot of beauty in the manipulator equations. It's a very nice middle ground between sort of any arbitrary nonlinear system, but it's got enough structure that you can play a lot of tricks like this. And often, things simplify. So I actually think it's a beautiful representation that we're lucky to have in robotics. OK, now I've got this form. That's a linear system here, right? I've got-- if I just call this thing A and this thing B, then I've got x dot equals Ax plus Bu here. And if you prefer to really make it linear, then let's define x bar to be x minus x star, u bar to be u minus u star, to put the origin at the fixed point. Oops. Now, x bar dot is just x dot minus-- this thing's-- that's 0, so it's just x dot. So I could equally write this as Ax bar plus Bu bar.
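To make the block form concrete, a MATLAB sketch: the cart-pole packed into manipulator form, using the unit-parameter equations that appear later in this lecture, and the linearization just derived, evaluated at the upright. The state ordering [q; q dot] and the theta = 0 hanging-down convention are assumptions, and in practice each function would live in its own file; for the Acrobot you would swap in its H, C, G and B = [0; 1].

```matlab
% Cart-pole in manipulator form H(q)*qdd + C(q,qd)*qd + G(q) = B*u, with
% mc = mp = l = g = 1 and q = [x; theta], theta = 0 hanging down:
%   2*xdd + thdd*c - thd^2*s = f,   xdd*c + thdd + s = 0
function [H, C, G, B] = cartpole_manip(q, qd)
    c = cos(q(2));  s = sin(q(2));
    H = [2, c; c, 1];            % inertia matrix, positive definite
    C = [0, -qd(2)*s; 0, 0];     % velocity terms: C*qd = [-thd^2*s; 0]
    G = [0; s];                  % gravity vector
    B = [1; 0];                  % the force acts on the cart only
end

% Linearization about the upright fixed point, exactly the block form on
% the board: A = [0, I; -H\dGdq, -H\C], Blin = [0; H\B].
function [A, Blin] = linearize_upright()
    qstar = [0; pi];                       % upright, zero velocity
    [H, C, ~, B] = cartpole_manip(qstar, [0; 0]);
    dGdq = [0, 0; 0, cos(qstar(2))];       % gradient of G at the fixed point
    A    = [zeros(2), eye(2); -(H\dGdq), -(H\C)];
    Blin = [zeros(2, 1); H\B];
end
```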
OK, so given the manipulator equations for the Acrobot, for the cart-pole, it's trivial to find a local approximation of those dynamics which is valid around the fixed point we're trying to stabilize. And if I have that, then I could play some of the linear control games that we started with. So the first thing to try, let's do LQR at the top. It turns out LQR at the top just works really well. It's not too surprising, I guess. Actually, before we do LQR, let me take a minute and actually decide if we should-- if it's proper to do LQR. So there's a condition in LQR that had to be met in that derivation that I threw at you. If the infinite horizon cost, that integral, wasn't bounded, if your system didn't get to the origin, then the LQR cost would be infinite, and the LQR derivation would break. If your system-- if you cannot, with feedback, drive your system so that x is at the fixed point, then the LQR cost function will accumulate cost forever and blow up. So really, the first thing, before we apply LQR, we better take a second to decide if we think the system can get to the origin with feedback. And that condition is called controllability. And controllability is a very powerful concept. And I want to make sure you understand the relationship between controllability and underactuation. So the question is-- for this system, the question is, if I have my linear system and I started in some initial conditions that are non-0-- otherwise it's not so interesting-- if my initial conditions are non-0, can I design a feedback law, find some actions, u, that will drive my system to 0 in a finite time? So more generally, the definition of controllability says, can I take x at 0 equals some initial condition to x at some final time equals some other desired state, given unbounded actions, in a finite time? So controllability is actually the thing we care about in life. For nonlinear systems, controllability is actually incredibly hard to evaluate. Most systems tend to be controllable. But it's a hard thing to evaluate for nonlinear systems. For linear systems, we have all the tools we could dream of to evaluate the controllability of the system. So for linear controllability, it's sufficient to say, x at t final equals 0. So a lot of times for linear systems, people just say it's controllable if I can drive my system from any initial conditions to 0 in a finite time. And because everything's nice and linear, that's actually-- that's equivalent to the stronger definition for linear systems. In nonlinear systems, you have to evaluate every initial condition, every final condition, if you're not careful. So what do you think? If the system is underactuated, then do you think it can be controllable? If I choose some place, some state for the Acrobot, and I choose a finite time, can you design a controller, potentially with really big actions, that gets me there in finite time? AUDIENCE: No. RUSS TEDRAKE: Tell me why. Tell me why you say no.
AUDIENCE: [INAUDIBLE] have to stay in the final state? RUSS TEDRAKE: No, it does not have to stay in the final state. AUDIENCE: So underactuated implies that the local dimensionality of the reachable state-space is lower than the full dimension, right? So if you just shrink your finite time to an arbitrarily small time, we won't be able to reach states outside that [INAUDIBLE]. RUSS TEDRAKE: I think that-- so this is exactly the point. So what he said is, he says, the underactuation is-- I'll use the word instantaneous. It's an instantaneous constraint on what you can do. At any instant in time, I can only produce accelerations in a certain direction. So how can I possibly, in some-- if I make my finite time small enough, how can I possibly get there? Well, finite time is actually different than 0 time. And the actions can potentially be huge. But actually, most-- a lot of the underactuated systems are controllable. So we'll see it carefully in the linearization of this. But actually, controllability-- if you get one thing out of this lecture-- and I'm going to write it at the end of the lecture again on the board-- there's a difference between controllability and underactuation. A lot of the underactuated systems are, in fact, controllable. And that's what makes-- gives us a few more things to talk about in the class. So let's see if I can make that point to you. So how do we talk about controllability, in a linear system even? So what are the tools people use for controllability in a linear system? If you used them, call them out. Who's used controllability tools? Yeah. AUDIENCE: There's the matrix C times B, C times A, times B, write it all out, then find the rank. Would that be true? RUSS TEDRAKE: Good, there's a controllability matrix. AUDIENCE: There's a controllability Gramian, I think. RUSS TEDRAKE: Awesome, there's a controllability Gramian, which turns out to be almost exactly the-- what came out of our LQR derivation. Those are the two big ones. So there are controllability matrices, controllability Gramians. I'm going to say a bit about them in a minute. But if you care about-- we can actually-- both of those are a little bit unintuitive, actually. And the proofs are-- the proof of the Gramian's not bad, but the proof of the-- the derivation of the controllability matrix is a little bit of black magic. So I decided instead, let's do a simpler case where we can actually understand it. But it'll be a less general result. So let's look at the x dot equals Ax plus Bu system. I'm going to make our derivation easier by making an assumption that the eigenvalues of A are all unique. If you're willing to make that assumption, we can see a lot of things. The more general derivations don't have that, but require black magic. OK, so if you remember, our eigenvalue thing: an eigenvalue means that multiplying A times that vector is just the same as multiplying a scalar times the vector. If I compose all the-- sorry, these are eigenvectors. That's an eigenvalue. If I compose them all into a matrix form, I can see A times V, where V has all the eigenvectors as columns, is equivalent to-- well, let's write it like V times lambda, where lambda is a diagonal matrix, which has lambda 1, lambda 2, and so on. The cool thing about-- the reason we assume that there's no repeated eigenvalues is that it implies that all of the-- eigenvalues being unique implies that all the eigenvectors are actually unique and that they span the space. And it implies that V inverse exists.
As soon as you have repeated eigenvalues, you don't have that simplification. You have to do repeated roots and things. But with that simplification, we can switch to modal coordinates. So let's-- if V is full rank, then I can just change coordinates from x through V inverse to some other coordinate system r. Why is that a good idea? That's a good idea, because then r dot is just going to be-- so V inverse times x dot, which is V inverse times A times Vr, plus V inverse times Bu. That's just substituting this into the x dot equals Ax plus Bu. But what's V inverse AV? If you look at this, that's just our diagonal matrix. So on the eigen-- in the modal decomposition, the systems evolve without coupling. I can write-- component-wise, I could say, ri dot, the i-th component, is just lambda i times ri, plus some contributions from this guy. So now I'm looking at my-- in these modal coordinates, I'm saying that the result of applying A-- this is the same thing I wrote before. This is the reason we can make-- in these phase plots, we can find the eigenvectors and just talk about the dynamics on those eigenvectors, same thing. It just says that, on the eigenvectors, the dynamics are just the eigenvalues. And then they've got this impulse, this input from the control actions. So now let's think about what it would mean to be controllable in this sense. What kind of conditions would you want to say that the system is controllable? Yeah. AUDIENCE: We can change r dot using u. RUSS TEDRAKE: You can change r dot to do anything you want using u. You'd like to be able to say, make it act like the eigenvalues were arbitrarily fast, for instance, or arbitrarily unstable, if you chose to do something so silly. So what is required to do that? AUDIENCE: [INAUDIBLE] from the [? beta ?] values. There'd have to be some non-0's [INAUDIBLE]. RUSS TEDRAKE: Excellent. OK, so imagine you only have a single thing you're trying to control and lots of inputs. Then it should be sufficient that, if any one of those betas is non-0, that should be enough. OK, now let's say you have multiple things you're trying to control. You start having to worry about whether you can use u to control eigenvector 1 and eigenvector 2, both at the same time. But it turns out, because we assumed we're in the case of distinct eigenvalues, if you think really hard-- and I will write a little bit more about this-- but it turns out it's still OK to just have one thing that can control you. Because things are converging at different rates, it's actually sufficient to be able to control-- if you can control each of them independently, then you can actually control them all. So the condition turns out to be, for all i, there exists a j, such that beta ij is not equal to 0. Beta ij, again, is my B matrix, but premultiplied by this V inverse. So if I was looking at whether the system was underactuated, if B wasn't full rank, I'd be hosed. I'd be underactuated. But this is actually a much less strict condition. I say, I only have to have one of my-- in my eigenmodes, I have to be able to control each of them. So that's our first sort of glimpse at how you could imagine an underactuated system being completely controllable. AUDIENCE: Is it the same as saying that, if you have [? any ?] dimension that you want to control, then beta should have at least n different eigenvalues? [INAUDIBLE]
As long as beta ij is non-0 for all of those, then it says, with my one actuator, I could control-- I can drive that 100 degree of freedom system to the origin. That's a great question, a great example. So it does not have that same rank condition on B. AUDIENCE: So as long as none of your eigenvectors are in the null space [INAUDIBLE]. RUSS TEDRAKE: Right. It's the-- I have to think about what V inverse is, but I think that's right. I think that's right. He said-- so you don't want B to be in the null space of V inverse. Right. Do I see another question over there? No? OK, so 100 degree of freedom robot, one actuator, there's a chance that you can control it. Now it doesn't say anything about the trajectory you're going to take to get there. It might be really hard. But there's a chance you can control it. There's lots of ways to see this. Let's leave it at that, if people are satisfied with that. Let's not-- I won't do my second derivation here. But let's get straight to the example. So now, that says that, if I take my Ax plus Bu representation of the Acrobot, B is going to be low rank. It's going to be-- so I used B actually twice already today. Did you notice that? Anybody catch me on that? I used B in the manipulator equations, then I used B for the block form. And I swear I've lost sleep trying to figure out if I could put another letter in there, but I'm not happy with any other letters, because they both-- they almost mean the same thing. But I did slip that one past. So this is B in the linear form. So it's not just 0, 1. It's H inverse times 0, 1. So it's going to have some 0's. Sorry, it doesn't-- so it's going to be a vector again. It actually doesn't have to have 0's. I misspoke. This is going to be H inverse 1, 2 and H inverse 2, 2 in the Acrobot case-- the second column of H inverse. The question is-- it's still low rank because it's a column. It's not a full matrix, so it's still underactuated. And can I drive that thing with LQR to the top? We just said that it's possible that these things are still controllable. The tests that you can use, the user's guide to controllability-- there's this funky test which says, if I take a matrix, a controllability matrix, which is the B matrix, AB-- these are just cascaded in columns here, A squared B up to the A to the n, B, where n is the number of states. AUDIENCE: Would it be n minus 1? [INAUDIBLE] RUSS TEDRAKE: n minus 1-- thank you, yeah. Good, yeah. It turns out that if this matrix is still-- is full row rank, then the system is controllable. There's actually-- the derivation of that is in your notes that I'll post. It's not very hard, it's just a matter-- there's all these forms for e to the AT, the matrix exponential. And it involves one of the forms that kind of comes out of nowhere. And so I won't go through the derivation on the board. But it turns out that it's sufficient to check the rank of this matrix. And that'll tell you if the system's controllable. And this is sort of primal enough and important enough that MATLAB has a function for it. So you can call-- I think it's ctrb, right? Is that what it is? And then check the rank of it, and you could check if your systems are controllable. It turns out, if you linearize the dynamics of the Acrobot, you linearize the dynamics of the cart-pole, you pop in the A and B, you get a full rank matrix out. So they're both controllable, even though they're underactuated around the top. AUDIENCE: Would these be in [INAUDIBLE]? RUSS TEDRAKE: Yes. It's the linear form.
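Both tests, as MATLAB sketches against the A and Blin from the linearization sketch above. The ctrb function is from the Control System Toolbox; the numerical tolerance in the modal test is an arbitrary choice:

```matlab
% Modal test from the board: with distinct eigenvalues, every mode must
% see the input through beta = V\B (one row of beta per mode).
[V, D] = eig(A);
beta = V \ Blin;                          % input as seen in modal coordinates
modal_ok = all(abs(beta) > 1e-9);         % single input: each entry nonzero

% Standard rank test: ctrb stacks [B, A*B, A^2*B, ..., A^(n-1)*B].
Co = ctrb(A, Blin);
rank_ok = (rank(Co) == size(A, 1));       % full row rank <=> controllable
```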
Maybe I should call it B prime or something like that. But still, it feels unnatural to use either-- anything else in either case. AUDIENCE: So these matrices are calculated for every single point, or-- RUSS TEDRAKE: So I just do it-- I evaluate these things once at the x star. So it's partial f, partial x, evaluated at x star. AUDIENCE: So when you say something-- a system is controllable, because when we-- RUSS TEDRAKE: All I mean is that-- all I can show with the rank condition check in MATLAB is that the system linearized-- the linear system that approximates the Acrobot linearized around the top is controllable. Mm-hmm. AUDIENCE: So controllability is defined [INAUDIBLE]? RUSS TEDRAKE: No, controllability is a property of the system. The linearization, the linear system is only a relevant approximation of the Acrobot around a given state. And I'm saying that the linear system-- you can evaluate the linear system anywhere it's controllable. I can go from any initial condition. But it's only a relevant example-- it's only relevant to the Acrobot if it's close to the fixed point. AUDIENCE: So that definition actually is for all the states. RUSS TEDRAKE: That's right. So yes, if I can say this stronger thing for the nonlinear system, for the Acrobot, in order to say that, I would have to say, for any state, this is true. I can't say that. I went to a weaker form by looking just at the linear system. Mm-hmm. That's pretty big. Is that big enough? AUDIENCE: Mm-hmm. RUSS TEDRAKE: Yeah? OK. So let's take our A and B matrices that we got from doing the linearization around the fixed point. And if you like, if it helps, I can sort of show you how that goes. So very literally, I take my manipulator equations, I get an H, C, G, and B. I take the gradient-- the partial G, partial q, I compute that. And then my A is negative H inverse partial G, partial q. And then the-- C happens to be 0 for the Acrobot. I evaluated at the top, but that's not general, so that's why I don't have the negative H inverse C there, and then my H inverse B. So that gives me an A and a B. And I can write my LQR controller to balance it at the top, by literally getting A and B from the linearized system, calling LQR. I chose-- the LQR syntax is A, B, Q, R. So I chose Q to be a diagonal matrix, 10, 10, 1, 1, just saying I penalize position errors 10 times more than velocity errors because it happens-- with units, you tend to do that. And R is 1. And it's behind a persistent variable just so I don't compute-- I don't call LQR every time I call my function. I only call it every time I start the system. And then my control law is u equals negative K times x minus x desired. Now, you're going to do sort of the similar thing to what I'm doing for the Acrobot, but for the cart-pole, for your problem set. I will bet that half of you will forget to subtract out x desired at least once. I know I do all the time. So remember the minus x desired. If I put that at the top, if I put initial conditions near the top, and I run it, look what happens. Oops. OK, I'm going to start it-- every time it flashes it's going to be a new initial condition. It actually goes pretty far from the top and gets back. Wow, that's a good one. If I-- I'm just choosing Gaussian random variables at the top. So if we watch long enough, we might see one fail catastrophically. So the LQR controller will stabilize the linear system from any point. But the linear system is a bad approximation far from the fixed point.
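The balancing controller just described, sketched in MATLAB. The weights match the ones quoted; the function name, the state ordering, and the cart-pole upright xdes are assumptions carried over from the earlier sketches:

```matlab
% LQR balancing controller: K computed once and cached in a persistent
% variable, then u = -K*(x - xdes) on every call.
function u = balance_controller(x)
    persistent K
    if isempty(K)
        [A, Blin] = linearize_upright();   % from the linearization sketch
        Q = diag([10, 10, 1, 1]);          % position errors penalized 10x
        R = 1;
        K = lqr(A, Blin, Q, R);
    end
    xdes = [0; pi; 0; 0];                  % upright fixed point
    u = -K * (x - xdes);                   % don't forget to subtract xdes!
end
```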
For the nonlinear system, if I started it way down at the bottom, it's just going to go nuts. Wow, that's pretty good. Now, you might notice that my second link is pretty big compared to my first link. That helps. And you'll see why in a few minutes. But it's pretty good. AUDIENCE: Does it have unbounded torque? RUSS TEDRAKE: It does have unbounded torque. Whoa. [LAUGHTER] That's pretty good. It does have unbounded torque, yep. If I were to saturate the torque, it probably wouldn't do as well. Now, the cool thing is-- I can stop that. Stop. These MATLAB timers don't like to stop. So I get these huge excursions from the upright in my balancing. But it's not actually because I started with crazy initial conditions. If you noticed, it wasn't like the plot was going, oh, and starting over here and then coming up. It was actually going like this. So if you look at a time trajectory-- let's see if I got lucky. OK, almost all of them are like this. Are those lines dark enough to see? Yeah? The initial conditions are actually pretty small. But in order to stabilize the system, it actually goes way away from the fixed point and comes way back, big time. This is like-- this is a velocity of 18 radians per second or something. And then it finds its way back. So for that reason, you might easily sort of, in LQR, say, OK, my linearization is good here, so any initial conditions here should work. But that's not actually true, because your LQR might easily drive you further away before it comes back. If you were to-- if you're a linear controls guy, and you were to do-- this is a multi-input, multi-output system. But if you try to find the poles and 0's, where are the 0's going to be? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah, there's three 0's in the right half-plane for the Acrobot actually. It's a nonminimum phase system. The cart-pole has one 0 in the right half-plane. It's not a general property of underactuated systems. But sometimes, in order to do this with fewer actuators than you might like, you have to do crazy things to get back to where you want to go. So I can get-- and I could-- if I tightened my time limit, the things I would do would be even crazier. But I could still do it. It's actually-- just to say it, without really teaching it, but if you did care about sort of the basin of attraction of these systems, if you wanted to do as well as you possibly could with a linear controller on the nonlinear system, you probably wouldn't do what we just did. There's better tools from robust control, which would allow you to sort of design a linear controller but explicitly reason about how nonlinear the thing gets when you're away. And you can design a linear controller that has a bigger basin of attraction than if you just don't reason about the nonlinearities at all. So you can put a bound on how nonlinear things are and do a robust control synthesis, and get, in some sense, a better controller. You would have lower performance, potentially, but it would work better on the nonlinear system, bigger basin of attraction. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Oh, my fixed point was pi, 0, 0, 0. Or-- yeah. So those are all going to 0, that one's going to pi, as they should. Yep. OK, excellent. So LQR works, right? And it works-- this is actually sort of a problem, in my mind. Because it works so well on so many different systems, that's why people haven't thought about nonlinear control enough, in my mind.
It's sort of unfortunate that that works so darn well, because sometimes that's all people do. So let's think about if we're a little bit further from the fixed point, just to-- maybe I should even make the point-- in other words, you'll probably ask me if I don't do this, because you always test me on the limits of where my things work here. So what if I were to start it far away from the fixed point? Let's be more dramatic. And let's just do it once. Oh, see, that's bad. When it takes that long, that means the integrator's choking because it can't simulate things right. So it's just complete nonsense if it's too far from the linearization point. So what if we want to do control away from that fixed point? Then nothing I just said helps if I'm too far away from that fixed point. So what do we do? Well, you don't have to throw out linearization completely. We talked about, in the first lecture, how the underactuated systems are the systems that are not feedback linearizable. That's what distinguishes them. They're not feedback linearizable. I can't just turn the nonlinear system into a linear system. But they are partial feedback linearizable. So if you want to stick to your guns and do feedback linearization, you can do half the work. So it actually is pretty elegant how that works out. OK, to keep things fresh, let's do it on the cart-pole instead of the Acrobot. We've been talking about the Acrobot a lot. But I promised I wasn't going to do the derivation of the equations of motion, and I won't. But it turns out the result for the cart-pole is simple enough I can write it real quick. The equations of motion for the cart-pole start with mc plus mp times x double dot. And these are in your notes. You don't have to write these down. I want you to see where the next line comes from. OK, well, that's a reasonable thing we might get out of the Lagrange equations. I've got a force on my cart. I've got a 0 in the other equation. You can see how this could be easily separated into the manipulator equations. But since I'm going to be manipulating some of these things, let me just sort of arbitrarily set all the parameters to 1. Let's just-- so-- it's easy to repeat these for the real equations. But let's just do this. And I'll get a 2x double dot plus theta double dot, c, which is cosine theta-- c is enough-- minus theta dot squared s equals f. OK, and then x double dot c, plus theta double dot, plus s equals 0. Now, those I can work with. AUDIENCE: It says G [INAUDIBLE]. RUSS TEDRAKE: Yeah, that's just to save my handwriting. It's not to be physically accurate. It doesn't change the structure of the equations. If you like, you could set-- I bet I could come up with a parameterization which would keep G at around 10 and do OK, but-- AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: There you go, there you go. OK, so given these equations, can I make x double dot and theta double dot do whatever I want? So no. We said that the feedback linearization trick required a B inverse, in general. And so we couldn't do that. But it turns out I can do something. So what are these equations? This is a cart moving around with a pendulum on it. And I'm pushing the cart, and the pendulum's dangling away. So what would feedback linearization-- what would a partial feedback linearization mean?
Well, if I know the dynamics of the pendulum and I know the state of the pendulum, and I can control the force on my cart, then it's pretty reasonable to think that I could-- whatever force the pendulum was applying to push my cart around, I could just exactly cancel that out. So I could turn my cart dynamics to sort of just do whatever I want and just cancel out the effect that that pendulum's adding to me. That seems reasonable. And that's called a collocated partial feedback linearization. I'm going to write PFL from now on. It's collocated because the state I'm trying to linearize is the same one where the actuator is sitting. It's collocated. The state I'm linearizing is collocated with my actuators. So my goal is to make x double dot have the dynamics, whatever dynamics I choose-- x double dot desired, let's say. So let's see if we can do this by manipulating the equations a little bit. OK, so I can figure out how x double dot and theta double dot are related with that second equation. So let's get rid of theta double dot. I can see theta double dot had better be negative x double dot c minus s. And if I insert that into the first equation, then I get 2x double dot minus x double dot c squared minus sc minus theta dot squared s equals f. That means, if I apply the control law, 2 minus c squared, times x double dot desired, minus sc minus theta dot squared s, that implies that x double dot equals x double dot desired, like we wanted. And theta double dot ends up doing something coupled. But that's sort of the resulting dynamics. I didn't actually plan for that. That's just whatever I got out. Does that make sense? That's just me saying-- if you think about the controller here, that's just taking the terms that are going to be contributed to the dynamics by the pendulum and canceling them out by applying exactly the opposite forces. And the result is that my x moves exactly however I want it to. So feedback linearization isn't dead if you have an underactuated system. But you can't feedback linearize the whole system. You can do this collocated feedback linearization. Now, the cooler thing is, you can actually, often, also feedback linearize the passive joint. So it's pretty logical that I could move the cart in such a way that I cancel out the pendulum dynamics. It's a little less intuitive that I can make the pendulum dynamics do whatever I want with the cart. But you can, most of the time. OK, so non-collocated means I'm going to use one of my actuators to control-- feedback linearize one of my passive joints. So now let's see if we can make theta double dot do whatever we want-- bend it to our will. It turns out the manipulation is almost exactly the same. Algebraically, it's not surprising that I can do either one. I've got my equations of motion here. I've got-- both x double dot and theta double dot depend on my force. And they're coupled here. So sure, I can control either one of them. Physically, it's a little less intuitive. But algebraically, it's just as obvious. OK, so let's just do the opposite one. So x double dot had better be theta double dot plus s, over c, that whole thing negative, based on that second equation. And so I get 2 over c, theta double dot plus s, negative on there, plus theta double dot c, minus theta dot squared s, equals f. Yeah, so sure, so if I apply f equals-- as the controller-- what's the best way to write this? c minus 2 over c, theta double dot desired, minus theta dot squared s. Is that right?
AUDIENCE: Minus 2 s over c, from the first [INAUDIBLE]. RUSS TEDRAKE: Good, yes. Minus 2 tan theta. Good, thank you. Cool. So that's a much less intuitive result, I think. But it's a much more powerful one. It says, if I wanted to directly control the pendulum, I can. I have to give up something. x double dot, then, is going to end up being-- the resulting motion of the cart could be a little strange. It's going to be whatever this theta double dot desired plus s, over c, looks like. But who cares? If I'm just trying to keep the cart up, that's OK. In your 6.003-type demos, they also worry about not running into the rails, which is important. But to first order, this is a good thing. What did I gloss over? AUDIENCE: Theta goes to pi over 2. RUSS TEDRAKE: Yeah, right? So I put a cosine on the bottom here without being careful about that. So what is that? What is that physically related to? AUDIENCE: If your pendulum goes flat, then [INAUDIBLE]. RUSS TEDRAKE: Exactly. When my pendulum is directly sideways, then suddenly, nothing I do with the cart is going to control the accelerations of that pendulum. So instantaneously, you lose the ability to control that. If you're going to swing up from the bottom to the top, then you go through that. So who cares? So stop doing control for a second, and then you'll get back to a place where you can control it again. So that sort of says everything, I think, that your intuition should relate about these things. And then for the Acrobot, it's sort of similarly surprising, but I can use my elbow torque to feedback linearize my shoulder joint-- same thing. So if I wanted to make it do whatever I want, really, then I can do that. I might have to spin like crazy or do something nuts. But I can make my passive joint do whatever I want. It's kind of cool. In fact, just to-- I want to show the slightly more general derivation of that, or form of that, but just to make the point. So one of the ways we've been playing with Little Dog-- this is our robotic dog. So Little Dog has actuators at all the internal joints. It's got-- if you're just looking at it from the side, all you care about, it's got one in the knee, it's got, actually, two in the hip-- but that doesn't matter here-- two in the other hip, one in the knee. But the thing we're about to make it do is try to control the dynamics around the foot, which is just like the Acrobot. It doesn't-- the place where you might think you'd want it the most is where you don't have the actuator. So if you want to do something like this with your dog, then you've got to reason about the coupling, the inertial coupling, which is what we did here, that allows you to decide-- the slipping at the end is ugly and we didn't do that right. But until the impact, we did a pretty good job, actually, of regulating the position of the dog, just using the controlled actuators when our most essential variable was passive. It took a slightly more general form, which I'm going to show you on Thursday, to do that. But partial feedback linearization is sort of alive and well and useful in robotics. Let me just-- I'll show you half of the-- or one of the slightly more general derivations of it, just because it's so easy with this algebra manipulation. So let's say I have manipulator equations of the form-- I'm just going to lump C, q dot, and G into a single term because they don't affect the derivation at all.
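Before the general form, both cart-pole PFL controllers just derived, written out as MATLAB sketches for the unit-parameter equations. The state ordering x = [cart position; theta; cart velocity; thetadot] and the function names are assumptions; the outer-loop choice of the desired acceleration (a PD law, say) is up to you.

```matlab
% Collocated PFL: cancel the pendulum's effect on the cart, so that
% xdd tracks xdd_des exactly (from 2*xdd + thdd*c - thd^2*s = f,
% with thdd = -xdd*c - s substituted in).
function f = collocated_pfl(x, xdd_des)
    c = cos(x(2));  s = sin(x(2));  thd = x(4);
    f = (2 - c^2)*xdd_des - s*c - thd^2*s;
end

% Non-collocated PFL: use the cart force to make thetadd track thdd_des.
% Note the 1/c terms: this blows up at theta = pi/2, the loss of inertial
% coupling discussed above.
function f = noncollocated_pfl(x, thdd_des)
    c = cos(x(2));  s = sin(x(2));  thd = x(4);
    f = (c - 2/c)*thdd_des - thd^2*s - 2*tan(x(2));
end
```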
And let's say that I've stacked-- I've reordered my equations of motion so that all of my passive joints are on top and all my actuated joints are on the bottom. So these are all matrices. Let me call q1, the collection-- it's a vector of all the passive joints, and q2 all the active joints. Then if I break this up just a little bit, I can write the same equation as-- as that. I left these up the whole lecture thinking I was going to point to them all the time-- never did. I'm going to erase them now. OK, so what is my collocated form? Collocated means I'm going to try to control q2. So let's solve for-- q1 double dot is going to be H1,1 inverse times H1,2 q2 double dot plus phi 1. Am I allowed to do that, H1,1 inverse? It takes a little bit of thinking, but it's actually OK. It's a positive definite matrix H. So it turns out the square diagonal blocks actually also have to be positive definite. So maybe take my word on that. And then I can plug that in and see that, if I do tau-- let's see if I can just do it in one step here. Tau is going to have to be H2,2, plus-- no, I missed a minus, didn't I? q1 double dot is-- I missed a minus in here. Negative H1,1 inverse H1,2-- I did that. I should have done it in two steps. It's H2,2 minus H2,1 H1,1 inverse H1,2, times q2 double dot desired, plus phi 2. Right? In the non-collocated, we're going to have to solve for q2 double dot. That's going to be negative H1,2 inverse, times H1,1 q1 double dot plus phi 1. I missed my phi 1 term in here somewhere-- that should have been in the tau also. Yeah. OK, so the first step in the non-collocated, in general, is this H1,2 inverse. Is that one OK? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Not necessarily. I could have had a different number of actuated and passive degrees of freedom. So let me write something that's a little bit better, the sort of pseudo inverse. And what matters-- the non-collocated partial feedback linearization is going to work if and only if that matrix is full row rank again. So what does that mean? It means I can't use one active degree of freedom to control two or more passive degrees of freedom. I need to have at least as many actuators as degrees of freedom I'm going to try to control. That's reasonable. But it's a little bit more than that still. I actually have to have them inertially coupled in the right way. So if I had two pendula on the table-- well, the table might have some dynamics. If I had two pendula bolted to completely independent bases, and I had an actuator on one and not an actuator on the other, that ain't gonna work. I can't make any math that's going to make that work. So there has to be some inertial coupling between the two. So the rank condition on this is the condition of-- is sometimes called the condition of strong inertial coupling. The strong means that it's uniformly inertially coupled, not just inertially coupled in some states. And if for all q this thing is full rank, then it's strong inertial coupling. And there's even a more general form. So in general, you can-- and what we do in Little Dog is, we pick some combination of actuated and unactuated degrees of freedom. And actually, what we care about is a virtual degree of freedom, which is the center of mass. And I'll show you at the beginning of the next lecture the most general form of this. But I can't put up these PFL equations without spending one minute at the end saying, PFL is still sort of bad, right? I don't really-- it works, and I use it because I want to control these robots.
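Written out cleanly, the board work comes to the following (a reconstruction, with the phi 1 terms restored where the lecture notes they were dropped):

```latex
% Partitioned manipulator dynamics (subscript 1 = passive, 2 = actuated),
% with \phi lumping the C\dot{q} and G terms:
H_{11}\ddot{q}_1 + H_{12}\ddot{q}_2 + \phi_1 = 0, \qquad
H_{21}\ddot{q}_1 + H_{22}\ddot{q}_2 + \phi_2 = \tau
% Solve the passive row:
\ddot{q}_1 = -H_{11}^{-1}\left(H_{12}\ddot{q}_2 + \phi_1\right)
% Collocated PFL (command the actuated joints):
\tau = \left(H_{22} - H_{21}H_{11}^{-1}H_{12}\right)\ddot{q}_2^{d}
       + \phi_2 - H_{21}H_{11}^{-1}\phi_1
% Non-collocated PFL (command the passive joints; needs H_{12} full row
% rank -- the "strong inertial coupling" condition -- so that the
% pseudo-inverse H_{12}^{+} applies):
\ddot{q}_2 = -H_{12}^{+}\left(H_{11}\ddot{q}_1^{d} + \phi_1\right), \qquad
\tau = H_{21}\ddot{q}_1^{d} + H_{22}\ddot{q}_2 + \phi_2
```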
But I don't like feedback linearization. Feedback linearization is bad. This is taking some beautiful nonlinear system that has beautiful equations and arbitrarily pumping in some potentially large amount of energy to squish those dynamics and bend them to your will. And that's good. That's the feedback way. But it's not the only way. So we do it when we have to. But it's better if you don't. Yeah. AUDIENCE: Wouldn't you have [INAUDIBLE] errors in the [INAUDIBLE]? RUSS TEDRAKE: It could be that, if you don't have the model perfect, that you could be doing bad things too. Any time you have large control gains going around, you're going to be sensitive. So don't always do this. But I do want you to know that you can do this if you so choose to take that path. OK, cool, so on Thursday we will use these partial feedback linearizations and LQR together to make the Acrobot, the cart-pole swing up and balance. If all goes well and we don't have any License Manager issues, then we'll see it in class. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_21_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: Welcome back. So today we get to finish our discussion on at least the first wave of value-based methods for trying to find optimal control policies, without a model. So we started last time talking about these model-free methods. And just to make sure we're all synced up here, the big picture is that we're trying to learn an optimal policy, an approximate optimal policy, without a model, by just learning an approximate value function. And the claim is that value functions are a good thing to learn for a couple of reasons. First of all, they should describe everything you need to know about the optimal control. Second of all, they're actually fairly compact. I'm going to say more about that in a minute. But if you think about it, a value function might actually be a simpler thing to represent than the policy, a smaller thing to represent, because it's just a scalar value over all states. And the third big motivation I tried to give last time was that these temporal difference methods which bootstrap based on previous experience-- they, like value iteration and dynamic programming, can be very efficient in terms of reusing the computation, or reusing the samples that you've gotten, by using estimates that you've already made with your value function to make better, faster estimates of your value function as you [INAUDIBLE]. These are all going to come up again. But that's just the high-level motivation for why we care about trying to learn value functions. And then the first thing we did-- it was really all we did last time. The first thing we had to achieve was just to estimate a value function for a fixed policy, which we called J pi, right? And we did it just from sample trajectories. In discrete state and action, we called them s and a. Take a bunch of trajectories, and you would be able from those trajectories to try to back out J pi. And those trajectories are generated using policy pi. So I actually tried to argue that that was useful even in the-- so if you just want-- if you have a robot out there that's already executing a policy, or a passive walker or something like this that doesn't have a policy, and you just want to see how well it's doing, estimate its stability by example, then you can actually-- this might be enough for you. You might just try to evaluate how well that policy is doing. We call that policy evaluation. What we're actually interested in is not that. That's just the first step. What we care about now is, given that we can estimate the value for a stationary policy, can we now do something smarter and more involved, and try to estimate the optimal value function. Or you might think of it as continuing to estimate the value function as we change pi towards the [INAUDIBLE] the optimal cost, the optimal policy. We talked about a couple of ways to estimate the value function for a fixed pi, right? We did Markov chains first, and then we went to function approximation. And we have convergence results for linear function approximators.
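For reference, the simplest of those estimators, tabular TD(0), as a MATLAB sketch. The experience sampler, the discount gamma (drop it for the undiscounted case), and the step size alpha are all illustrative assumptions; TD(lambda) adds eligibility traces on top of this:

```matlab
% Tabular TD(0) policy evaluation: bootstrap the estimate of J^pi from
% sampled transitions (s, g, snext) generated by running the fixed policy.
num_states = 100;  gamma = 0.99;  alpha = 0.05;   % illustrative values
J = zeros(num_states, 1);
for k = 1:10000
    [s, g, snext] = experience(k);          % hypothetical sampled transition
    td_error = g + gamma*J(snext) - J(s);   % temporal-difference error
    J(s) = J(s) + alpha * td_error;         % move the estimate toward target
end
```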
And we went back and looked up [INAUDIBLE]-- there was a question about whether, if you used a different lambda in your update, it always got to the same estimate of J. And I think the answer was, yes, it always gets-- the convergence proof has an error bound. And that error bound does depend on lambda, if you remember [INAUDIBLE] discussion. But if your learning rate gets smaller and smaller as you go, it should converge to the-- they should all converge to the same estimate of J pi. So if you think about it, learning J pi shouldn't involve any new machinery, right? If I'm just experiencing cost, and I'm experiencing states, and I'm trying to learn a function of cost-to-go given states, I should just be able to do a least squares fit. It's just a standard function approximation task. I could just do a least squares function approximation, least squares estimation, on what we call the Monte-Carlo error. Just run a bunch of trials, figure out the estimates of what the cost-to-go was at every time, and then just do least squares estimation. The machinery we developed last time was because it's actually a lot faster using bootstrapping algorithms. [INAUDIBLE] much faster than [INAUDIBLE]. Right. So we talked about the TD lambda algorithm, including for function approximation. The only reason we had to develop any new machinery is because we wanted to be able to essentially do the least squares estimation, but we wanted to reuse our current estimate as we build up the estimate. And that's why it's not just a standard function approximation task we did all the [INAUDIBLE] something [INAUDIBLE] presented it. OK. So that's the simple policy evaluation story. Now the question is, how do we use the ability to do policy evaluation to get towards a more optimal policy? So today, given the new policy evaluation, we want to improve the policy. And the idea of this-- the first idea you have to have in your head is very, very simple. And it's called policy iteration. So given that I start off with some initial guess for a policy, and I run it for a little while, I could do policy evaluation. So I'm converged on a nice estimate to get J pi 1. And now I'd like to take J pi 1, my estimate, and come up with a new pi 2. We've talked about how the value function infers a policy. And if I repeat, and I do it properly, then if all goes well, I should find myself-- if it's always increasing in performance, and we can show that, then I should find myself eventually at the optimal policy and optimal value function, right? So we said TD lambda was a candidate for sitting there and evaluating the policy, and I've talked about a couple of different ways to do policy evaluation. So the question now is, how do we do this, then? That's the first question. So given your policy, given your value function, how do you compute a new policy that's at least as good as your old policy but maybe better? AUDIENCE: Maybe stochastic gradient descent? RUSS TEDRAKE: Do something like stochastic gradient descent? You have to be careful with stochastic gradient. You have to make sure it's always going down, and things like that. It's a good idea. In fact, that's sort of-- actually, [INAUDIBLE]. We combine stochastic gradient descent and evaluation to do actor-critic [INAUDIBLE]. But there's a simpler sort of idea. I guess the thing it requires-- I didn't even think about this when I was making the notes.
But I guess it requires an observation that-- so the optimal value function and the optimal policy have the property that the policy is taking the fastest descent down the value function. Your job is to go down the value function as fast as possible. But if you're not optimal yet, I've got some random policy, and I figure out my value of executing that policy, that's actually not true yet. So what I need to say is, given your value function, you come up with a new policy which tries to be as aggressive as possible on this value function, which in our continuous sense, is going down the gradient of the value function as fast as possible. And that should be at least as good-- in the case of the optimal policy, it should be the same. It should return the optimal policy again. But in the case where the value estimates came from another, original policy, it gets you to do better. So the basic story-- that's the continuous gradient-- is you want to come up with a greedy policy that moves down, that does the best it can with this J pi. So pi 2, let's say, which is a function of s, should be, for instance-- [INAUDIBLE] the discrete sense here, discrete state and action-- the min over a of the expected value of the one-step cost plus-- So I've got the cost that I incur here plus the long-term cost-to-go here. I want to pick the new min over a. The best thing I can do given that estimate of the value function. And that's going to give me a new policy, actually, pi 2, which is greedy with respect to this estimate of the value function. What does that look like to you guys? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah. OK. So value iteration, or dynamic programming, is exactly policy iteration in the case where you do a sweep through your entire state space every time, and then you update, sweep your entire state space, you do the update. Absolutely. But it's a more general idea than just value iteration. You don't have to actually evaluate all s. You might call it asynchronous value-- [INAUDIBLE]? AUDIENCE: Shouldn't that be argmin [INAUDIBLE]? RUSS TEDRAKE: Oh, good. Thank you, yeah. This is argmin. Good catch. Yeah. AUDIENCE: This is like g [INAUDIBLE]. RUSS TEDRAKE: Right. I always minimize this. So g is [? bad. ?] Well, I don't promise that I will never make a mistake with the signs, because I try to use reinforcement [INAUDIBLE] notation with costs, and I can sometimes get myself into trouble. I never write "arg." It's always g. OK. So this would be argmin. The min is the value estimated in the case of value iteration. But in general, you don't have to wait till you sweep the entire state space. You can just take a single trajectory through, update your value J. Or you take lots of trajectories through, [INAUDIBLE] get an improved estimate for J, and then do this and get a new policy, right? In this policy iteration, the original idea is you should really do this policy evaluation step until your estimate of J pi converges, and then move on. But in fact, value iteration and other-- many algorithms show that you can-- it actually is still stable when you don't wait for it to converge. But there's a problem with what I wrote here. I don't think there's a technical problem. But why is that not quite what we need for today's lecture? Yeah. AUDIENCE: I just had a quick question. So if you're going to be greedy with respect to the value function that you evaluate, you can't do that with a value function unless you have a model, right?
So you need a-- RUSS TEDRAKE: That's actually exactly-- you're answering the question that I was asking. That's perfect. So from, as I said, model-free, model-free, model-free, but then I wrote down a model here. So how can I-- even in the steepest descent sort of continuous sense, this is absurd. In the discrete sense, argmin over a is typically done with a search over all actions; in the continuous state and action sense, it would be finding the gradient down the slope. But right. Both of those require a model to actually do that policy [INAUDIBLE]. So the first question for today is, how do we come up with a gradient policy, basically, without any model? [INAUDIBLE] going to say it. [INAUDIBLE] know this, but that's the-- what do you think? [INAUDIBLE] haven't read all the [INAUDIBLE] algorithms. What do you think? What's the-- how could I possibly come up with a new policy without having a model? AUDIENCE: [INAUDIBLE] s n plus [INAUDIBLE] sample directly? RUSS TEDRAKE: Good. You could sample. You can start to do some local search to come up with [INAUDIBLE]. Turns out-- I mean, I didn't actually ask the question in a way that anybody would have answered it in the way I wanted, so. So it turns out if we change the thing we store just a little bit, then it turns out we can do a model-free greedy policy update. OK. So the way we do that is a Q function. We need to define a Q function. It's a lot like a value function. But now it's a function of state and action. And we'll say this is still [INAUDIBLE] this way. OK. So what's a Q function? A Q function is the cost you should expect to take, to incur, given you're in a current state and you take a particular action. So it's a lot like a value function. But now you're actually learning a function over both state and actions. So in any state, Q pi is the cost I should expect to incur given I take action a for one step and I follow a policy pi for the rest of the time. That make sense? So I could have my acrobot controller or something like this. And in a current state, I've got a policy that mostly gets me up, but I'm learning more than just what that policy would do from this state. I'm learning what that policy would have done if I had for one step executed any random action-- for any random action-- and then what would I do from there, given I ran that controller for the rest of it. Algebraically, it's going to make a lot of sense why we would store this. But it's actually interesting to think a little bit about what that Q function should look like. And if you have a Q function, you certainly could also get the value function, because you can look up for a given pi what action that policy would have taken. You can always pull out your current value function from Q. But you can also-- [INAUDIBLE] simple relationship here in the [INAUDIBLE].. And for the optimal [INAUDIBLE],, I should actually do that search over a. I almost wrote minus. That can be your job for the day, make sure I don't flip any signs. OK. We're roboticists in this room. What does it mean to learn a Q function? What are the implications of learning a Q function? Well, I guess I didn't say. So given the Q function pi [INAUDIBLE] having a Q function makes action selection easy. Pi 2 of s is now just a min over a Q pi s and a, where Q pi was [INAUDIBLE] with pi 1. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: It's argmin [INAUDIBLE].. But I was willing to-- the reason to learn a Q function in one case here is that it tells me about the other actions I could have taken.
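The "simple relationship" being gestured at on the board is inaudible here, so the following is a reconstruction of the standard definitions in the lecture's cost-minimizing notation:

$$Q^\pi(s,a) = E\big[\,g(s,a) + \gamma\, J^\pi(s')\,\big], \qquad J^\pi(s) = Q^\pi\big(s,\pi(s)\big), \qquad J^*(s) = \min_a Q^*(s,a),$$

and the model-free improvement step is $\pi_2(s) = \operatorname*{argmin}_a Q^{\pi_1}(s,a)$.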
And if I want to now improve my policy, then I'll just look at my Q function. At every state I'm in, instead of taking the one at pi a, I'll go ahead and take the best one. If pi 1 was optimal, then I would just get back the same policy. But if pi 1 wasn't optimal, then I'll get back something better, given my current estimate of Q. OK. But what does it mean to learn Q? And this is actually all you need to learn to do model-free value [INAUDIBLE] optimal policy. That's actually really big. So it's a little bit more to learn than learning a value function. And you're learning about your [INAUDIBLE].. If I had to learn J pi, how big is that? If I'm going to say I've got n dimensional states and m dimensional u-- I'll just think about these two new cases, even though [INAUDIBLE] this. If I have to learn J pi, how big is that? What's that function mapping? AUDIENCE: [INAUDIBLE] scalar learning. RUSS TEDRAKE: Learning a scalar function over the state space to R1, just learning a scalar function. If I was learning a policy, how big would that be? If I was learning a stationary policy, it might be that. So how bad is it to learn Q? What's Q? AUDIENCE: [INAUDIBLE] asymptote [INAUDIBLE].. RUSS TEDRAKE: Let's keep it a deterministic policy for now. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah. Now I've suddenly got to learn something over-- sorry, [INAUDIBLE] here. AUDIENCE: [INAUDIBLE]. Yeah, there. RUSS TEDRAKE: OK. And for [INAUDIBLE], what would it be to learn a model of the system? If I wanted to use this idea, what's that model? [INTERPOSING VOICES] RUSS TEDRAKE: Yeah. So f then goes from Rn plus m to Rn. So let's just think about how much you have to learn. So the easiness of learning this is not only related to the size. But it does matter. So most of the time, as control guys, as robotics guys we would probably try to learn a model first, and then do model-based control. The last few days I've been saying, let's try to do some things without learning a model. Here's one interesting reason why. It's actually-- learning a model is sort of a tall order. It's a lot to learn, right? You've got to learn from every possible state and action what's my x dot [INAUDIBLE]. This is only learning from every possible state and action. What's the expected cost-to-go [INAUDIBLE]?? [INAUDIBLE] a scalar. So this is learning one algorithm for all m. And the beautiful thing about optimal control, with this sort of additive cost functions and everything like that, the beautiful thing is that this is all you need to know to make optimal decisions. You don't need to know your model. That model is extra information. All you need to know to make optimal decisions, given these additive cost functions [INAUDIBLE] is given [INAUDIBLE] a state and then a given action, how much do I expect to incur cost [INAUDIBLE]?? It's a beautiful thing. So if we make it stochastic, it gets even sort of-- learning a stochastic model, if your dynamics are variable and that's important, you want to do stochastic optimal control. Learning a stochastic model is probably even harder than that. Maybe I have to learn the mean of x dot plus the covariance matrix of x dot or something like this. When I use a stochastic model, it would be even more expensive. Q, in the Q sense-- I left it off in the first pass just to keep it clean, but Q is just going to be the expected value around this. So Q is always going to be a scalar, even in the stochastic optimal control sense.
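In symbols, the accounting being done on the board (with an n-dimensional state and an m-dimensional action, as stated in the dialogue) is:

$$J^\pi:\ \mathbb{R}^n \to \mathbb{R}, \qquad \pi:\ \mathbb{R}^n \to \mathbb{R}^m, \qquad Q^\pi:\ \mathbb{R}^{n+m} \to \mathbb{R}, \qquad f:\ \mathbb{R}^{n+m} \to \mathbb{R}^n .$$

So the Q function is bigger than the value function but still scalar-valued, while the model is a full n-dimensional map.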
So maybe this is the biggest point of optimal control, honestly-- optimal control related to learning-- is that if you're willing to do these additive expected value optimization problems, which I think you've seen lots of interesting problems that fall into that category, then all you need to know to make decisions is to be able to-- the value function, the Q function here. The expected value of future penalties. And for everything else, [INAUDIBLE].. Important point. Now, just to soften it a little bit, in practice, you might not get away with only that. If you have to somehow build an observer to do state estimation or to estimate Q, and you've got-- there might be other reasons floating around in your robot that might require you to learn this. But in a pure sense, that's really what you need to know. AUDIENCE: Hey, Russ? RUSS TEDRAKE: Yeah. AUDIENCE: You put x and u. Shouldn't that be s and-- RUSS TEDRAKE: Right. We could have-- I could have said n is the number of states. AUDIENCE: I just meant, should it be s and a? RUSS TEDRAKE: Right. I would have-- I wrote the dimension of x, and I called it Rn,m. So that's what I meant. If you want to make an analogy back here, then it would actually be just the number of elements in s and a. But I wanted to sort of be a roboticist [INAUDIBLE] for a little bit. AUDIENCE: OK. RUSS TEDRAKE: This is just the computer scientist that did this [INAUDIBLE]. But it does make this easier, so I still [INAUDIBLE].. So I meant to do that. So that [INAUDIBLE]. OK. So now, how do we learn Q? I told you how to learn J. Q looks pretty close to J. How do I learn Q? I told you about temporal difference learning; I probably wouldn't have wasted your time talking about temporal difference learning if it wasn't also relevant for what we needed to do these model-free value methods. So let's just see that temporal difference learning also works for learning these functions. OK? That's just some [INAUDIBLE]. Let's do just a simple case first, where I'm just doing-- remember, TD0 was just bootstrapping. It wasn't carrying around long-term rewards. It was just saying [INAUDIBLE] one step, and then I'm going to use my value estimate for the rest of the time as my new update. And I'll go ahead, since we're-- we talked about last time how a function approximator [INAUDIBLE] reduce it to the Markov chain case, let's just do it like-- let's say [INAUDIBLE] of s is alpha i phi i, linear function approximators. Or we could, in fact, write alpha transpose phi s, a. OK. Then the TD lambda update is going to be-- the TD0 update is just going to be alpha plus gamma-- call that hat just to be careful here-- pi s transpose. These really are supposed to be s n and a n. I get a lot of [INAUDIBLE] for my sloppy [INAUDIBLE].. OK. And Q pi-- or the gradient here in the linear function approximator case, is just phi s, a. So if you look back in your notes, that's exactly what we had before, where this used to be J. We're going to use as our new update-- we're going to say that our new estimate for J is basically the one-step cost plus the long-term look-ahead. a n plus 1 in the case of doing an on-policy-- if I'm just trying to do policy evaluation, it's going to be pi s n plus 1. We'll use that one-step prediction minus my current prediction and try to make that go to 0. And in order to do it in a function approximator sense, that means multiplying that error, the temporal difference error, by the gradient. And that was something like gradient descent on your temporal difference error.
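Here is one way to write that on-policy TD(0) update for a Q function as code-- a minimal sketch, not the course's implementation (the course works in MATLAB), with the feature function phi, the learning rate eta, and the on-policy next action all assumptions of mine:

```python
import numpy as np

def td0_q_update(alpha, phi, s, a, g, s_next, a_next, gamma, eta):
    """One on-policy TD(0) update for a linear approximation Q(s,a) = alpha . phi(s,a).

    phi    : feature function, phi(s, a) -> (k,) numpy vector
    g      : one-step cost incurred at (s, a)
    a_next : pi(s_next), if we're evaluating policy pi
    """
    # Temporal difference error: bootstrapped target minus current prediction
    delta = g + gamma * alpha @ phi(s_next, a_next) - alpha @ phi(s, a)
    # Move Q(s, a) toward the bootstrapped target along the gradient phi(s, a)
    return alpha + eta * delta * phi(s, a)
```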
But not exactly, because it's this whole recursive dependence thing. People get why-- do people get that it's not quite gradient descent but kind of this? This looks a lot like what I would get if I was trying to do gradient descent [INAUDIBLE],, right? But only in the case of TD1 is it actually gradient descent. But normally if I have a y minus f of x, I'm trying to do the gradient with respect to this, I've got to minimize this. And I'll get something-- if I take the gradient with respect to alpha, I get the error alpha x [INAUDIBLE].. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Because what we got here, this is our error. If we assume that this is just my desired and this is my actual, then this is gradient descent. But it's not quite that, because this depends on an alpha-- these all depend on alpha. So by virtue of having this dependence on alpha, it's not exactly a gradient descent algorithm. But it still works. People proved that it works. Is that OK? And actually, in the case where lambda is one, these things actually go through and cancel each other out, and what you have is a gradient descent algorithm [INAUDIBLE]. But I want you to see, this is my error I'm trying to make Q and my current state and action look like my one-step cost plus Q of my next state and action. And I would do that by multiplying my error by my gradient in a gradient descent kind of idea. OK. You can still do TD lambda, if you like, with Q functions also. And the big idea there was to use an eligibility trace, which in the function approximator case, was gamma lambda ei n plus [INAUDIBLE].. And then my update is the same thing-- alpha-- because this is my big temporal difference error. And instead of multiplying by the gradient [INAUDIBLE] this eligibility trace. And magically, through an algebraic trick, remembering the gradients recovers the bootstrapping case when lambda is 0, and the Monte Carlo case when lambda is 1, and something in between when lambda is [INAUDIBLE].. OK. So you'd still do temporal difference there. Big point number two-- big idea number one is we have to use Q functions to do action selection. Big point number two is off-policy policy evaluation. Once we start using Q, you could do this trick that I mentioned when we first started doing value methods. And that is to execute policy pi 1 but learn Q pi 2 [INAUDIBLE].. Can you see how we do that? By virtue of having this extra dimension, we know we're learning-- bless you-- not only what happens when I take policy pi from state s. I'm learning what happens when I take any action in state s. That gives me a lot more power. Because for instance, when I'm making my temporal difference error, I don't necessarily need to use the current policy for my one-step prediction. I can just look up what would policy 2 [INAUDIBLE].. Because I'm storing every state-action pair, it's more to learn, more work. But it means I can say, I'd like my new Q pi 2 to be the one-step cost I got from taking a plus the long-term cost of taking policy pi 2. And then all the same equations play out, and you get-- you get an estimate for policy 2. AUDIENCE: Does it count-- RUSS TEDRAKE: Yeah? AUDIENCE: Does it count more than the first step of the policy 2 and then take your cost-to-go of policy 1? Or does it somehow-- RUSS TEDRAKE: So I can't switch-- we'll talk about whether you can switch halfway through. But once I commit to learning Q pi 2, then actually this whole thing is built up of experience of executing policy 2, even though I've only generated sample paths for policy 1.
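The eligibility trace version of the update, written as a sketch under the same assumptions as above (linear features, names of my own choosing):

```python
import numpy as np

def td_lambda_q_update(alpha, e, phi_sa, phi_next, g, gamma, lam, eta):
    """TD(lambda) with an eligibility trace, for linear Q(s,a) = alpha . phi(s,a).

    e        : eligibility trace vector (same shape as alpha), start at zeros
    phi_sa   : phi(s_n, a_n)
    phi_next : phi(s_{n+1}, a_{n+1})
    """
    delta = g + gamma * alpha @ phi_next - alpha @ phi_sa  # temporal difference error
    e = gamma * lam * e + phi_sa      # decay the trace, then add the current gradient
    alpha = alpha + eta * delta * e   # lam=0 gives TD0; lam=1 gives Monte Carlo
    return alpha, e
```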
So it's a completely consistent estimator of Q pi 2, right? If I halfway through decided I wanted to start evaluating pi 3, then I'm going to have to wait for those to cancel out, or we play some tricks to do that. But it actually recursively builds up an estimate of Q pi 2. AUDIENCE: Can I ask a question? RUSS TEDRAKE: Of course. AUDIENCE: [INAUDIBLE] have that [INAUDIBLE] function like this, we can substitute y-- RUSS TEDRAKE: You're talking about this? AUDIENCE: Yes. You can substitute y by [? g or ?] gamma, and then execute the-- RUSS TEDRAKE: Yeah. AUDIENCE: And then take the derivative? RUSS TEDRAKE: Yes. AUDIENCE: Why [INAUDIBLE]? RUSS TEDRAKE: So why isn't it true gradient descent? That's exactly what I proposed to do. But the only problem is, this isn't what we have. What we actually have is this, which means that this is not the true gradient [INAUDIBLE] term for partial y partial alpha from over here. AUDIENCE: That's what I'm suggesting. So instead of y alpha, we can actually [INAUDIBLE] g plus gamma-- an approximation of y. So this g plus gamma Q is [INAUDIBLE] approximation for y, right? RUSS TEDRAKE: I'm trying to perfectly make the analogy that this looks like that, and this looks like that. AUDIENCE: Right. But when we're taking the derivative from that function, we assume that y is constant. RUSS TEDRAKE: Yes. AUDIENCE: And then solve this. RUSS TEDRAKE: Yes. AUDIENCE: We can actually assume that y is dependent on alpha and take the derivative of that term with respect to alpha as well, and then solve it. RUSS TEDRAKE: Yes. So you could do that. So you're saying why don't we actually have a different update which has the gradient [INAUDIBLE]?? OK, good. So in the case of TD0-- TD1, you actually do have that. And I think that's true. I worked this out a number of years ago. But I think it's true that if you start including that, if you look at the sum over a chain, for this standard update with TD0, for instance, that those terms, this term now will actually cancel itself out with this term here, for instance. It doesn't work. It doesn't work nicely. It would give you-- it gives you back the Monte Carlo error. It doesn't do temporal difference. It doesn't do the bootstrapping. So basically, you start including that, then you do get a least squares algorithm, of course. But it's effectively doing Monte Carlo. You have to sort of ignore that to do temporal difference learning. You're actually saying, I'm going to believe this estimate in order to do that. OK? Temporal difference, if you actually want to prove any of these things, I have one example of it in the notes. I think that I put "TD1 is gradient descent" in the notes, just so you see an example. The story-- the rule of the game in temporal difference learning derivations and proofs is you start expanding these sums, and terms from time n and terms from time n plus 1 cancel each other out in a gradient way. And you're left with something much more compact. That's why [? everybody ?] calls it an algebraic trick, why these things work. But because these are not random samples drawn one at a time, they're actually directly related to each other, that's why it makes it more complicated. OK. So we said off-policy evaluation says, execute policy pi 1-- pi 1 generates s n, a n trajectories. But you're going to do the update alpha plus-- I'm going to just write quickly here-- g s, a plus gamma Q pi-- this is going to be estimator Q pi 2-- s n plus 1 pi 2. What would pi 2 have done in s n plus 1?
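For reference, the distinction this exchange is circling can be written out (a reconstruction, since the board work is inaudible). With TD error

$$\delta_n = g(s_n,a_n) + \gamma\, \hat Q_\alpha(s_{n+1},a_{n+1}) - \hat Q_\alpha(s_n,a_n),$$

the true gradient of $\tfrac12\delta_n^2$ with respect to $\alpha$ is $\delta_n\big(\gamma\,\nabla_\alpha \hat Q(s_{n+1},a_{n+1}) - \nabla_\alpha \hat Q(s_n,a_n)\big)$, because the bootstrapped target also depends on $\alpha$. TD(0) keeps only the second term-- it treats the target as a constant-- giving the semi-gradient update $\alpha \leftarrow \alpha + \eta\,\delta_n\,\nabla_\alpha \hat Q(s_n,a_n)$. Keeping both terms, as the questioner suggests, effectively collapses back to the Monte Carlo error, as stated above.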
In general, we'll multiply it by [INAUDIBLE].. That's a really, really nice trick. Let's learn about policy 1-- or policy 2 while we execute policy 1. OK. So what policy 2 should we learn about? And again, these are-- I'm asking the questions in bizarre ways, and there's a specific answer. But [INAUDIBLE] ask that question. Our ultimate goal is not to learn about some arbitrary pi 2. I want to learn about the optimal policy. I don't have the optimal policy. But I have an estimate of it. So actually, a perfectly reasonable update to do, and the way you might describe it is, let's execute-- I'm putting it in quotes, because it's not entirely accurate, but it's the right idea-- execute policy 1 but learn about the optimal policy. And how would we do that? Well-- this is now my shorthand Q star here. Estimate of Q star at s n plus 1-- I should have-- It makes total sense. Might as well, as I'm learning, always try to learn about the policy which is optimal with respect to my current estimate J [? hat. ?] And this algorithm is called Q-learning. OK. It's the crown jewel of the value-based methods [INAUDIBLE]. I would say it was the gold standard until probably about [? '90-- ?] something like that. When people started to do policy gradient stuff more often. Even probably halfway through the '90s, people were still mostly [INAUDIBLE] papers about Q-learning. And there was a movement in policy gradient. AUDIENCE: So is your current estimate Q star not based on pi 1? RUSS TEDRAKE: It is based on data from pi 1. But if I always make my update, making it this update, then it really is learning about pi 2. AUDIENCE: Isn't pi 2 what you're computing with this update? RUSS TEDRAKE: Good. There's a couple of ways that I can do this. So in the policy-- in the simple policy iteration, we use [INAUDIBLE] evaluate for a long time, and then you make an update, you evaluate for a long time, and you make an update. AUDIENCE: This is dynamically-- RUSS TEDRAKE: This is always sort of updating, right? [INAUDIBLE] And you can prove that it's still a sound algorithm despite [INAUDIBLE]. This is always sort of updating its policy as it goes. Compared to this, which is more of the-- learn about pi 2 for a while, stop, [INAUDIBLE] pi 3 for a while, stop, this is trying to go straight to pi star. OK, good. So what is it-- what's required for a Q-learning algorithm to converge? So even for this algorithm to converge, in order for pi 1 to really teach me everything there is to know about pi 2, there's some important feature, which is that pi 1 and pi 2 had better pick the same actions with some nonzero probability. So off-policy works. Let's just even think about-- I'll even [INAUDIBLE] first in the discrete state and discrete actions and the Markov-- MDP formulations. Off-policy works if pi 1 takes in general all state-action pairs with some small probability. If pi 2 took action [INAUDIBLE] state 1 and pi 1 never did, there's no way I'm going to learn really what pi 2 is all about. [INAUDIBLE] show you those two [INAUDIBLE].. OK. So how do you do that? If you just-- if you're thinking about greedy policies on a robot, and you've got your current estimate of the value, and you do the most aggressive action on the acrobot, I'll tell you what's going to happen. You're going to visit the states near the bottom, and you start learning a lot. And you're never going to visit the states up at the top when you're learning. So how are you going to get around that on the acrobot? And the acrobot is tough, actually.
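A tabular sketch of the Q-learning update just described-- the variable names are mine, not the lecture's, and Q is assumed to be a numpy array indexed by state and action:

```python
def q_learning_update(Q, s, a, g, s_next, gamma, eta):
    """Tabular Q-learning: bootstrap with the greedy (min-cost) action at s_next,
    regardless of which behavior policy generated (s, a, g, s_next)."""
    target = g + gamma * Q[s_next].min()   # learn about the currently-greedy policy
    Q[s, a] += eta * (target - Q[s, a])
    return Q
```

The min over the next state's row is what makes it off-policy: the data can come from any exploring policy, but the estimate chases the greedy one.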
But the idea is, you'd better add some randomness so that you explore more and more states and actions. And the hope is that if you add enough for a long enough time, you're going to learn better and better policies, you're going to find your way up to the top. So the acrobot is actually almost as hard as it gets with these things, where you really have to find your way into this region to learn about the region. In fact, my beef with the reinforcement learning community is that they learn only to swing up to some threshold. They never actually [INAUDIBLE] to the top. If you look, there's lots of papers, countless papers, written about reinforcement learning, Q-learning for the acrobot and things like this, and they never actually solve the problem. They just try to [INAUDIBLE] at the top. But they don't do it. They just get up this high. Because it is sort of a tough case. So how do you do this? So you get-- like I said, in order to start exploring that space, you'd better add some randomness. So one of the standard approaches is to use epsilon greedy algorithms. So I said, let's make pi 2 exactly the minimizing thing. That's true. But if you execute that, you're probably going to [INAUDIBLE] much better to execute a policy pi epsilon of s, where the-- let's say the policy I care about, I'm going to execute with probability-- sort of, you flip a coin, you pull a random number between 0 and 1. If it's greater than epsilon, then go ahead and execute the policy you're trying to learn about, but execute some random action otherwise. OK. So every time I-- every dt I'm going to flip a coin-- well, not a coin. A hundred-sided coin, a 0 to 1, a continuous thing. If it comes out less than epsilon, I'm going to do a random action. Just forget about my current policy, pick a random action. It's a uniform distribution over actions. Otherwise, I'll take this, the action from my policy. And the virtue of having a soft policy is I can still learn about pi 2, even if I'm taking this pi epsilon. But I have the advantage of exploring all the state-actions. Good. I'm missing a page. AUDIENCE: Is that the most randomness you can produce since they'll [INAUDIBLE] converge? RUSS TEDRAKE: There's a couple of different candidates. The softmax is another one that people use a lot. And a lot of-- I mean, in the off-policy sense, it's actually quite robust. So a lot of people talk about using a behavioral policy, which is just sort of something to try-- it's designed to explore the state space. Actually, my candidate for a behavioral policy is something like RRT. We should really try to do something that gets me into all areas of state space, for instance. And then maybe that's a good way to design, to sample these state-action pairs. And all the while, I try to learn about pi 2. So it is robust in that sense. When I say it works here, I have to be a little careful. This is only for the MDP case that it's really guaranteed to work. There's more recent work doing off-policy with function approximators. And you can do that. I don't want to bury you guys with random detail. But you can do off-policy with linear function approximators safely, using an importance sampling [INAUDIBLE] when you're dealing [INAUDIBLE]. And that's work by Doina Precup. The basic idea is you have to-- if your policy is changing, like these things, it's changing over time, you'd better weight your updates based on the relative-- AUDIENCE: [INAUDIBLE] necessarily.
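A minimal sketch of the epsilon-greedy selection rule just described, again with names of my own choosing and a numpy Q table assumed:

```python
import numpy as np

def epsilon_greedy(Q, s, epsilon, rng):
    """With probability epsilon take a uniform random action; otherwise take
    the greedy (minimum-cost) action from the Q table."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))   # explore: uniform over actions
    return int(np.argmin(Q[s]))                # exploit the current estimate

# e.g. rng = np.random.default_rng(0); a = epsilon_greedy(Q, s, 0.1, rng)
```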
But that's-- RUSS TEDRAKE: But what you're learning about is the state-- the probability of picking this action for one step and then executing pi 2. And that's still [INAUDIBLE] even if pi 2 would never take that action. AUDIENCE: Oh, yeah. Because it's-- OK. RUSS TEDRAKE: So I think it's good. The thing that has to happen is that pi 2 has to be well-defined for every possible state. AUDIENCE: OK. So keeping the Q pi 2 is take a certain [INAUDIBLE] take certain action, then [INAUDIBLE] pi [INAUDIBLE].. RUSS TEDRAKE: Yes. AUDIENCE: OK. Sorry, I lost that [INAUDIBLE]. RUSS TEDRAKE: OK, good. Sorry. Thank you for clarifying it. Yeah, so cool. So I think that still works. OK. So this is good. So let me tell you where we are so far. We've now switched from doing temporal difference learning on value functions to temporal difference learning on Q functions. And a major thing we got out of that was that we can do this off-policy learning. You put it all together. [INAUDIBLE] back into my policy iteration diagram, and what we have, we've defined the policy evaluation, that's the TD lambda. We defined our update, which could be this in the general sense. This one is-- if I used pi 1 again, if I really did on-policy, if I used pi 1 everywhere while I'm executing pi 1, then this would be called SARSA, [INAUDIBLE] sort of on-policy Q-learning, on-policy updating. And Q-learning is this, where you use the min. And what we know, what people have proven, the algorithms were in use for years and years and years before it was actually proven, even in the tabular case, where you have finite states and actions. But now we know that this thing is guaranteed to converge to the optimal policy, that policy iteration, even if it's updated at every step, is going to converge to the optimal policy and the optimal Q function, given that all state-action pairs are [INAUDIBLE] in the tabular case. If we go to function approximation, if you just do policy evaluation but not update, then we have an example where this is actually in '02 or something like that. It'd be '01 or '02, 2001. We finally proved that off-policy with linear function approximation would converge when the policy is not changing-- no control. So the thing I need to give you before we consider this a complete story here is, can we do off-policy learning? Can we do our policy improvement, update stably with function approximation? And the algorithm that we have for that is our least squares policy iteration. Do you remember least squares temporal difference learning? Sort of the idea was that if we look at the stationary update-- maybe I should write it down again. [INAUDIBLE] find it. If I look at the stationary update, if I were to run an entire batch of-- I mean, the big idea when you're doing the least squares is that we're going to try to reuse old data. We're not going to just make a single update, spit it out, throw it away. We're going to remember a bunch of old state-action pairs, trying to make a least squares update with just the same thing as [INAUDIBLE]. In the Monte Carlo sense, it's easy. It's just function approximation, whereas with this TD term floating around it's harder. So we had to come up with least squares temporal difference learning. And in the LSTD case, the story was we could build up a matrix using something that looked like phi of s gamma phi transpose-- so it's ik. Let me just write ik in here-- ik plus 1 minus phi of ik, times my parameter vector.
And b was a sum of these terms, which were phi of ik times our reward-- And if I did this least squares solution-- or I could invert that carefully with SVD or something like that-- then what I get out is the-- it jumps immediately to the steady-state solution of TD lambda. So this is essentially the big piece of the TD lambda update broken into the part that depends on alpha, the part that doesn't depend on alpha. I could write my batch TD lambda update as alpha equals alpha plus gamma a alpha plus b. And I could solve that at steady-state [INAUDIBLE].. All right. So least squares policy, least squares temporal difference learning, is about reusing lots of trajectories to make a single update that's going to jump right to where the temporal difference learning would have gotten if we just replayed it a bunch of times. Now the question is, the policy is going to be moving while we're doing this. How can we do this sort of least squares method to do the policy iteration up here? Again, the trick is pretty simple. We've just got to learn the Q function instead, [INAUDIBLE] biggest trick. So in order to do control-- not just evaluate a single policy but actually try to find the optimal policy-- the first thing we have to do is figure out how to do LSTD on a Q function. And it turns out it's no-- yeah, what's up? [INAUDIBLE] It turns out if you keep along, [INAUDIBLE] exactly the same form as we did in least squares temporal difference learning, but now we do everything with functions of s and a. [INAUDIBLE] transpose on it. Now I do the-- you said form of the [INAUDIBLE] too much. I just want you to know the big idea here. Then I do gamma [INAUDIBLE] a inverse b, this whole time we're representing our Q function. Q hat s, a is now a linear combination of nonlinear basis functions on s and a. AUDIENCE: Shouldn't that be transpose [INAUDIBLE]?? RUSS TEDRAKE: I put-- I tried to put a transpose with my poorly-- throughout everything. So you're saying this one shouldn't be transpose? AUDIENCE: [INAUDIBLE] should be a [INAUDIBLE]?? RUSS TEDRAKE: Yeah. Good. So I'm going to-- but this one, I wrote this whole update as the transpose of the other-- of what I just wrote over there. AUDIENCE: [INAUDIBLE] write alpha? [INAUDIBLE] If it was a plus b some stuff? RUSS TEDRAKE: Yeah. And there's an alpha, sorry. Thank you. Well, actually, it's-- the alpha is really not there. Yeah, I should have written it here. It's a times alpha. That's what it makes the update on. So we get [INAUDIBLE] alpha. So that is actually [INAUDIBLE]. This is the one I did [INAUDIBLE].. OK, good. So it turns out you can learn a Q function just like you can learn a value function, with storing up these matrices, which are what TD learning would have done in a batch sense, and then just taking a one-step shot to get directly to the solution for temporal difference learning for the Q function. And again, I put in here s prime. And I left this a little bit ambiguous. So I can evaluate any policy by just putting in that policy in here and doing the replay. And it turns out if I now do this in a policy iteration sense, LSPI-- Least Squares Policy Iteration-- basically, you start off with an initial guess, we do LSTDQ [INAUDIBLE] to get Q pi 1. And then you repeat, yeah? Then this thing, that's enough to get you to-- it converges. Now, be careful about how it converges. It converges with some error bound to pi star Q star. The error bound depends on a couple of parameters.
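A sketch of LSTDQ and the LSPI loop being described-- assumptions of mine: discrete actions, a stored list of (s, a, g, s') tapes, and linear features. Note it uses the equivalent convention A = sum of phi (phi - gamma phi')-transpose so that alpha = A^{-1} b comes out directly, with the signs from the board's steady-state condition folded in.

```python
import numpy as np

def lstdq(data, phi, policy, gamma):
    """Least squares TD for a Q function: solve A alpha = b from stored tapes.

    data   : list of (s, a, g, s_next) transitions from any exploring policy
    phi    : feature function, phi(s, a) -> (k,) vector
    policy : the policy being evaluated, used only through policy(s_next)
    """
    k = len(phi(data[0][0], data[0][1]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, g, s_next in data:
        f = phi(s, a)
        A += np.outer(f, f - gamma * phi(s_next, policy(s_next)))
        b += f * g
    return np.linalg.lstsq(A, b, rcond=None)[0]  # alpha, with Qhat = alpha . phi

def lspi(data, phi, n_actions, gamma, iters=20):
    """Least squares policy iteration: replay the same tapes as if they had
    been generated by each new greedy policy."""
    alpha = np.zeros(len(phi(data[0][0], data[0][1])))
    def policy(s):  # greedy with respect to the current Q estimate
        return int(np.argmin([alpha @ phi(s, a) for a in range(n_actions)]))
    for _ in range(iters):
        alpha = lstdq(data, phi, policy, gamma)  # re-evaluate the greedy policy
    return alpha, policy
```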
So technically, it could be close to your solution and oscillate, or something like that. But it's a pretty strong convergence result for policy improvement with an approximate value function. In a pure sense, it is-- you should run this for a while until you get a good estimate for LSTD and you get your new Q pi. But by virtue of using Q functions, when you do switch to your new policy, pi 2, let's say, you don't have to throw away all your old data. You just take your old tapes and actually regenerate a and b as if you had played off those old tapes with-- as if you had seen the old tapes executing the new policy. And you can reuse all your old data and make an efficient update to get Q pi [INAUDIBLE].. Least squares policy iteration. Pretty simple. OK. I know that was a little dry and a little bit-- and a lot. But let's make sure we know how we got where we got. So there's another route besides pure policy search to do model-free learning. All you have to do is take a bunch of trajectories, learn a value function for those trajectories. You don't even actually have to take the perfect-- your best controller yet. You could take some RRT controller, something that's going to explore the space and try to learn about your value function. Learn Q pi through these LSTD algorithms, you could do a pretty efficient update for doing Q pi. You can improve efficiently by just looking over the min over Q, and pretty quickly iterate to an optimal policy and optimal value function, only storing-- the only thing you have to store in that whole process is the Q function. And in the LS case, the LSTD case, you remember the tape-- the history of tapes-- you just use them more efficiently. Yeah? AUDIENCE: So could you have used this on the flapper that John showed? Or what's the-- RUSS TEDRAKE: Good. Very good question. That's an excellent question. So in fact, the last day of class, what we're going to do is going to-- the last day I present in class, we're going to-- I'm going to go through a couple of sort of case studies and different problems that people have had success on, and tell you why we picked the algorithm we picked, things like that. So why didn't we do this on a flapper? The simplest reason is that we don't know the state space. It's infinite dimensional in general. So that would have been a big thing to represent a Q function for. It doesn't mean-- it doesn't make it invalid. We could have learned, we could have tried to approximate the state space with even a handful of features, learned a very approximate Q function, and done something like actor-critic like we're going to do next time. But I think in cases where you don't know the state space, or the state space is very, very large, and you can write a simple controller, then it makes more sense to parameterize the policy. It really goes down to that game, that accounting game, in some ways, of how many dimensions things are. But in a fluids case, you could have a pretty simple policy from sensors to actions which we could twiddle. We couldn't have an efficient value function. Now, there are other cases where the opposite is true. The opposite is true, where you have a small state space, let's say, but the resulting policies would require a lot of features to parameterize. But I think in general, the strength of these algorithms is that they are efficient with reusing data. The weakness is that-- well, the weakness a few years ago would have been that they'd blow up big time.
But algorithms have gotten better as we [INAUDIBLE] we have some convergence guarantees. Not the general [INAUDIBLE]. I never told you that it converged if you have a nonlinear function approximator. We'd love to have that result [INAUDIBLE].. We won't have it for a while. But in the linear function approximator sense, we have both. But there's a lot of success stories. These are the kind of algorithms that were used to play backgammon. There are examples of them working on things like the [INAUDIBLE]. But in the domains that I care about most in my lab, we tend to do more policy gradient sort of things. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_1_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK. Welcome to Underactuated Robotics. I'm glad you're all here. Can I get a quick show of hands, actually, who considers themselves hardcore robotics types? Don't be shy. It's a good thing to do. Who's here to see what this is all about? OK. Good, good. So I've taught this course twice before. This is the first time that it's a TQE course for Area 2. So I'm hoping to excite a slightly more general audience. And your responsibility, then, is to ask me questions if I say things that assume prior knowledge that people don't have, and make sure that everything is coming across. The only real prereqs that I assume for the course are basically some comfort with differential equations and Ordinary Differential Equations, ODEs. I assume some comfort with linear algebra. And we use a lot of MATLAB, so it helps if you know MATLAB. What I don't assume is that you're-- I mean, it's great if you have taken previous robotics courses. That's certainly a good thing. But I'm not going to assume everybody knows how to crank out the kinematics or dynamics of rigid body manipulators and things like that. So hopefully, if you've got a background in here, then everything else should follow. And the course has got a set of course notes that will be published on the website just after the lectures. And they should pretty much contain what you need to know, or at least have links to the things you need to know. OK. So today is the warm-up. I want to start with some motivation. I want to make sure everybody leaves understanding the title of the course, if nothing else. So I want to give you some motivation for why I think underactuated robotics is such an important topic in robotics. I'm going to give you some basic definitions, including the definition of underactuated. I'm going to go through an example of working out the equations of motion for a simple system, as a review of dynamics. And then I'm going to tell you quickly, in the last minutes, everything else you're going to learn in the course. And then we'll go through that more slowly in the next 25 lectures. OK. So let me actually start with some motivation. That involves plugging in this guy. OK. How many people have seen ASIMO before? Good. OK. So ASIMO-- let's watch the little promotional video for Honda's ASIMO. This is a few years old now. But ASIMO absolutely is the pinnacle of robotics engineering over the last 20 years, I'd say, even. So Honda, turns out, without telling anybody, was working on walking robots from the early '80s till they announced to the world in 1997 that they'd started building these walking robots. And they started putting on these shows where they'd kick soccer balls and walk up stairs. It's absolutely, absolutely amazing. I mean, this is it. This is what we've been waiting for in robotics for a very long time. This is a humanoid robot absolutely doing fantastic things. It's a marvel of engineering. The precision, the amount of computation going on in there-- it's something we've been waiting a long time for. OK. But let's watch it again and be a little more critical this time.
So what's wrong with ASIMO when it's walking? OK, it looks a little stiff. It looks like a guy in a space suit. If you look carefully, you'll notice that it's always got one foot flat on the ground. That doesn't look quite right. OK, well, we'll forgive the goalie. He's work in progress, I guess. But the whole thing just looks a little bit like a machine that's, let me say, not comfortable with its own dynamics. It's taking a very conservative approach. The fact that it can go upstairs is really remarkable. It does have to know exactly where those stairs are and the geometry of the stairs. But you see, it's taking a very, very conservative approach to walking. It's always walking with its knees bent, its feet flat on the ground, and has this rigid astronaut walk to it. OK. Why is that bad? It's bad because it requires a lot of energy, first of all. So just imagine walking around with your knees bent all the time. It turns out, ASIMO uses 20 times as much energy if you scale out mass and everything as a human does when it walks-- 20 times. And that makes a practical difference because the batteries it's got in its belly only last 26 minutes, and it's a lot of batteries. It matters because it's walking-- because of this very conservative approach to walking, it's walking a lot slower than you or I would. They actually have a top running speed for ASIMO. It's six kilometers an hour. But that's a little bit below where you or I would sort of comfortably transition from walking to running if we were just going down the street. So it's considerably slower than a human when it runs. And although there's some really amazing videos of walking on stairs and things like that, the videos you won't see are the ones of it walking on terrain it doesn't know everything about or even uneven terrain. It doesn't do uneven terrain particularly well. OK. So in some ways, ASIMO is the very natural progression of what robotic arm technology, which started in factory room floors, matured into a walking robot. OK. It's a very high-gain-- we'll talk a lot about what that means. It's a very high-gain system. It's using a lot of energy and feedback in order to try to rigidly follow some trajectory that it's thought a lot about. And it's doing that in a very conservative regime-- feet flat on the ground, knees bent. So there's a different approach to walking out there. This one was built by Cornell. It's called a passive dynamic walker. It's almost not a robot whatsoever. It's a bunch of sticks and hinges with ball bearings at the joints. But if you put this thing on a small ramp going down and give it a little push, look what it can do. That's just falling down a ramp-- completely passive machine powered only by gravity. So these passive walkers are a fantastic demo. I mean, it's unbelievable that they can build these things that walk. It's a glorified slinky. But it's walking like you and me, right? Probably more so. Most people would say that looks a little bit more like the way we walk than ASIMO does. But what's really amazing about it is it says that this really conservative, high-gain, feet flat on the ground approach to walking, it's certainly not a necessary one. And it suggests that if you really want to do some energy-efficient walking, maybe even more robust walking, then what you need to do is not cancel out your dynamics with high-gain feedback and follow some trajectory. You need to think a lot about the dynamics of your robot.
So this is just the dynamics of the robot doing all the work-- no control system, no computers, nothing. OK. So that's a story in talking about why maybe dynamics matter a lot. And we should really start by understanding the dynamics before we do a whole bunch of robotic arm control. It's actually true in a lot of different fields. My other favorite story these days is flying things. So if you look at state-of-the-art military aircraft, this is an F-14 landing on an aircraft carrier. Even in the most highly engineered control systems we have, it tends to be that aircraft stay in a very, very conservative flight envelope. The same way that ASIMO stays with its feet flat on the ground and does this really stiff control, the airplane stays at a very low angle relative to the oncoming flow and uses high-gain control in order to stabilize it. So fighter jets-- you might know, modern fighter jets tend to be passively unstable, and control systems are doing amazing things to make these guys work. But they're not doing some basic things that you can see every time you look out your window. So here's a cardinal doing sort of the same thing-- landing on an aircraft carrier, landing on a branch-- about the same thing. But unlike the airplane, the cardinal's got his wings spread out way out to here. And what that means, if you know anything about aerodynamics, if you take a wing and the airflow is moving this way, and you have it at a low angle relative to the oncoming flow, then you have a very simple attached flow on the back of the wings. And it turns out that linear control and linear models of the dynamics work pretty well in that regime. If you go up to a small angle of attack, like the fighter jet's doing, then the air can bend around the wing. Everything still stays attached, they say, to the wing. And you can still do linear control ideas. OK. But if you go up and stall your wings out-- that's what's happening here. If you go up to a higher angle of attack, the air can no longer bend around the wing fast enough. Something more dramatic happens. You get a big vortex in this picture. And what happens is the flow gets much more unsteady. It starts shedding vortices. And you start stalling your wing. Now in these regimes so far, the dynamics have proven very, very complicated to even understand, even model, and considerably harder to control. The bird's doing it every day of the week. Somehow, we don't know exactly how, but he does it all the time. And he does it with gusty environments. He does it when the branch is moving. He probably misses every once in a while. But he's doing a pretty darn good job. There's a reason why he does it too, right? It's not just to show off or something. But if you are willing to go into this more complicated flight regime by stalling your wings, and if your goal is stopping, then, actually, you get a huge benefit by going up to that high angle of attack. Not only do you get more drag just because you have more surface area exposed to the flow, but when you start getting separation on the back of your wing, you leave a pocket of low pressure behind the wing. The air can't quite come in and fill in the space right behind the wing. And that acts like an air brake. It's called pressure drag. So the birds-- the planes are coming into this conservative approach in order to maintain control authority.
The birds are going [GRUNTS],, hitting the air brakes and coming to still somehow doing enough control to hit that perch, which is kinematically more difficult, I think, than even hitting an aircraft carrier. All right. So In my group, we've been working on trying to make planes do that kind of thing. So this is our airplane that comes in and tries to execute a very high angle of attack maneuver in order to land on a perch. It's a slow-motion shot, slowed down so you can see what happens. And actually, just to convince you that the flow is complicated. This isn't our best flow visualization, but it shows you what's going on here. The airflow, as it comes in, this is now that same plane with-- we're emitting smoke from the front of the wing, the leading edge of the wing. And it comes in at a low angle of attack. And you can see the air is mostly-- it's actually already stalled because it's a flat plate. But the air is on top of the wing in the same way I showed you in that picture. And as you go up to a high angle of attack, you get this big tip vortex that rolls over. Everything gets a lot more complicated. So the point I'm trying to make with these two examples are first of all, robots today are just really, really conservative. Dynamically, they're very, very conservative. They're operating at just a fraction of the level of performance that they should already expect given the same mechanical design. With the same mechanical design, a simple little plane but better control, we can start doing things that look more like birds. And the second point that I'm going to make more formally in a minute is that underactuated systems, underactuated robotics is essentially the art, the science of trying to build machines which use their dynamics more cleverly instead of trying to build control systems which use actuation motor output in order to override the dynamics. With an underactuated system, we're going to just push and pull on the natural dynamics and try to do these more exciting dynamic things. As a consequence, we need to do smarter control. This is a computer science course. So have I said anything to do with computer science yet? I believe that there's new techniques from computer science, machine learning, motion planning that are basically changing the game. It's allowing us to solve some of these control problems that haven't been solved before. Just to throw a few more cool examples out-- so if we're willing to do more clever things with our dynamics, then there's just countless things that we can do. So if you just care about efficiency-- this is a wandering albatross. If you just measure the energy in some metabolic estimate of the energy it consumes versus the distance it travels, then it's about the same as a 747, which is actually cool because they're quite different sizes. But if you just do some dimensionless cost of transport, it actually works out to be almost the same efficiency as a 747. So maybe we haven't gained anything. But it turns out if you look more carefully, the albatross uses the same amount of energy when it's sitting on the beach as it does when it's flying across the ocean. So this guy can fly for hours, days without ever flapping its wings. Go across the entire ocean that way, into the wind, out of the wind-- you name it-- because they're sitting there and they're just riding on gradients due to the wind over the ocean. So the energetic cost this guy's experiencing is just digestion and things like that.
He's actually not doing hardly any mechanical work in order to fly across the ocean. OK. If you care about maneuverability, you know, so falcons have been recently clocked diving at 240 miles an hour. This one is pulling out of that 240-mile-an-hour dive to catch a sparrow. That's pretty good. I mean, planes-- in terms of sheer speed, planes are going to win every day of the week. But if you look at some sort of dimensionless aspect of maneuverability, then birds are still the masters of the sky. Bats can actually be flying full speed this way in two flaps, which turns out to be just under half of the wingspan, they can be flying full-speed the other way-- two to five flaps. These guys at Brown have been recording this. Obviously, bats are actually some of the most maneuverable. They can fly at high speeds through thick rain forests. They can fly in caves with 1,000 other bats on top of them. At least, that's what I get out of the movies. [LAUGHTER] And they're doing all these just incredibly dynamic things in ways that our control systems can't even come close to right now. And this is one of my favorite videos of all time. Again, the story about efficiency. This is a fish now, not a bird. But it's almost the same thing. They're both operating in a fluid. This is a rainbow trout. So the rainbow trout are the fish that swim upstream at mating season. So every year, they make their march up the streams. It turns out if you watch these rainbow trout, they tend to hang out behind rocks. Someone thought, it seems like tiring work going upstream-- maybe there's something clever going on when they're hanging out behind these rocks. So what they did is they took that rainbow trout out, and they put it in a water tunnel. And you're looking at a view from above of the fish swimming in the water tunnel. This is George Lauder's lab at Harvard. And that's what it looks like when it's just swimming in open water. If you take that same fish, put it in the water tunnel but put a rock in front of it-- now if you've looked at rocks in a river, behind a rock, you'll get some swirling vortices, some eddies. The fish behind the rock just completely changes the kinematics of its gait. So that's suggestive that there's something clever going on here. But this is the clincher here. The dynamics matter if you're a fish. This is now a dead fish. It's a dead fish. There's a piece of string making sure it doesn't go back and get caught in the grates. That would be messy. But it's slack. As it's moving around, you'll see when the string is catching. This is our rock now. It's a half cylinder. The water's going this way. There's going to be swirling vortices off the back of that rock. Let's see what happens if you put a dead fish behind the rock. So the vortices are knocking the fish around. That's not too surprising. What's really cool is when the dead fish starts swimming upstream. That's pretty good. So the water's going that way, and the fish just went that way. And it's dead. So dynamics matter if you're a fish. And if you care about birds-- mechanically, we're capable now of building robotic birds. This is our best copy of a hobby ornithopter. But we can build birds that fly around with Derek at the controls. But if you asked me how to control this bird to make it land on a perch, pull energy out of the air, we haven't got a clue. We're working on it. We haven't got a clue. Yeah, that hit a tree. And Derek recovered. We've got the first flight of our big 2-meter wingspan. This is an autonomous flight.
You can tell it's autonomous because it flies straight ahead and about runs into the building, and then we hit the brakes, and it has to go in and hit the trash can. But mechanically, we're close to where we want to be to replicate some of nature's machines. We've got a long way to go. But really, we're at the point where we have to figure out how to take these magnificent dynamical machines and control them. And that's what the course is about. OK. So let's get a little bit more careful in what I mean by underactuated systems. So we'll give you some motivation. We're going to try to make robots that run like humans-- run like gazelles, swim like dead fish, and fly like that falcon that comes down-- so not a tall order at all, right? So in order to start doing that, let's start by just defining what it means to be underactuated. Let me ground things in an example. Let's take a two-link robotic arm. I'm going to describe the state of this system with theta 1 and a relative angle here, theta 2. And let me quickly draw that twice as big. We'll parameterize it here with the-- that'll be L1, the length. This will be L2 here. So we call this mass 1, mass 2. So we'll assume it's a massless rod with a point mass at the end, just to keep the equations a little cleaner for today. And it's got two angles that I care about. And there's a couple lengths at play. So throughout the class, I'm going to use the convention that q is a vector of the joint angles or the coordinates of the robot. In this case, it's going to be theta 1 and theta 2. If I have motors on the robot, let's say, I can apply a torque here because I have a motor at the elbow, maybe a torque here. So I'll call this torque 1, this torque 2. I'm going to call that u, vector u. That's all the inputs to the system. So this is the joint coordinates. These are the control inputs. OK. So it turns out if you want to write out the dynamics of this system, if you want to be able to, say, simulate the way that this pendulum moves, well, most of the robots we care about in this class are second-order. So everything's a mechanical system. So we've got F equals ma governing everything. In this case, a is going to be the second derivative of q. So what I really have-- I should also say that q-dot is going to be the joint velocities. And q-double-dot is the joint accelerations. So if I want to describe the motion of these systems, of this kind of a robot, then what I need is to find an equation that tells me the acceleration of that system based on the current position, velocity of the robot, and the current control inputs. If we're living in second-order mechanical systems world, then that means I'm looking for a governing equation-- equations of motion of the form q-double-dot is some nonlinear function f of q, q-dot, u, potentially time too, if it's a time-varying dynamics, if there's something else going on clocked by time. So basically, the entire class, we're going to be looking at second-order systems governed by some nonlinear equations like this. It turns out that actually most of the robots we care about, there's even a simpler form. Turns out we can almost always find-- it's not always-- but for many robots, we find that the equations of motion are actually linear in u. If I'm going to put in torques or something to my robot, it turns out that the way that the torques affect accelerations is linear in the input. So let me write a second form, which takes advantage of that observation. OK. So I've said almost nothing here.
I'm just saying there's some nonlinear terms that depend on the current state, and velocity, and time. There's some other nonlinear terms that get multiplied by u. But the only power in this is that I'm saying that the whole thing is linear in u. And it turns out to be-- I'll convince you as we go forward that that's true. OK. So here's our-- we're finally grounding everything here. Let me tell you what fully actuated means. Just think about what this equation is too. So q is a vector. Q-double-dot is also a vector. In this case, q was a 2 by 1 vector. Q-dot then is also a 2 by 1 vector. Q-double-dot is also a 2 by 1 vector. So this is a vector equation of a 2 by 1 vector in this case. This is some vector-- 2 by 1 vector. In my case, I had also two control inputs. So this is also a 2 by 1 vector, which means what is this thing going to be? That's going to be a 2 by 2 matrix in that example. What matters, what makes life dramatically easier, and what most robots today have really assumed, is that F2 is full rank. So a robot of this form is fully actuated if the rank of F2, q, q-dot, and time is equal to-- I'll write the dimension of q here-- it's full rank. OK. Why does that matter? What does that mean first? What it means is that if I know F1 and F2, then I can use u to do anything I want to q-double-dot. OK. I'll show you that. I can say that right now pretty explicitly. So let's say I know exactly what F1 and F2 are. Let's say I choose a control law. I want to choose u as some function-- I'll just call it pi-- of q, q-dot, and time. So I want to come up with a controller which looks at my positions, and my velocities, what time it is, and comes up with a torque. Let's say I did F2 inverse of q, q-dot, and time, times negative F1 of q, q-dot, time, plus, I don't know, some other controller input I want-- u-prime, let's call it. So if the rank of F2-- if it's full rank, if it's got the same rank as the dimension of q, then that means that this F2 inverse exists. And I think that if you plug this in for u right there, what you're going to see is that this cancels out this. If I did it right, then, and this cancels out this. And what I'm left with is a simple system now. Q-double-dot equals u-prime. AUDIENCE: Shouldn't that be q-double-dot [INAUDIBLE]?? RUSS TEDRAKE: Where do you want q-double-dot? AUDIENCE: Is that u? RUSS TEDRAKE: This is u-prime. So just some other u. So what I'm trying to do is now say that I'm going to effectively change the equations of motion of my system into this, u-prime. I might have called that maybe q-double-dot desired or something like that. That would be fine too. So what did we just do? We did a trick called feedback linearization. I took what was potentially a very complicated nonlinear equation of motion, and because I could, using my control input, command every q-double-dot, I can essentially effectively replace the equations of motion with a simpler equation. This is actually a trivially simple equation. For those of you that know, this is what would be a series of single-input, single-output systems. They're decoupled. So q-double-dot 1 is equal to the first element of this. It's just two vectors. So that just looks like a trick. I'm going to ground it in an example in a second here. But first, let's finish defining what underactuated means. So what is underactuated going to mean? AUDIENCE: That the other two matrices is important. RUSS TEDRAKE: That's right. Good.
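The cancellation on the board is mechanical enough to write down directly-- a minimal sketch (the course works in MATLAB; this is an illustration of mine, not the course's code):

```python
import numpy as np

def feedback_linearize(F1, F2, qdd_desired):
    """Feedback linearization: u = F2^{-1} (qdd_desired - F1).
    Plugging into qddot = F1 + F2 u gives qddot = qdd_desired, exactly
    the cancellation described in the lecture.

    F1          : (n,)   drift term f1(q, qdot, t)
    F2          : (n, n) input matrix f2(q, qdot, t), assumed full rank
    qdd_desired : (n,)   the commanded acceleration (the u-prime above)
    """
    return np.linalg.solve(F2, qdd_desired - F1)
```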
Yeah, a system of that form is underactuated if the rank of F2 of q, q-dot, and time is less than the dimension of q. In words, what underactuated means-- a system is underactuated if the control input cannot accelerate the system in every direction. That's what this equation says-- the control input u cannot produce accelerations in every direction. That's just what the equation says. You could imagine, if the form of the equations wasn't linear in u, then we'd have to adapt this rank condition accordingly. But this, I think, for today is a very good working definition of underactuated. And we'll improve it as we go through the class. There's a couple of things to note about it. First of all, as I've defined it here, whether you're underactuated or not actually depends on your state. So you could say that a robot was fully actuated in some states and underactuated in other states. Now why would that happen? Maybe there's a torque limit, or there's an obstacle or something like this that prevents you from producing accelerations when you're in some configurations. Intuitively, what's happening in ASIMO is that it's trying to stay in this very conservative regime because those are the states where it can act like it's fully actuated. And if it was running like you or me, then it's underactuated. But what I want to try to impress on you is that this dichotomy between fully actuated and underactuated systems, it's pervasive. I mean, so robotics, for the last 30-some years, easily, has almost completely made this assumption that F2 is full rank when designing controllers. If you learn about adaptive control, all these manipulator control ideas, computed torque methods-- all these things-- you're implicitly making this assumption that you can produce arbitrary torques. You can use arbitrary control effort to produce arbitrary accelerations. And that's why all these proofs exist for adaptive control and the like-- because you can then effectively turn your system into a linear system that we know how to think about. And the reason that dichotomy is so strong is because, if you're a control designer and you don't have the ability to take your nonlinear system and turn it into a linear system, then you have no choice but to start reasoning about the nonlinear dynamics of your system, reasoning about the long-term nonlinear dynamics of the system. And analytics break down pretty quick. But computers can help. That's why we're revisiting this kind of idea. So factory robotic arms tend to be fully actuated, except for very exceptional cases. Walking robots and things like that, as we'll see, are underactuated. So let's do that example. AUDIENCE: So are you implying that in order to have agile robots, we need to have underactuated robotics? RUSS TEDRAKE: Absolutely. I'm going to finish making that argument, but absolutely. That's exactly what I'm saying. The question was, am I implying that we need to do underactuated robotics to have agile robots? Yeah. I would even say that I'm implying-- I'm a little biased-- but I'm implying that every interesting problem in robotics is interesting because it's underactuated. If you think about the problems that are unsolved in robotics-- maybe manipulation, walking, cool flying things-- if you look closely, if the control problem is considered unsolved, it's probably underactuated. The things we know how to do really well-- picking and placing parts on a factory floor-- that's fully actuated. Now manipulation is hard for many other reasons.
You have to find the thing you're manipulating. You have to think about it. But there's something fundamental in robotics research that happens if your system suddenly-- if you don't have complete control authority. And all the interesting problems that are left seem to be underactuated. OK. So instead of talking about abstract F's, let's make it specific. Let's write the equations of motion for our two-link robotic arm. So how many people have seen Lagrangian mechanics? Cool. So the class isn't going to depend on it. I'm going to do it once, quickly. And it's in the notes. If you haven't seen Lagrangian mechanics, it's a good thing to know. And it's in the appendix of the course notes. It'll be posted. But I want you to see it once, just to see that there is actually-- if what you care about first is just coming up with the equations of motion, then there's actually a fairly procedural way to do that for even pretty complicated systems. So let's do it for this not-very-complicated system. OK. So let me do that in pieces. So let's say this is mass 1, and let's say that it's at position x1. I'll set up a coordinate system here-- x and y. So the mass 1 is at x1, and the mass 2 is at x2. So the first thing to do is just to think about the kinematics of the robot. And in this case, they're pretty simple. So as I've written it, x1 is-- what is it? The x position of x1 here is l1 times sine or cosine? Sine of theta 1. And the other one is negative l1 cosine theta 1. Now we're going to get bored real quick if I don't adopt a shorthand. So let me just call that l1 s1. So s1 will be shorthand for sine of theta 1. And this will be negative l1 c1. If I want to do the kinematics of x2 here, that's going to depend on theta 2. It's actually also going to depend on theta 1 because I've got this in a relative frame. That theta 2 is relative to the first link. So it turns out for the kinematics of x2, we can actually start with just x1. It's the position x1 plus another vector, which is l2 sine of theta 1 plus theta 2-- if you work it out, that's the right thing-- and negative l2 cosine of theta 1 plus theta 2, which in shorthand is x1 plus l2 s1+2, negative l2 c1+2. OK. So the derivatives aren't too bad. I can do those. Let's see. If I want the rate of change of x1, intuitively, that's going to start depending on the joint velocities, right? So how does that work out? The time derivative of x1 is going to be l1 cosine theta 1 times theta-1-dot, and then l1 sine theta 1 times theta-1-dot. And x2-dot is going to be x1-dot plus l2 c1+2 times theta-1-dot plus theta-2-dot, and l2 s1+2 times theta-1-dot plus theta-2-dot. So we now have solved the kinematics of the machine. To write the dynamics Lagrangian-style, we need to think about the energy of the system. So let's call T the total kinetic energy. And in this case, it's pretty simple-- this is why I went with point masses. It's 1/2 mv squared, which in vector form looks like 1/2 x1-dot transpose m1 x1-dot plus 1/2 x2-dot transpose m2 x2-dot. OK. And then we're going to define the total potential energy as U. And this is just mg times the vertical position of the thing. So it's just mass 1 times gravity times-- I'll call it y1, which is the second element of that. Actually, I don't even want to introduce a new symbol. We'll just do negative l1 c1. And this is minus m2 g y2, which is l1 c1 plus l2 c1+2. Sorry for going into the corner. OK.
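For readers following along offline: the kinematics and energies just written out fit in a few lines of code. The course's own code is MATLAB, but here is a hedged Python translation-- a sketch only, with arbitrary placeholder parameters, not the lecture's actual code.

import numpy as np

def arm_energies(q, qdot, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    # Kinetic and potential energy of the planar two-link arm
    # (massless rods, point masses), following the kinematics above.
    t1, t2 = q
    t1d, t2d = qdot
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    x1 = np.array([l1 * s1, -l1 * c1])              # position of mass 1
    x2 = x1 + np.array([l2 * s12, -l2 * c12])       # position of mass 2
    x1d = np.array([l1 * c1 * t1d, l1 * s1 * t1d])  # velocity of mass 1
    x2d = x1d + np.array([l2 * c12 * (t1d + t2d),
                          l2 * s12 * (t1d + t2d)])  # velocity of mass 2
    T = 0.5 * m1 * x1d @ x1d + 0.5 * m2 * x2d @ x2d  # kinetic energy
    U = m1 * g * x1[1] + m2 * g * x2[1]              # potential energy
    return T, U

print(arm_energies(q=[0.5, 0.3], qdot=[0.1, -0.2]))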
But you can all write the kinetic and potential energy of the system. So the Lagrangian derivation of the equations of motion just uses this Lagrangian, which is the difference of the kinetic minus potential. And I think a very good exercise is to understand the reason why this works. But for our class, we can actually just use it as a crank that we can turn. If we write this out, and then you do some simple math on it, where this is called a generalized force, it turns out if you plug these into this equation, turn your calculus crank, then you end up with the equations of motion. You end up with two equations that have the form-- they give you some equations in terms of q, q-dot, q-double-dot, and this is actually where the u's come in. So in the simplest form, it comes up like this. And with a little work-- let me call that F-Lagrangian so it's not the same F-- with a little work, you can separate out the q-double-dots and get it to the vector equations we were talking about before. OK. If you take those equations that you get and you pop them into MATLAB, it's pretty simple to start simulating the equations of motion of the two-link arm. This is with zero control input. So this is just what happens if you take some two-link arm, apply no torque, let it go. Then you get this. I put some extra damping in there so we didn't have a demonstration of chaos. But there's a pretty procedural way to go from very simple kinematics, doing some pretty simple energy calculations, and getting yourself to a simulation of even very complicated mechanical systems. So the forward dynamics, we understand. Now it turns out, there's actually very good algorithms for this too. If you have a 100-link robot, you certainly wouldn't want to turn the crank by hand. But you can download good software packages that implement recursive versions of this algorithm and have very efficient computation of those dynamics. OK. Now let's start thinking about what it means to have control in that system. It turns out, if you do enough of these equations, if you punch in enough different robotic arms and walking robots or whatever-- oh, John, yeah? AUDIENCE: I think maybe it's minus partial L partial q. RUSS TEDRAKE: Yeah, OK. Good catch. OK. So if you start punching these equations in enough, then you start noticing a pattern. Turns out, even very complicated robotic arms tend to have equations that fall into this stereotyped form: H of q times q-double-dot, plus C of q, q-dot times q-dot, plus G of q, equals B times u. OK. This is almost just f equals ma. This is the mass matrix, the inertial matrix. The C here is the Coriolis terms. The G here is the gravitational terms-- potential terms. And then this is the torques. These are called the manipulator equations. We're going to revisit them. You don't have to have complete intuition about them right now. But what I want you to understand is that if you take the Lagrangian dynamics of some rigid-body manipulator, then you're going to get something out in a form that looks like this. Now this is actually a pretty powerful equation. It tells you a lot of things. So there's a q-double-dot term that's multiplied linearly by something that only depends on q. So by leaving q-dot out of here, I've already strengthened my form beyond just putting q-double-dot in linearly. So arbitrary equations don't fit this. This is a pretty special set of equations. There's some terms that depend on q-dot. And then there's some potential terms which only depend on q. And then we have our joint torque kind of thing over here.
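Turning that crank, in code form: the lecture's simulation is MATLAB, but here is a hedged Python sketch. The H, C, and G expressions below are the standard textbook ones for this point-mass double pendulum (not copied from the board), with a little damping b added, as in the demo; all parameter values are placeholders.

import numpy as np
from scipy.integrate import solve_ivp

m1 = m2 = 1.0; l1 = l2 = 1.0; g = 9.81; b = 0.1  # b: the extra damping mentioned

def dynamics(t, x, u=np.zeros(2)):
    # Manipulator-equation form: H(q) qdd + C(q, qd) qd + G(q) = u  (B = I here).
    q, qd = x[:2], x[2:]
    s1, s2 = np.sin(q[0]), np.sin(q[1])
    s12, c2 = np.sin(q[0] + q[1]), np.cos(q[1])
    H = np.array([[(m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2,
                   m2 * l2**2 + m2 * l1 * l2 * c2],
                  [m2 * l2**2 + m2 * l1 * l2 * c2,
                   m2 * l2**2]])
    Cqd = np.array([-m2 * l1 * l2 * s2 * (2 * qd[0] * qd[1] + qd[1]**2),
                     m2 * l1 * l2 * s2 * qd[0]**2])
    G = np.array([(m1 + m2) * g * l1 * s1 + m2 * g * l2 * s12,
                  m2 * g * l2 * s12])
    qdd = np.linalg.solve(H, u - Cqd - G - b * qd)
    return np.concatenate([qd, qdd])

# zero torque, some initial angle, let it go -- the passive swinging in the demo
sol = solve_ivp(dynamics, [0, 10], [1.0, 0.0, 0.0, 0.0], max_step=0.01)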
And in fact, there's actually a lot of well-known structure in these equations. So it turns out, I could have written the kinetic energy of the system as 1/2 q-dot transpose H of q, q-dot. This inertial matrix, analogous to mass, is related to the kinetic energy of the system. And just by thinking of it this way, it's well known that H is positive definite. It's uniformly positive definite. You can't have a negative kinetic energy. And that manifests itself in that this matrix H, which appears all the time, turns out to be equal to its transpose-- it's symmetric-- and it's positive definite. That's shorthand for positive definite: a matrix greater than zero. And in fact, if you look at the equations I punched in for the robotic arm, it's exactly just a matter of computing H, C, G, and B, which is the matrix that maps your control inputs into joint torques. So H is an inertial matrix. C is Coriolis. G is gravity. I think B was B because people were running out of letters. I don't know. I don't know a reason to call it B. But in general, B could be a function of q, maybe. But it's just some mapping between your control inputs and the torques that you want to get. OK. So knowing that, I've taken my simple manipulator, I found equations of motion that take this form. If I have torques to give-- torques at both the elbow and the shoulder-- then it turns out for that example, H and C and G all just come from the Lagrangian. And B-- what's B going to be in that example? What size is it going to be first? AUDIENCE: 2 by 2. RUSS TEDRAKE: 2 by 2. And if I'm assuming that my control inputs are exactly the torques, then B is just the 2 by 2 identity matrix. Is this system fully actuated? AUDIENCE: Yes. RUSS TEDRAKE: Why is it fully actuated? AUDIENCE: Because the rank of the [INAUDIBLE] matrix is 2. RUSS TEDRAKE: OK. But there's one other part of that statement. AUDIENCE: That's equal to the dimension of q-dot. RUSS TEDRAKE: But I need to get the mapping from u to q-double-dot. AUDIENCE: Oh, the matrix is positive definite. RUSS TEDRAKE: Yeah. Because the inertial matrices are always positive definite, if I actually write out q-double-dot for these systems, I get H-inverse of q times all that stuff-- B of q times u, minus C q-dot, minus G. And we know H-inverse exists. I told you it's positive definite. So as long as this thing is full rank, which, as you said, it is, then that system's fully actuated. OK. That means I can do anything I want to that system. What should we do with that system? What should we do? Let's replace the dynamics with something else. Well, I can't do anything-- it's going to have to be two variables or less, the system I want to simulate. I can't make it simulate a whip if I've only got two. But I can make it simulate any sort of two-dimensional second-order system. OK. How about we take our two-link pendulum and make it act like a one-link pendulum. That's a simple enough thing to do. So what I'd do is I'd find the equations of motion for the one-link pendulum, and I just do my feedback linearization trick. I'd cancel it out, and I'd replace the dynamics with the one-link pendulum. All right. So if you can see this, it's just a matter of saying u is C times q-dot plus G, plus the terms that impose the new dynamics. In my MATLAB code and in lecture, I'll use x to mean the combination of q and q-dot. I can just do my exact feedback linearization trick. Let's see if I can make this a little better. And there's the equations of a simple pendulum with a little damping.
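What that demo is doing, in code form: with q-double-dot = F1 + F2 u, pick u = F2-inverse times (q-double-dot-desired minus F1). For the manipulator equations, F1 is minus H-inverse times (C q-dot plus G) and F2 is H-inverse B. A minimal Python sketch-- the F1, F2 numbers below are made up, just standing in for their values at some state:

import numpy as np

def feedback_linearize(f1, f2, qdd_desired):
    # Exact cancellation: with qdd = f1 + f2 @ u, return the u that imposes
    # qdd = qdd_desired. Only valid where f2 is full rank (fully actuated).
    return np.linalg.solve(f2, qdd_desired - f1)

# made-up values standing in for F1(q, qd, t) and F2(q, qd, t) at some state:
f1 = np.array([0.3, -1.2])
f2 = np.array([[2.0, 0.5],
               [0.5, 1.0]])  # full rank, so the inverse exists
u = feedback_linearize(f1, f2, np.array([-1.0, 0.0]))
assert np.allclose(f1 + f2 @ u, [-1.0, 0.0])  # the new dynamics hold exactly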
In my control system, if I say, lecture1-- I think I put it under simple_pend-- then suddenly, my two-link pendulum-- the dynamics of my two-link pendulum, when I'm simulating those entire dynamics, work out to be the dynamics of my one-link pendulum. So it's maybe not a useful trick. If I really wanted a one-link pendulum, I could have built a one-link pendulum. Let's say I want to do something more clever, maybe. Let's invert gravity. Let's take my inverted pendulum problem and make it work by just replacing the dynamics of my pendulum with an upside-down pendulum. So maybe if I want to just get the pendulum to the top, let's just make it act like an upside-down pendulum. So we can do that too. Woop. When I say it the way I'm saying it, I hope it sounds like, of course, if the system's feedback linearizable, you can do whatever you want. It's easy. It's not worth thinking about these kinds of things. I mean, that's what I'm trying to communicate. But almost every cool robot works because of these kinds of tricks. They're hidden, but they're there. A lot of the reason robotic arms work as well as they do is because you can do this. Now there's limits, right? You can only do this if you have unlimited torque. In practice, a lot of robotic arms have almost unlimited torque to give. They've got big gearboxes, right? You'd be surprised how pervasive this idea is. So what this class is about is what happens if you can't do that. All right. So let's take our two-link arm. How are we going to break it? How are we going to make it so we can't do that anymore? What's a more interesting problem? AUDIENCE: Get rid of one motor. RUSS TEDRAKE: Get rid of a motor. Let's get rid of the shoulder motor. That seems like an important one. Let's see what happens if we take that right out of there. So the equations of motion actually stay exactly the same, except for now, B of q is going to have to be smaller if u is now just one-dimensional. I've got a single control input. And B of q is just going to be what size? AUDIENCE: 1 by 1? RUSS TEDRAKE: It's got to get to a two-dimensional thing. So it's going to be a 2 by 1. And as I drew it, that 2 by 1 is going to have nothing going to the shoulder. If I assume the first one is the shoulder, it's going to have direct control of the elbow. Suddenly, it's a whole different game. Turns out, you can still solve that problem. I wasn't thinking of showing this, but let me preview something to come quickly here. This is exactly that system. It's a system we're going to talk about. It's called the Acrobot. It's got inertia in the links instead of point masses. And if you take these computer science techniques I'm going to tell you about, then you can, for instance, find a solution for the torque at the elbow to try to make this thing go to the top. If you think about it-- it's called the Acrobot because it's like an acrobat on the high bar, where you can only give a little bit of torque at the wrist. You can do a lot with your waist, potentially. So if you do a clever job, you can actually pump up energy and swing up and get to the top. But that's a lot harder problem. And I can't write that down on a single board here at 72-point font. But we're going to do that very, very soon. So I hope you know what underactuated means now. Why would I care about a system that's missing its shoulder motor? That seems pretty arbitrary.
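Before getting to why you'd care, the underactuation test itself fits in a few lines-- a hedged Python sketch with the Acrobot's 2-by-1 B just described. Since H is positive definite, hence invertible, rank of H-inverse B equals rank of B, so checking B is enough (when B doesn't depend on the state, neither does the answer).

import numpy as np

def is_fully_actuated(B, dim_q):
    # Fully actuated at this state iff the input map has rank equal to dim(q).
    return np.linalg.matrix_rank(B) == dim_q

B_arm = np.eye(2)             # torques at both shoulder and elbow
B_acrobot = np.array([[0.0],  # no shoulder motor...
                      [1.0]])  # ...only an elbow torque
print(is_fully_actuated(B_arm, 2))      # True
print(is_fully_actuated(B_acrobot, 2))  # False -> underactuated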
If I'm building a robot, I might as well order enough motors to put them everywhere. It turns out, if you care about walking robots, one of the simplest models of a walking robot is called the compass gait robot. It's got a mass at the hip. It's got two legs. We can even assume it's got a pin joint here-- that's the connection to the-- that's the foot on the ground. And it's got torque to give here at the hip. But it can't apply torque at the ground. And that's not an artifact of the model. Even if I had a foot, then somewhere-- at my toe-- you're not bolted to the ground. So you've got a bunch of interesting links, and you can apply torque between your links. But the place where you might want it the most-- you've got your shoulder motor, your elbow motor, whatever it is-- but the place that connects you to the ground, you don't have a motor. And you can't have a motor unless you're willing to stick yourself to the ground. Suction cups are a viable thing for walking robots, I guess. But the more interesting problem is how do you do control if you don't have to be stuck to the ground? So that two-link simple point mass thing is actually exactly the dynamics of the compass gait walker that we'll talk about fairly soon. OK, so I've got no torque here. Torque equals 0 there. Every walking robot is underactuated. The same thing is true if I'm a humanoid trying to control all of my state variables. That's the question, right? Do I have enough motors to instantaneously affect every state variable? That's the question. If you count the number of motors on me, it's a lot. They might not be as strong as they used to be. But they're there. There's a lot of them. If you count the number of degrees of freedom, that's hard too. But no matter what your tally adds up to, if I jump, when I'm up in the air-- I'm not going to do that for you. But when I'm up in the air, none of those motors, no matter what I do-- I could do something with my arms, whatever, ignoring aerodynamic forces-- none of those motors are going to change the trajectory of my center of mass. There's nothing I can do to change the trajectory of my center of mass. I can move relative to my center of mass. I can't change my angular momentum-- I conserve angular momentum. I can move things around. But nothing I can do is going to move my center of mass. A walking robot-- a jumping robot, for sure-- is underactuated. A flying machine is underactuated. I mean, fighter jets are a good example. You can go that way pretty well. You know, they don't go backwards so well, for instance. All right, they don't go directly up so well-- although I can show you videos of that kind of thing. Birds-- you name it. These systems tend to have state variables that you're not in complete control of. Manipulation-- if I'm throwing this chalk around, I don't have complete control of that chalk. If I form a force closure with it, then you can maybe start thinking of me as a fully actuated system. I can move this thing around. That's fine. But I think the interesting part of manipulation is before you get that force closure. OK. So every interesting problem in robotics is underactuated. I'm going to give a quick sketch of what the rest of the term has for you. And then we're going to try something new on the website. So the website's going to contain everything. After today, we're a paperless existence. The website will have your problem sets. It'll have the lecture notes. You can submit your problem sets on the website. We're also going to try a new thing.
When I post the PDFs of the problem set, you'll be able to download them and print them out if you like. But you'll also be able to use this sort of interactive PDF viewer where, instead of having a forum or something on the website, you can go right into the PDF and mark, say, I don't understand what this means. You can choose whether it's anonymous. You can choose whether everybody knows who said it. You can choose if it's just private. In just a minute, I'll show you a demo of that. We'll see if it works. And it might be a cool way to communicate outside of the room. But let me tell you-- let me forecast what's coming here. I haven't actually told you why this is a computer science class yet. So I can't let you leave without that. Here's roughly what we're doing. On Thursday, we're going to talk about the simple pendulum. So we talked about a two-link pendulum just now. We're going to take a step backwards on Thursday. We're going to talk about the dynamics of a simple pendulum. But we're going to talk about everything there is to know about the simple pendulum. And we're going to really think about the nonlinear dynamics and how to think about that. And then we're going to think about how to control that simple pendulum. But as we go on in the class, we're going to get to more and more interesting systems. We're going to get to a cart-pole system. These are some of the model systems in underactuated robotics. We're going to get to the Acrobot system I just showed you, a two-link thing with a torque here and no torque there. This one has a force here. We're going to think about the toy systems for underactuated robotics. And then we're going to start splintering into different domains. If we care about walking, then we can start thinking about these compass gait-type robots. And we'll talk about more complicated robots. And the key difference between here and here is just that we added a few extra degrees of freedom. From here to here, the dimensionality of walking robots isn't actually necessarily that high. But what happens is you have to think about systems with impacts. And you have to think about limit cycles. We'll develop some of those tools. OK. And then we're going to think about how do you get away from these toy systems to higher-dimensional systems? And that can come from walking too. We'll have, for instance, multi-link robots, and think about how to control the higher-dimensional systems, more Degrees Of Freedom-- DOFs. Then we're going to think about what happens if I take these model systems and add some uncertainty or stochasticity. So a toy example for that might be a walking robot walking on rough terrain, let's say. And then we're going to think about how to take these model systems and what happens if we don't know the model. And that's certainly the case if you've got a little perching airplane, for instance, or a little robotic bird. I have a two-year-old daughter. And I've started being asked to cartoon everything I say. So I'll subject you to some very bad but quick cartoons. OK. So those are the systems we're going to think about and the reasons that they're interesting. Turns out, we're going to take a very computational approach to them. So for this system, we're going to start introducing optimal control. We're going to say, let's say I want to get the pendulum to the top, but I want to do it, for instance, by minimizing energy or minimizing the time to get there. So we're going to talk about optimal control.
And as much as we can, we're going to talk about analytical optimal control. But pretty quickly, we're going to run out of things we can do analytically. And we're going to start looking at numerical optimal control-- computer science, again, based on dynamic programming. And that's going to get us somewhere. When we start taking these slightly more interesting systems like this, we're going to develop some better tools. We're going to do numerical optimal control with something called policy search, which is a combination of tools from reinforcement learning, machine learning, and numerical optimization. We'll be able to do some of our impact modeling with that too, I guess. When we start getting into higher and higher dimensional systems, we're going to have to give up on the opportunity to completely solve an optimal control problem numerically or analytically. And we're going to start talking about approximate policy search and motion planning. And I'm drawing it like this because I want you to see that we're taking a very spiral course through the class. Every time we develop a new tool, we're going to make sure we understand what the heck it does to the pendulum, the cart-pole, and things like that, and work back up. So we're going to cover motion planning. If you know randomized motion planning-- RRTs, feedback motion planning-- you're going to see that here. And then when you get into the really good stuff here, when you've got uncertainty, stochasticity, and unknown models, then we're going to have to get into pure machine learning approaches in some cases. That looks just like my yellow one. Control based on reinforcement learning, for instance. And that's how we're going to address some of these systems that are more complicated still. OK. So we're going to root everything in mechanical systems because that's what I care about. I want things to move. But we're going to do it in a pretty computer science-y way because I think the computer scientists have turned a corner, and they're going to solve all these problems. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_13_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today we're going to talk about the dynamics of running. And ask questions as freely as possible, and also if my handwriting's bad, let me know. It generally is. So this is actually a really fun lecture. We have a lot of cool videos. I don't know if any of you have seen Raibert hoppers and stuff before, but we have some cool models and some cool analysis too. So hopefully, you'll be as excited as I am to participate in this learning experience. So here we go. So the basic model we'll be looking at today, in several different versions, is the SLIP model. This is the spring-loaded inverted pendulum. All right. It's like the running version of the rimless wheel or compass gait. It's pretty simple. You just have a mass at the end of a pendulum. You have a spring here, with spring constant k. The spring has a rest length r naught and a-- yeah, we already said the spring constant. And so the coordinates, then, are this length r and the angle theta from the upright. So what we're going to want to look at here is not just this model but also, in the same way we had for the rimless wheel, a 1-D iterated map. So you probably remember that for the compass gait that lets you look at the stability of it, and there's all these interesting dynamics that are captured by that simple system. So we're going to want to look at the 1-D iterated map, and the other thing I'm going to try to convince you of is that this is a biologically plausible system. So if you look at the rimless wheel, it feels like something that's walking-- compass gait even more so, [INAUDIBLE] knees. These things all-- just intuitively, you can believe that they're-- sorry-- a-- whoa, that's probably going to be bad-- that you can believe that they're walking systems. But this one maybe doesn't seem to you immediately like a running system. Maybe it seems like a jumping system or bouncing system or something. But actually, it is a pretty good model for running systems. So there are two things that we're going to have to go through. First, how do you get a 1-D iterated map? Now that maybe is a bit surprising, because if you look at this, the full state space of this inverted pendulum right now has states r, theta, r-dot, theta-dot. That's four state variables. You remember when you slice through and you pick a surface of section, you can cut it down by one. So for the rimless wheel, maybe you have theta, theta-dot; you cut out theta, and you're in pretty good shape. Here we have four, but we're actually going to show that we can get this down to a 1-D system and actually look at the iterated map for this. So that's kind of cool. And the other thing is that this system bounces and takes off, so actually, we need two coordinate systems. One is this r, theta, r-dot, theta-dot. And then when it's in flight, we're going to have x, y, x-dot, y-dot, all right. And both of these are 4-D, so we're going to have to deal with that either way. But first the convincing part. So the justification here comes from a field called comparative biomechanics. Now this is a pretty cool field.
They look at all sorts of different systems and try to figure out these commonalities in the kinematics or the dynamics or certain properties, and figure out, what are the fundamental common features of these different biological systems? So this running model, actually, you can look at it in the context of things from cockroaches, which are on the order of a gram in mass, to horses, which are on the order of-- well, some of them, at least, are 135 kilograms. It's not that heavy, really, so you could imagine maybe heavier horses. Crabs. Now you can think about the kinematics of these systems and just how wildly different they are. Cockroaches, six-legged, bouncing all around, [INAUDIBLE] tripod gait. Horses, quadrupeds. Crabs, actually, they run sideways with eight legs. This is different information, so I won't put it there. But crabs actually will run sideways using their eight legs. It's actually pretty cool. I don't know if you've seen it. And then humans, which maybe is the one that we most-- well, I don't know, that's probably unfairly biocentric. So we're humans. And all of these, despite the very different kinematics, despite, what is this, five orders of magnitude in mass, they all have incredible dynamic similarity. Let's hope I'm not getting graded on spelling, because I'm pretty sure that's way off. [INAUDIBLE] And so if you look at it in the right way, you can actually see that all of these systems-- cockroaches, horses, crabs, camels, cats, bunny rabbits-- all these things actually look very similar. And so Bob Full-- some of you may know Bob Full, I think at Berkeley. He looks at a lot of things, especially the cockroach work. He has a lot of cool work on cockroaches. And he also has this paper which-- let's see if I can find what page this is on. Sorry, here. There we go. Is that clear up there? Ooh. Sorry. So here, you can look at the-- oh, let me dim the lights a bit. If you look at the plot on the left, this is speed and the stride length. So if you plot speed and stride lengths-- here they've got gerbils, dogs, and camels, but you can do this for a very wide variety of systems-- you can see that they're bounced all over the place. I mean, they're all behaving very differently, so that's not necessarily the right way to look at them if you want to see what is fundamentally similar between them. If you look at the right, this is the Froude number. In this case, you can see it's just the ratio-- well, it's 2 times the ratio of kinetic energy to potential energy. So it's just this non-dimensional quantity. If you plot this against the relative stride length, you can see they all collapse very tightly onto this line. And that's impressive agreement. You see that camels, dogs, and gerbils all look very similar when you look at them in this non-dimensional way. So all these systems-- orders of magnitude of difference in masses, and stride lengths too, probably-- all look very similar when you compare them in the right framework. And that's what comparative biomechanics tries to do, in addition to other things, obviously. So here's something that's really cool and that connects us back to our SLIP model. If we look at the next page, you'll see right here-- this is the relative spring constant for the individual legs of [INAUDIBLE] systems, cockroach to kangaroo, five orders of magnitude. You can see that they're all pretty similar.
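To make that non-dimensionalization concrete, a tiny Python sketch of the Froude number just described. The animal numbers here are made up for illustration, not taken from Full's data:

def froude(speed, leg_length, g=9.81):
    # Froude number v^2 / (g l) -- equivalently 2x the kinetic-to-potential
    # energy ratio, since (1/2 m v^2) / (m g l) = v^2 / (2 g l).
    return speed**2 / (g * leg_length)

# a big animal and a small one moving at the same Froude number (~1.0),
# which is where their gaits look dynamically similar
print(froude(3.0, 0.9))  # e.g. a human jogging at 3 m/s, 0.9 m leg
print(froude(1.0, 0.1))  # e.g. a small animal at 1 m/s, 0.1 m leg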
This is a huge spectrum of masses, and yet in the framework of the SLIP model's spring constant, they all behave pretty similarly-- humans, all these things. And so if you look at these systems through this simple model, they all not only look similar to one another, but their dynamics and their center of mass [INAUDIBLE] are pretty well described by this kind of behavior. Sorry-- by that kind of behavior. So hopefully that's a somewhat compelling argument that this model is representative of actual running behavior, not just something that comes out of nowhere. And a lot of work was done on this in the '80s and '90s, on trying to study this model and connect it to animals and that sort of stuff. I don't know if you've seen Raibert's work, but he has robots that are very similar to this sort of SLIP model-- they run and jump, and they're actually from the '80s, and they're some of the really cool robots. And I'll show you some of those. So he actually didn't even call them SLIP. He actually made them before SLIP, and that motivated some of the work on SLIP models. But I can show you. Here, I'll show you a picture of a robot so you can see the kind of cool stuff that we're getting to. I don't know if-- are many of you familiar with the Leg Lab? They should be around. Yeah, they had a lot of cool robots. And here, see if I can give you just a little picture [INAUDIBLE] Ah. Oh, so that's the biped robot. This thing's bouncing, but that's not really the picture I wanted to show you. Ah, here we go. Yeah. So this little robot, you can see that's tracing its foot. This up here is tracing its center of mass. And you see how the center of mass actually compresses down right above the foot. And that's what you expect here. It lands, and it squishes down. So it's not like when you're walking and you vault over this leg-- think about that rimless wheel; its center of mass rolls up over its foot. This one-- both intuitively in the model, and you can see it right here in this robot, which is similar to the model in many ways-- is squishing down into its foot, all right. And that actually is one of the ways of looking at the difference between what is running and what is walking. So one of the definitions for running is that you're supposed to have an aerial phase. But not every running animal has an aerial phase. So this is-- actually, I think comparative biomechanicians look at this center of mass to center of pressure trajectory, because-- I don't know if you know Groucho Marx. Groucho Marx had this funny run. Maybe I'll do it. So [INAUDIBLE] but sort of like this-- you know? That kind of goofy thing where your feet aren't coming up off the ground? That doesn't seem natural. But actually, some animals do that. There are actually elephants that do that Groucho run. So maybe if you're big enough, it makes sense. But yeah, so [INAUDIBLE] this is very different from walking. And it's actually quite similar to running, and actually, if you look at these animals, it captures a lot of their dynamics. And another important thing is that these kinds of robots, which came around in the '80s, were a very different kind of work in robotics, because at the same time [INAUDIBLE] these slow, careful walking robots. And here are these robots that are flying through the air. I'll show you some videos later, but they did flips and all kinds of crazy stuff. I mean, they're throwing themselves around wildly, which is not what people thought of when they thought of these legged robots. Oh, and the other thing-- this is cool too.
If you look at animals-- we're talking about these springs and stuff like that-- that's not just some sort of obtuse model that comes out of nowhere. If you look at horses and stuff-- well, animals, actually, are full of springs. Their tendons can store up energy and stuff. And actually, horses have a big tendon that runs through their leg all the way up, I think, behind their hip. And apparently, one of the limiting factors in trying to run fast is that you can't pull your leg forward fast enough to get to the next stride. And so what they actually can do is preload when they're pushing, and then that tendon actually acts like a spring. It actually stores energy and lets them swing faster than they would be able to if they just used the muscle. And if you look at the oxygen intake of these animals, you can see that they're behaving much more efficiently than you could hope to accomplish if you were just simulating a spring, with your muscles acting like a spring. So they're actually physical springs. The tendons in there actually have this spring behavior. So hopefully-- maybe I'm dwelling on that point too much, but I think that's really cool, that these animals actually have this kind of spring behavior. You can see it in their dynamics, and actually, this model captures a lot of that. Oh, and here's one other thing that Russ [INAUDIBLE] and it's pretty cool, too. If you think about-- this is sort of a tangent, but if you think about how birds sit on perches forever, right-- I mean, it seems like that'd be tiring, to be able to hang there forever just squeezing, right? But they actually have tendons, too, such that when they sit on the perch and their weight squishes their legs down, it actually will clamp their talons in and let them hold on. So that's cool, too, that these animals have all these interesting passive structures and springs, stuff like that, that help them do all these kinds of things. And so it's not like they just use their muscles and dominate everything-- there are a lot of passive structures that do a lot of this for free. So hopefully, you all think that's awesome. All right. All right, so getting back to the system. Looking at this-- sorry-- at this model again-- did you see where I put my chalk? All right. So going back to the SLIP system-- let's see. Again, the state space here-- is it-- yeah, it's still dark in here. So going back to this system, again, we have r, theta, r-dot, theta-dot, and in the aerial phase, x, y, x-dot, y-dot. All right, I'll bring this one back down. And let me put in really clearly the assumptions of what this model is going to have-- so the assumptions for the SLIP model. One is that you have a massless-- whoa, that's terrible-- a massless leg and toe. All right? So all your mass is concentrated in that [INAUDIBLE] at the body. You have an ideal lossless spring in your leg. And what that means, effectively, is that your collisions with the ground, unlike all the walking models, are perfectly elastic. All right? And you can see that that's not an extra assumption-- it's derived from these. When it hits, even though this toe sticks in right away and [INAUDIBLE] inelastic collision with that toe, because there's no mass there, there's no energy, there's no momentum in it-- sorry. This is [INAUDIBLE] you don't lose any energy. And so it's actually conservative as it runs through.
And then when it's flying through the air, there's no drag or anything, so it's conservative when it flies through the air as well. And so this system actually is completely conservative in its full operation. So yeah, it's very different than the rimless wheel. And then the other assumption we have to make is that the leg instantly goes to the desired theta and rest length. Again, when it's massless, the rest length-- it will do that automatically. But you have to assume that it's [INAUDIBLE] teleported to that theta. So you don't worry about collisions with the ground or anything like that. Your leg just goes to the theta touchdown. So the moment you take off, it's in this new configuration. So now what we have to do, now that we have the system and the model, is pick our surface of section, because our goal here is to turn this into a 1-D iterated map, right? That's what this is all about. We're going to try to figure out, how do we pick a spot where you can just look at one section and describe all the dynamics-- just being like, OK, we're here; simulate to the next one. And then we can just iterate this map and figure out the fixed points, figure out a lot of things, as we did with the rimless wheel. So does anyone have an idea of what a good surface of section would be? No? What special configurations are there that we can look at? AUDIENCE: Take-off. PROFESSOR: Take-off. What else? That's one. AUDIENCE: Full compression. PROFESSOR: Full com-- AUDIENCE: Or rest length [INAUDIBLE] PROFESSOR: At the rest length? That's true. Full compression you could do, but that would be like there'd be a 0 r-dot or something like that. But yeah. No, I mean, you could. There are a lot of these special configurations. But the one that you really want to look at, the one that collapses things down the most, is if you look at the apex. So you look at the maximum height of this flight. So we can just look at that y. And the reason we can do this I can describe right here. So we look at the flight phase. Now, first of all, x-- we don't really care about what x is for the stability. It doesn't matter where it is along that position. That doesn't affect the dynamics directly. So we don't care about x. So y, we definitely do care about. This matters. x-dot-- well, because we know it's conservative, and we know that at the apex y-dot equals 0, x-dot is purely a function of y, right? We can look at the total energy of the system, which is 1/2 m v squared plus mgy. We know y-dot is 0, so that means that v is just x-dot. And so you can write x-dot as a function of y. And so here, we know this is 0. We don't care about this. This one we can find directly from y. And so everything that happens between one apex and another apex, we know if we just know y. You see that? Yes. AUDIENCE: We're always adjusting the angles [INAUDIBLE] landing so that it is vertical at the [INAUDIBLE] PROFESSOR: It's set at whatever desired angle we're at. So there's some nominal angle we want on this touchdown. So let's say it's 30 degrees or whatever. And so that means that when you take off, your angle-- your leg goes to 30 degrees, and then you hit there. And so the touchdown angle is always the same. And so there's no control in this at the moment. It's just the passive stability of this bouncing system. Does that make sense to everyone, how we can collapse all these things? AUDIENCE: Professor? PROFESSOR: Oh, yeah. AUDIENCE: [INAUDIBLE] why don't we care about x again?
Because-- PROFESSOR: Because that doesn't affect the stability of the system. So if we're trying to get somewhere and hit a target, yeah, then we have to look at x in our controller. But the x doesn't figure in the dynamics anywhere, right? It doesn't matter, because again-- sorry, this is something I probably should have mentioned in the beginning-- it's not going downhill or anything like that. So how far it goes in x doesn't matter. The dynamics are going to be invariant to that, if that makes sense. And so because of that-- I mean, x will be changing, but it won't affect the next apex height, because it doesn't figure into the dynamics like that. Does that make sense? All right. So we can represent the whole thing just using y. I guess that's the important thing to realize. So what do I want to erase? I'll erase this stuff. I'll throw this back up. Hmm. This isn't ideal. All right, this is not going to get better than that. All right. So as we go through this transition from apex to apex, what we can look at is: we have y apex at n. Then we need to figure out how that translates into the y, x-dot, and y-dot at that apex. So we need to map this one piece of information into all of these. And then we can map that into the y, x-dot, y-dot at the touchdown-- so when it comes in and hits. Now we have to change our coordinate system to r, theta, r-dot, theta-dot. This is still at touchdown. And we swing that through in the stance phase. Then we get to r, theta, r-dot, theta-dot at take-off. So my D and my O are as different as I can make them look. Then at take-off again, we switch to our aerial phase. We go back to y, x-dot, y-dot at take-off. And that brings us, then, to a y, x-dot, y-dot at apex. And then here, of course, we grab y n plus 1 at apex. [PHONE RINGING] So these transitions-- I'll number them and make this a little bit clearer. That's one, two, three, four. Sorry. And the key now is to figure out how to push our system through all these transitions so we can figure out what this mapping is going to be. Some of these are pretty easy to do, and some of them can be quite tricky to do. But all of them are quite manageable. So what you look at first is the energy, which is our kinetic plus potential-- again, 1/2 m v squared plus mgy. So at the first step, if we want to take y apex and turn it into these guys, the energy at the apex, we know, is 1/2 m x-dot squared-- again, because we know y-dot is 0-- plus mgy, which is equal to a constant. And so this means that we can say, all right, x-dot equals this function of y. So we can figure out what x-dot is, [INAUDIBLE] and then we know also-- sorry-- we know y-dot equals 0. And so that gets our first transition. Two, then: we know that x-dot at touchdown-- so this is the ballistic phase coming down from apex-- is equal to x-dot at apex, because there are no horizontal forces on it, so it's just going to keep carrying forward in x. And then we know y touchdown is going to be r naught cosine theta naught. So this is what I was talking about with the desired theta. You know you're going to touch down at this desired theta. And then, again, since we know our energy, we can get y-dot touchdown, all right. So that gets our transition from apex to touchdown. Now, the next one is just a coordinate transformation. It's pretty easy to do. Just make sure you get your velocities right. You have to decompose them properly.
But you can just transform those coordinates, and then you get that without too much difficulty. But then you get to the tricky one, and the tricky one-- [INAUDIBLE] The tricky one is the stance phase. So how do you push yourself from when you come in and hit, to squishing down and launching back out? That's the involved part of this. So four is stance dynamics. AUDIENCE: Is that y a touchdown [INAUDIBLE] PROFESSOR: Pardon? AUDIENCE: y [INAUDIBLE] PROFESSOR: Oh, yeah, this is touchdown. Sorry. So to get two, to get this falling phase: you know x-dot doesn't change-- there are no forces in the x direction. y touchdown-- you fix this angle; that's the instantaneous warp of the leg. So you know when it touches down, it's going to be this high, because that's how touchdown is defined-- when the toe hits the ground. And then here, you know your height, you know this, and again, conservation of energy will give you the y-dot at touchdown. That allows you to figure out your state when you come in. So, stance dynamics. Here, you've got an idea of what you expect the system to do. You have a spring. And if you're running in steady state, I believe you take off at the same angle you came in. So what you can imagine happens is, this thing swings around like this, but the center of mass goes in and comes back out. So again, it's doing that compression and then launching itself, all right. I think it's an assumption Raibert made for steady-state operation: it comes in and out at the same angle, with the compression in the middle. And so again, this kind of behavior is one of those ways of defining running-- like for the Groucho kind of running-- that you see the center of mass come in toward the center of pressure, as opposed to vaulting over it. If this was a stiff leg, it would go like this. So we can define take-off, so we know our final condition. Take-off happens when r equals r naught. So when you get back to your rest length, you assume that your body then goes back to the ballistic phase. All right, then we can write down our energies here. Now again, we're in the polar coordinates now, so our energies look a little bit different. Our kinetic energy looks like 1/2 m times r-dot squared plus r squared theta-dot squared. Our potential energy has this mgr cosine theta term, and what else do we need? What else? AUDIENCE: Spring potential. PROFESSOR: Spring, yeah. So it's k over 2 times r minus r naught, squared. And so using that-- you probably remember Lagrange; it's not that bad of a system-- you can get the equations of motion. The equations of motion here, you get-- oh, sorry. And so you have all your expected terms-- centrifugal terms, Coriolis terms, all that, if you try to make sense of what this is. And so if you want to get from touchdown to take-off, you can't get a closed-form solution from this. You can simulate pretty easily, obviously. But you don't get the closed form, so you can't get this analytical kind of mapping. But if you use a small-angle and small-displacement approximation, you can linearize the system, and you can get a closed-form solution for this touchdown to take-off phase, all right. And so that's this assumption that theta is much, much less than 1, and your delta-r over r naught is much less than 1. And so then you can get to a closed-form solution. It's kind of ugly. It's not really ugly-- I mean, it's not so ugly as to be prohibitive, but I'm not going to write it down. But that lets you actually get a closed-form return map, all right.
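Since the closed form is ugly, the numerical version is easy and instructive. Here is a hedged Python sketch of the full apex-to-apex map, transitions one through four: integrating the stance phase in Cartesian coordinates with the foot pinned at the origin is equivalent to the polar equations above, and all parameters are made-up, human-ish placeholders-- whether the iteration converges to a fixed point, diverges, or falls over depends on them.

import numpy as np
from scipy.integrate import solve_ivp

m, k, r0, g = 80.0, 14000.0, 1.0, 9.81  # placeholder mass, stiffness, leg length
theta0 = np.radians(15)                 # fixed touchdown angle from vertical
E = 1800.0                              # total energy, conserved throughout

def apex_map(y_apex):
    # One apex-to-apex step of the SLIP; returns the next apex height (None on failure).
    if m * g * y_apex >= E:
        return None                               # no forward speed left
    xd = np.sqrt(2 * (E - m * g * y_apex) / m)    # 1: energy gives apex speed
    y_td = r0 * np.cos(theta0)                    # touchdown height
    if y_apex < y_td:
        return None                               # too low to swing the leg in front
    yd_td = -np.sqrt(2 * g * (y_apex - y_td))     # 2: ballistic fall to touchdown
    x0 = -r0 * np.sin(theta0)                     # 3: mass starts behind the pinned foot
    def stance(t, s):                             # 4: stance dynamics, foot at origin
        x, y, xd, yd = s
        r = np.hypot(x, y)
        f = k * (r0 - r) / m                      # spring acceleration along the leg
        return [xd, yd, f * x / r, f * y / r - g]
    takeoff = lambda t, s: np.hypot(s[0], s[1]) - r0  # event: leg back at rest length
    takeoff.terminal, takeoff.direction = True, 1
    sol = solve_ivp(stance, [0, 2.0], [x0, y_td, xd, yd_td],
                    events=takeoff, max_step=1e-3)
    if sol.t_events[0].size == 0:
        return None                               # never took off
    x, y, xd, yd = sol.y[:, -1]
    if yd < 0:
        return None                               # taking off downward: fell over
    return y + yd**2 / (2 * g)                    # ballistic rise to the next apex

y = 1.05                                          # iterate the 1-D map; convergence
for _ in range(20):                               # would indicate a stable fixed point
    y = apex_map(y)
    if y is None:
        break
    print(y)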
And something cool about that is that you get these two fixed points. You get one stable fixed point and one unstable fixed point. And-- hmm. And actually, a number of people thought that a system like this, this conservative system, shouldn't be able to get that sort of stability, because if you remember, in the rimless wheel, the energy loss was critical to achieving the stability. When you went faster, you hit harder, and you lost more energy. And when you went slower, you didn't lose much energy, so you were able to speed up as you went down this ramp. So the fact that you can achieve stability in this system is kind of cool, too, because it's completely conservative, yet somehow the dynamics are able to actually push it towards a repeatable state. And then looking at-- we're going to see this return map. This is from Hartmut Geyer's thesis work, I believe. And it's pretty cool, if I can get to it for you. There we go. There we go. So this is actually the return map for this small-angle approximation you can get analytically. And you see there's this unstable fixed point, and then down there, there's that stable fixed point. So you zoom in, and you can see the stability region-- that unstable one is defining the boundary of the stability for that. The apex height-- why does it get to that minimum there? Why doesn't it go below that 0.87? Anyone know? Why don't we look at it below that? AUDIENCE: Because that's the maximum range? PROFESSOR: Yeah, that's your touchdown height-- cosine theta naught times the length of the leg. And so you can't get below that. And then the weird thing-- and this is something I was talking about with Russ just before he went home-- was, why does it keep climbing like that? AUDIENCE: Why does it keep climbing [INAUDIBLE] PROFESSOR: Well, because this is conservative. So how can you have an instability that pushes you up to larger and larger apex heights? AUDIENCE: [INAUDIBLE] to a more vertical-- PROFESSOR: At some point, though, it should cap. There should be another stable fixed point, bouncing straight up. There isn't. The thing is, we think that that unstable one right there is your vertical bouncing. And the thing is that, actually, when you linearize these, they don't quite conserve energy anymore. This actually can be a non-conservative term. So we think that's what this is from. So it really shouldn't be climbing up like that, because when you simulate these things, you don't get above that second fixed point. It rolls up to this unstable one. And so then-- but looking at this again, now here's another strange thing. It looks like it should be globally stable, then, right? Because in practice we can't get above that unstable fixed point-- and if you look at this apex axis, right, the higher it is, the slower it moves. So at your minimum apex height, that's when you're moving fastest. It looks like it's stable throughout the entire operating regime, right? The fastest it can move is that little guy on the far left, where it's apparently stable. And the slowest it can move is this vertical bouncing [INAUDIBLE] fixed point. So how do you explain that? AUDIENCE: [INAUDIBLE] PROFESSOR: Well, that's the thing-- that's pretty much exactly the issue-- your failure mode can come from coming in, squishing really low-- I'm going to fall on my face here, so I should let everyone see it. You can compress and get really low, and then you bounce forward.
And if you bounce forward on such a trajectory that you never get high enough for your leg to be out, to touch it down, then you're just going to bounce forward and land on your face. You can't actually warp your leg. Does that make sense? So that can be your failure mode. You bounce in, you shoot across, and then if you never have an apex high enough, you'll never get your leg ahead of you. So that's the failure mode. So that's actually what we see when we simulate it. And the simulation actually looks a little bit different. This guy looks pretty good, but if you simulate it and you have slightly different parameters, that curve there comes down a little bit lower, and so you're actually able to start from the top, bounce in, overshoot, and go unstable. So it's not like you're guaranteed stability by starting to bounce vertically, either. So this will change with different energies, different parameter settings, and everything like that. But the take-home thing I think that's cool is that, even in this simplified perspective, you get this stability in this conservative system. You get this stability that you can see in the simulation, and even though it's not damping out any energy or anything like that, it's able to bounce along and right itself. And apparently, what that's related to-- apparently, people didn't use to think that was possible. But the coordinate transform is actually enough to get that kind of stability-- it's called piecewise holonomic. And by switching these coordinates, you're actually able to get-- that allows the dynamics to stabilize themselves, apparently, whereas a normal holonomic system apparently couldn't actually converge back towards this fixed point. If you think about the pendulum [INAUDIBLE] like that, that's not going to converge [INAUDIBLE] It has to conserve the energy. But apparently, with piecewise holonomic systems, it's possible. But you can read some papers and get into that. I don't know much more than I just told you. All right, so here's another thing that's interesting. If you want to model those Raibert hoppers, right, they obviously aren't completely conservative. They do have some mass in the toe and in the leg, and so when they hit, they do lose some energy, because when that toe hits and sticks to the ground, it's going to be a dissipative reaction. So you can look at this model by, I think, Koditschek and Buehler in '91. I'm going to use my big chalk again. Ah, and here's my old big chalk. So looking at this. So here we have a toe mass. So we have the mass of the body and the mass of the toe. And this spring can possibly be nonlinear if you want. And this was largely-- they did some analytics, but a lot of the analysis they did of this system was computationally driven. And so the thing is, because of this hit here, you have a loss due to the toe mass. So you must add energy. And they do this with a control on the spring. You can vary your spring constant like that-- you can add energy to the system. And so the control on the spring is their only control, then. And you actually can get similar stability properties to the SLIP model when you simulate it. You can find the same stable fixed point, unstable fixed point kind of behavior. And so this is all what we've been talking about-- the image we have is this sagittal plane dynamics, right? The sagittal plane is this plane, and so this idea of running through like that.
Getting a little bit of aerial phase so I look less ridiculous. And so the other thing, though, is that it's good for the lateral plane. If you look at the lateral plane dynamics, certain animals, the same kind of SLIP model and that kind of behavior can actually be witnessed. So this is, again, something Bob Full spent a lot of time on. If you look at a cockroach, which has a tripod gait-- here's a cockroach. I wish they actually looked like that. Probably wouldn't be as disturbing. There. And so what you have is, they use three legs at a time, and they're springy legs. As it runs, if you look at the dynamics, you actually can see the-- you actually can see that these legs are acting like springs. And actually, the same SLIP behavior you'd expect, you witness in the cockroaches. And so there's some cool things here. Let me show you. So these-- have any of you heard of preflex, as opposed to reflex? It's a pithy kind of name, I guess. So the preflex is where actually, you can actually look at the time constant required for the monosynaptic reflex, which is the electrical signal to go through and actually to the spinal cord and back. So not to the brain and to the full path, but just the quick reflex kind of response. And actually, some of these creatures actually respond faster than that. So the theory is that it's actually a musculoskeletal response. So it's not even-- it's not controlled at all. It's not going to the muscles or anything like that. It's tendons. It's the springiness in the leg. And that is actually what's providing some of the control here and some of the stability. And so there's a really incredibly awesome video that I am definitely going to show you, regardless of how inconvenient this thing comes out. AUDIENCE: It's not an anticipatory neural response. PROFESSOR: It's not. And that's because you can actually-- can perturb them almost instantaneously-- slam them with a perturbation. And so they can't anticipate this. And you actually can see, within one step, a cockroach step, they'll spring back, and their center of mass trajectory will get back on track. And so it's not like they see something coming or there's-- they're going to step on something that's going to hit them. It's that there's just-- they're running along, there's a perturbation, and then they start bouncing right away. I'm sorry about this. I don't know what's happening here. Yeah. I'll have to-- [INAUDIBLE] AUDIENCE: So in that model, is it now the spring is actually [INAUDIBLE] Because now that you have x, y, z, and it's at an angle, [INAUDIBLE] PROFESSOR: Yeah, I mean you-- AUDIENCE: Is that how you do that, or-- PROFESSOR: If you're doing the lateral plane, I think you have to have some sort of springiness like that that can push you through. But you can actually look at the cockroach running along the sagittal plane, and you don't need the [INAUDIBLE] You can-- AUDIENCE: Oh, [INAUDIBLE] PROFESSOR: Yeah, you can treat several legs as if they're just like one spring in the tripod. There's something called-- I don't know if you've seen-- you know Bob Full's work at all and these templates and anchors, where the anchors are the more complicated system, and the templates are these really simplified systems? So your SLIP little thing here can be like a template, which is a really minimalist model but captures some of the essential dynamics and does so in a concise way. And you can look at just a cockroach as following this kind of running. I mean, and it actually captures a lot of it. I mean, there's more complicated ones.
If you look at the lateral plane dynamics, I mean, you probably need a more complicated model. But so-- AUDIENCE: What did you say this model was called again? PROFESSOR: This is a SLIP. Oh, but [INAUDIBLE] and Buehler came up with this. So this is-- yeah, it has a mass of the toe. I don't know if it has a different-- I don't think it has a different name. Yeah, so let me get to this. There you go. All right. So first, the less exciting thing, I think, right here, but still pretty cool. So measuring the force on these cockroaches' steps, you could imagine, is difficult. Something they did, actually, to facilitate these experiments is they actually have these guys running on jello. I think he set up diffraction [INAUDIBLE] on this jello. And so you see that, how it changes color when they're pushing? They're actually able to figure out the magnitude and the direction of the force to some level of accuracy by looking at that. And apparently, orange jello works really well. I don't know what it is about orange jello, but if you ever want to analyze the forces on a cockroach's legs, start with orange jello, if that's the only thing you get out of this lecture. But yeah, so that's pretty cool, but that's just analyzing when it's turning. They can figure out-- that's to look at some of the springiness and how the force response and the center of mass response connect. But here. This is one of the greatest experiments of all time right here. All right, so this is looking at the perturbation experienced by these cockroaches. So what they did, they bolted a cannon to the back, because pulling strings and stuff like that, it's not fast enough. It's nowhere near fast enough. This cockroach is running. Aw, come on. What's going on here? All right. Really sorry. It's running. Bam. Perturbation. The little cannonball actually hits it on the way back, you see. [LAUGHTER] That's not really fair. That's double perturbation. But they tracked the center of mass of this guy, and actually, you could see it gets back on track in less than a step. And if you look at the time scale of this and the time scale of their monosynaptic reflex, it happens too quickly. And so they think it's actually compliance in the legs and everything like that that's just passively tuned such that it gets banged to the side and rights itself. AUDIENCE: Will you put this on the course website? PROFESSOR: Actually, I think-- I mean, it's Bob Full's video, but he may have it available. But a great thing is that the guy who did this thing-- Devin did it. And he said-- and this is all he said, so he didn't go through the full thing. But he's like, you'd be surprised by how little gunpowder is necessary to perturb a cockroach. [LAUGHTER] And so you could only imagine the first cockroach they got out there. And they're like, ah, this is about the right amount and just blows the cockroach up or launches it across the room or-- so that would have been a fun trial in an experiment to watch from a different room, I think. Yeah, we'll watch this one more time. AUDIENCE: [INAUDIBLE] PROFESSOR: [INAUDIBLE] But you see, that's just a fantastic little experiment. And the perturbation, as you see-- I mean, it doesn't know that's coming. That hits it like that. It just responds almost instantaneously. So I think that's really cool. And that shows this-- I mean that's SLIP in the lateral plane.
And it shows that these springs and this compliance in the actual-- in the animal can actually do things that just a control couldn't do, that the time scale of the response can be faster than control could achieve. I don't know if I have-- yeah, I don't think I have these [INAUDIBLE] So does anybody have a question? Yeah. OK, so that's nature having this spring bouncy compliance. But you can see Raibert's hoppers, back in just the '80s and '90s, did amazing things, too. If you look at-- let's see, where's this 1-D monoped? There. Some of these videos are pretty crappy, but there. So that's just a little monoped running around. Do you see? And I mean, it's constrained to be in the plane. It's on a boom, but it could fall down forward and everything like that. It's not like it's free to roll. So it's achieved the stability bouncing around that circle. But they thought they could do more than that. If you look at-- they built bipeds. This is-- I think, yeah, Russ grabbed this from VHS tapes. So what's actually funny is that I think this is the fastest biped around, but it wasn't quite that fast. That's skipping frames, but I think it's still-- unless something has changed in the last year. It runs like 2.2 meters a second, which I think is actually still the fastest biped. But Russ would know if anything has changed. AUDIENCE: Is it spinning around? PROFESSOR: Pardon? AUDIENCE: [INAUDIBLE] I'm sorry. It looks like it's spinning around but it's not [INAUDIBLE] PROFESSOR: Yeah. Oh. But check out that. So here-- oh, come on. There. So they figured they could do more. Here's a biped. So this one isn't on a boom. Bam. Check that out. Look at this one more time. Running there, [INAUDIBLE] speed, like gymnastics. That's like a robot doing a front flip. Then here, this is pretty impressive, too. Yeah. Right over those stairs like nobody's business. AUDIENCE: Is there a [INAUDIBLE] PROFESSOR: If what were out of phase? If the-- AUDIENCE: [INAUDIBLE] PROFESSOR: If the stairs were out of phase? I don't know. You could probably imagine setting up stairs that would make it easier and stairs that would make it harder. So yeah, you probably could come up with something that would be problematic if it had to hop. Well, maybe it could hop on one foot for a while. I don't know. It's on a boom. But yeah, so I mean, these are-- I mean, even now, you can look at these things, and you're-- I mean, they would be amazing if someone just achieved this, but these were back in the '80s and '90s that people did this. And so I mean, you've probably seen Big Dog. Raibert worked on Big Dog, too, and it's like 20 years later than this stuff. And it's still state of the art, that kind of control and similar control in a lot of ways. We can do some more of these. Oh. Oh, there's one that's-- let's see. And the thing is that not only do these guys do these crazy things, but their robustness is really pretty incredible. I mean, a lot of these, like the Honda robots, walk carefully on just flat ground and everything like that. Here. [INAUDIBLE] towing him along. It's probably not the most comfortable. It looks like it's a little bit less smooth. But I mean, it's running along sidewalks outside of MIT. They have-- this other one's [INAUDIBLE] just running by itself on grass and everything, too. Oh, the quadrupeds are pretty cool, too. That's a quadruped running down the hallway.
And they actually did a cool paper called "Four Legs Running as One," "Four Legs that Run as One," something-- "Running With Four Legs as if it Were One"-- there we go-- that used the same kind of control ideas to actually get these quadrupeds to run based on the same ideas. And you see that thing's moving pretty fast and pretty robustly right down the hallway like that. So it's really pretty amazing capabilities, even today. There's one I really wish I could find for you. Yeah. And so the interesting thing is that some more recent work, they've-- oh, let's look at [INAUDIBLE] Yeah. Check that out. Isn't that just amazing, right? And the actuators that these guys use are hydraulics at the hips and then pneumatics along the leg, so [FFFT] with a pneumatic. Need that so you can rocket yourself that far. But it's just incredible. Yeah. So something that people are working on more recently is this thing called SLIP walkers. So you can imagine if you have a compass gait with springy legs, right, as its spring constant gets very high, it starts behaving just like a compass gait. If the spring constant goes to infinity, it's rigid, it's going to be just like a compass gait. If you let those springs get looser and looser, it's going to start being more like one of these bouncy bipeds, right, more like this SLIP behavior. And so actually, you can get stability properties-- I mean, it can be stable through a relatively broad range of bounciness so that the same robot can vault over its legs in walking and then start bouncing and start running. It actually could do that transition. And that's a cool thing also that you see. I should have brought this up at the beginning when we were talking about biology. But those transitions are interesting, because if you look at the efficiency of human locomotion-- so you can measure O2 consumption, and you can see how efficient it is to walk at different speeds and to run at different speeds. You can look at the efficiency curve for-- I believe this is speed, and this is, let's say, efficiency. And so you have something with walking and some curve like this. And then running, you have some curve like this. I mean, quantitatively, these could be wrong, but this is a qualitative feature. So if you look at the efficiency of a person walking and you just put them on a treadmill and just tell them, start walking, speed it up, speed it up, and tell them just to transition to running whenever they want, they'll transition right here. That switches. And so you know the feeling, I'm sure, where you're walking faster and faster and suddenly it just feels more comfortable to just start jogging. Apparently, that happens when the efficiency of that motion outweighs that of walking, because obviously you can walk where it's uncomfortable to walk and run where it's uncomfortable to run. But the natural transition happens at that efficiency bifurcation. AUDIENCE: [INAUDIBLE] PROFESSOR: Pardon? Yeah, that should be cost. Now it's even more skewed. There we go. Thank you. Yeah, so [INAUDIBLE] SLIP walkers, I think. Hopefully, you can get some of this in transition, but I'm not sure exactly where they are right now. I don't know if they have any videos of those. There's some papers about them. And I think that's all the stuff that Russ wanted me to convey.
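A tiny numerical illustration of the crossing-point idea sketched on the board, not from the lecture: with two purely made-up cost-of-transport curves (the real ones come from O2 measurements), the natural walk-to-run transition speed is just where the curves intersect.

```python
# Illustrative only: invented cost-of-transport curves for walking and running.
from scipy.optimize import brentq

walk_cost = lambda v: 1.0 + (v - 1.2) ** 2          # cheap near a preferred walking speed
run_cost = lambda v: 2.2 + 0.1 * (v - 3.0) ** 2     # flatter, cheaper only at high speed

# The preferred transition happens where running becomes cheaper than walking.
v_transition = brentq(lambda v: walk_cost(v) - run_cost(v), 1.2, 3.0)
print(f"transition speed ~ {v_transition:.2f} m/s (with these made-up curves)")
```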
So we just have some idea of this culture of these cool running models and these springy things and these preflexes and stuff, and in nature, the fact that springs are critical to locomotion of horses and the stability of the cockroach running and everything like that. So if anyone has any questions, I'd love to answer them about anything. AUDIENCE: When you were saying how they were-- the transitioning using the spring model between walking and running-- PROFESSOR: Oh, the SLIP walkers? AUDIENCE: Hmm? PROFESSOR: The SLIP walkers? AUDIENCE: Yeah, the SLIP walkers. Is it still [INAUDIBLE] elastic collision, or do they [INAUDIBLE] PROFESSOR: I think in practice. I think they have idealized legs. I think they've done simulations with idealized legs, too. But you can do either way, I'm pretty sure. I haven't looked at that work as closely as I probably should have. But yeah, for the walking, vaulting behavior, you need either a toe thing that's going to-- or you can let your spring constant go to infinity. And then you'll have a rigid impact, right. So you can treat it that way, too. But then yeah, if you loosen it up, then you're not going to have the intermediate still lossy behavior. But yeah, I think they have done stuff with idealized legs and probably non-idealized ones, too. So anything else? AUDIENCE: If you get it just right, can you adjust the spring constant so that you'll get the push off right before landing? PROFESSOR: Oh, you mean toe off, kind of, or the toe-off thing that pushes you forward that [INAUDIBLE] efficiency? AUDIENCE: Right. PROFESSOR: I think you should-- I mean, I'm not sure about how the tuning would relate to the walking things. But when you have these idealized legs with the spring, I mean, you can't be more efficient than if you're vaulting over them. So I don't know if it connects to the-- I mean, it seems like it could connect to that toe off launch, but I'm not sure if that is explicit, just having your leg strike not be a dissipative inelastic collision is going to help you. But I don't know if you get the same kind of-- yeah, it seems like you should, but I don't know if that exact point-- I haven't seen that exact point. But it could easily be addressed in other papers. Yeah, I should-- I don't know if I have the references right now, but I can make sure that Russ gives those to you or get those to you myself next lecture if you want to look up some of the SLIP walkers, because that's pretty new stuff, I think. Yeah. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_14_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK. Welcome back. Well, sorry, I wasn't here. It's more dramatic for me, I guess, than if I had been here on Tuesday. I hope John did a good job covering for me. I'm sure he did. So you've learned about walking robots. You've learned about lots of robots. You learned a handful of very powerful techniques, I think, for designing motions for these robots. Some of my favorites these days are things like the dircol-type methods. So let's consider the problem of taking some robot described by our standard, some non-linear dynamics form, and taking that robot from an initial condition x0 and getting it to some final condition. I don't even care about the time. So you're just getting it to some other state, x goal. So what tools have we already got that could help us drive this system from some initial condition to some goal state? The answer is a lot of tools, right? If the state space is small enough, you could imagine designing an optimal control cost function that we could do dynamic programming on. That would get us to that goal if we only penalized ourselves for being away from the goal. We could do a shooting method, right? If we're using our SQP, our SNOPT type interface to the shooting methods, then we could provide a final value constraint, which says I want the endpoint of my trajectory to be exactly at the goal. And that would solve this problem in some cases. But both of those methods have problems, right? The DP doesn't scale. The shooting methods scale nicely. But if you ask your shooting method to drive you from x0 to x goal, there's some chance it's going to just say, I can't do it, right? I can't satisfy that constraint because it's based on a local gradient optimization. And if there's local minima, SNOPT will just say, I can't satisfy the constraint x at time final equals x goal. Happens a lot. If you're looking at hard enough, interesting enough problems, it'll happen to you, OK? So what do you do? Do you just say, oh, find a new research problem, tell my advisor I've got to find something else? OK, there's more things you can try. The methods I want to talk about today are the feasible motion planning algorithms. So motion planning is a very broad term. It's used all the time in robotics. I'd say roughly everything we've done to date has been a motion planning algorithm. It's like saying it's a control algorithm. It's a very general term. Some people debate whether an algorithm is a motion planning algorithm or an optimal control algorithm or whatever. I think that's sort of a waste of time. I would say they're all motion planning algorithms. Most of the things we've talked about so far I would call optimal motion planning, where we actually had a cost function flying around to do it. Oftentimes, motion plans go from a start to a goal. Oftentimes, they're open loop, but that's not really defining. You can do feedback motion planning. Motion planning is just a very general term saying I'm designing the motions of my machine.
The reason I want to use motion planning in the title for this lecture and next lecture is because there's an important class of motion planning algorithms that are not covered by the optimal control algorithms. And that's these feasible motion planning algorithms. So the optimal motion planning algorithms try to get me from x0 to x goal in some way that's good as scored by some cost function. Feasible motion planning algorithms aspire to do less. They just say, I'll get any trajectory that gets me from the start to the goal. Because they sacrifice their claims on optimality, oftentimes these algorithms will work for more complicated systems. So the message I want to sort of deliver today is that there are a lot of good feasible motion planning algorithms out there in the world. We'll talk about some of the most exciting ones, I think. And actually, even if you care about optimality, even if deep down in your core you say this is a cost function that I must optimize for my life to be complete, you still should care about feasible motion planning algorithms because they can do something like seed a dircol method, let's say, and get it out of local minima. So let's say I have an Acrobot that I'm trying to swing up. And I've got a table right here, so I can't swing that way. And whatever my initial guess at the trajectory, the system keeps banging up against this. It can't figure out a way to turn around and go back the opposite way to get to the top. Or maybe there's a table that you have to go through just in a certain way. I would say, if your shooting method is having trouble finding its way to the goal, maybe you should back off on optimality. Just find any old trajectory that gets to the goal. And then take that result of that motion planning algorithm, hand it to dircol, and let it then do the rest of the optimization. We'll say more about this. I just wanted to give you some context about why we want to talk about this extra set of algorithms. In general, we're going to expect these to scale to slightly higher dimensional systems than the optimal control algorithms we've talked about so far and to be more global than the shooting methods. There's a great book on all of this stuff called-- I guess it should be underlined-- Planning Algorithms. So I just spelled planning wrong. So I'm completely sleep deprived. If I do stupid things like that, call them out. I'll fix them. Something about kid number two, the first kid stays awake while the other kid sleeps and vice versa. It just makes it that much more interesting. Yeah. And Planning Algorithms, the book by Steve LaValle is actually-- Steve's got it on his website, the full version. You can download it for free. So it's a nice, accessible text that you can check out if you like it. OK. So before we get into our first feasible motion planning algorithm, let me give you just a little sort of culture. The motion planning algorithms, I think, actually grew up not so much in the controls community, but in more the artificial intelligence community. And there's a reason for that. There are serious people out there that think the only thing there is to intelligence is the ability to do search. So I think that artificial intelligence via search, to some extent, sounds ridiculous. But it's pretty hard to disprove and some people really think it. But that's the way it happened. So let me give you some cultural examples of where that came from.
Some of the original efforts in artificial intelligence, they thought, my computer program will be intelligent if it can play a game like checkers or chess. And as early as the '50s, Art Samuel was building checker-playing programs that are basically doing search in order to try to figure out how to beat their opponent in chess-- or in checkers, sorry. Checkers-- Samuel in the '50s. Chess, we can leap forward to IBM's Deep Blue, let's say in the last decade. These are computer programs that, at times, can play with the sophistication of a trained human. And they're based on simply having the rules of the game programmed, some databases of moves that they've stored up, a lot of them in the Deep Blue case, and search. Finding the way to move my robot from x0 to x goal you can imagine being very, very analogous to taking my computer game and finding a path from an initial board configuration to a winning board configuration. They're exactly the same problem. So because of things like computer games, the artificial intelligence community really started building up these search algorithms. And they were very fundamental. The other big push, I think, from the AI community was in ideas like theorem proving. So an AI zealot of 40 years ago might have told you that mathematics would be obsolete by now. Because all you have to do is program the basic rules of mathematics into a set of rules that your computer program could evaluate, and then proving whether something was true or not was just a matter of finding the right chain of small logical deductions to put together to be able to prove that P equals NP or something like that. OK. And actually, I joke, but that's actually a very powerful thing that works today. If you've used Mathematica or Maple and you've asked it to simplify, that's a result of theorem proving work that people did in artificial intelligence a long time ago. And again, it's a matter of designing the rules of the game, which is a collection, in that case, of mathematical primitives, and finding a path from your initial knowledge to some proof, let's say. But even going beyond that, there's people out there that-- my favorite instance of this is there's a guy, Doug Lenat, who's somewhere in Texas right now typing-- well, he's got teams of people typing random factoids into computers. He's got this project called Cyc. How many people have heard of Cyc? Yeah? So I think this guy believes that, if I can get enough people to type in things like, dogs chase cats and balloons go up or something-- I don't know, enough random factoids into this computer-- and he builds up an ontology of knowledge, some big collection of facts, and a good search algorithm to back it up, that his computer will achieve intelligence. It'll be able to answer any query. It'll be completely indistinguishable from a human. You could ask it any query. And as long as it's got the right chain of factoids stored away and a good search algorithm behind it, then this thing is intelligent. And people have been typing things in for a really long time. You can go, and you can get the student version or whatever and ask it questions. And you can get a research version and ask it questions. And people use it. And I mean, Google, I think, works better because it's accessing WordNet, which is basically an ontology of synonyms and things like this. These sound crazy, but they're the backbone of some of the things that you use every day. OK.
So for whatever reason, for these kind of reasons, the search algorithms, the motion planning algorithms that we're going to talk about, which are today used a lot in robotics, grew up under the computer science umbrella under artificial intelligence. So the term motion planning typically implies that you've got some dynamical system you're shoving around, let's say, but it's clearly just an instance of this search problem. When people talk about motion planning in robotics, there are problems that people care about a lot. A majority of the motion planning literature in robotics is concerned with kinematic motion planning, often with obstacles or something like this. So if I have a robotic arm with 10 degrees of freedom and I want to reach through some complicated geometry to turn a doorknob or to reach down and pick up some part from assembly, then that's a complicated geometric problem. Typically, you assume that the systems are fully actuated. You don't worry about actually handling the trajectory once you get there. That's normally assumed away. And they assume that all trajectories are feasible. They're just trying to find some path through the obstacle field. So one of the key problems that you can think about in that world is the piano movers problem. How many people know the piano movers problem? OK. How many people have ever moved a couch into a dorm room? You all know the piano movers problem then, right? Basic task is you have some 3D shape. You have a bunch of obstacles. You have to find a position and orientation trajectory that will maneuver your big piano through a world of obstacles. It's just like going up the stairs with your couch, you know, marking the walls and bending your couch as you go. And those are sort of some of the driving problems in the motion planning space. So it's things like that, finding a collision-free path for a complicated geometry through an obstacle-based environment, just trying to give you a little culture here. And in that world, one of the most important concepts, I think, is the configuration space concept, which actually-- you know who came up with configuration spaces? He's sitting right upstairs. Tomás Lozano-Pérez is, I think, credited with the configuration space idea. And the idea is that these problems can get a lot simpler. Let's say I have a square robot going through some complicated obstacle field. And I want to find a path from some start to the goal. Well, instead of reasoning about the geometry of this relative to the geometry of this all the time, the configuration space tells you you should actually add the volume of your robot to your obstacles, coming up with configuration space obstacles. If you actually add the volume of your robot to the obstacles-- I'm doing this very hand-waving here. But then you can turn the problem of driving a big geometry through another geometric field into taking a point and driving it through the geometric field. And that's a critical, critical idea in the world of kinematic motion planning. We don't really care about kinematic motion planning in this class. We've got enough troubles even without obstacles. We just care about having an interesting dynamic system to move around. So the things we're talking about in this class fall under the heading kinodynamic motion planning. And of course, the nonholonomic motion planning ideas are very related, too. We care about the dynamics. We care about things that are forced to follow some trajectories, like nonholonomic systems are, too.
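A minimal sketch of that configuration space trick for the easiest possible case, not from the lecture: an axis-aligned square robot that only translates, among axis-aligned rectangular obstacles. There, "adding the volume of the robot to the obstacles" is literally inflating each rectangle by the robot's half-width; the rectangle representation and function names are assumptions for illustration.

```python
# Configuration-space obstacles for an axis-aligned square robot (side 2*h),
# translation only: inflate each rectangular obstacle by the half-width h.
# After inflation, planning for the square reduces to planning for a point.

def cspace_obstacles(obstacles, h):
    """obstacles: list of (xmin, ymin, xmax, ymax); h: robot half-width."""
    return [(x0 - h, y0 - h, x1 + h, y1 + h) for (x0, y0, x1, y1) in obstacles]

def point_in_collision(p, cobs):
    """Check the robot *center* against the inflated obstacles."""
    x, y = p
    return any(x0 <= x <= x1 and y0 <= y <= y1 for (x0, y0, x1, y1) in cobs)

# Example: a 0.2-wide square robot and one unit obstacle.
cobs = cspace_obstacles([(1.0, 1.0, 2.0, 2.0)], h=0.1)
print(point_in_collision((0.95, 1.5), cobs))  # True: the square would clip the edge
```

With rotation allowed, the same idea still works, but the configuration space picks up an orientation dimension and the inflated shapes are no longer simple rectangles.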
So if you're out there seeing papers and talks and everything about motion planning, I just want you to sort of see that they're certainly related to the optimal control things we've talked about. But sometimes they let go of optimality. They're often about kinematics, but there's a good subset of them which is thinking about exactly the problems we care about with this kinodynamic motion planning. Excellent. OK. So now, let's do some motion planning. Culture is there. When we did dynamic programming, I already sort of forced upon you the idea that we could potentially, at great cost, discretize our system in state and actions. So let's do the same thing when we start off with motion planning and start with discrete planning algorithms. Let's say we've got our phase plot of our simple pendulum. For kicks right now, let's just start off with the trivial discretization, let's say. We're going to bin up our space like this, call each one of those a state, and start talking about a graph-based representation of the dynamics of the system that we can use to find a path, let's say, from our initial state to some goal state. This isn't the only way to do discretization, but it's a good way to start thinking about it. So that turns the problem into some graphical model, where the control actions determine our state transitions. I like to actually call them s and a in the discrete world. So people know a lot about doing search on a graph. I don't want to bore you with it, but I want you to at least know the algorithms that are out there and be able to call upon them if they become useful. So let's say we've butchered up our system like this, and we want to find a path from some starting discrete state to some goal state with a discrete set of actions. How can we do that? AUDIENCE: [INAUDIBLE] search algorithm, like A star. RUSS TEDRAKE: A star is a perfect example. Yeah. Dynamic programming is on the table, too. Dynamic programming, that's actually the way I-- the reason I drew that picture before is because we talked about that as a way to think about dynamic programming. And dynamic programming is incredibly efficient if what you care about is knowing the optimal feedback policy, how to get to the goal from every possible initial state. I mean, the problem with that is, if you have to look at every single initial state, then it's not going to scale to high dimensions because there's going to be a lot of states. If you have a slightly higher dimensional system and you want to find a start to the goal, but just a single path-- you don't care about every possible state-- then you can do better than dynamic programming by being more selective with your dynamic programming algorithm, doing these Dijkstra type and A star type algorithms. So let me just sort of do that quickly, so you know that that's there. So dynamic programming, again, it's very efficient. It goes to a global optimum. Typically, DP is used to describe the version where you're trying to solve for every state simultaneously. You can do a bit better, you can be more selective, if you just care about from a start to a goal, like x0 to xg. It's actually a little bit surprising to say that I can actually find my path from a start to a goal and know that I'm at a global optimum without ever expanding every single node. And then sometimes we don't even care about optimality at all.
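Here is a rough sketch, not from the lecture, of what that "bin up the phase plot" step looks like for a torque-limited pendulum: snap each integrated step to the nearest grid cell so the dynamics become a graph. The grid resolution, torque set, and time step are assumptions, and they have to be matched to each other, which is exactly the discretization error discussed later.

```python
# A crude graph discretization of the torque-limited pendulum (all values assumed).
import numpy as np

g, l, b, dt = 9.81, 1.0, 0.1, 0.25          # gravity, length, damping, time step
torques = [-2.0, 0.0, 2.0]                   # discrete action set
ths = np.linspace(-np.pi, np.pi, 51)         # theta bins
thds = np.linspace(-10.0, 10.0, 51)          # theta-dot bins

def to_bin(th, thd):
    """Snap a continuous state to the nearest grid cell (the lossy step)."""
    th = (th + np.pi) % (2 * np.pi) - np.pi  # wrap the angle
    i = int(np.argmin(np.abs(ths - th)))
    j = int(np.argmin(np.abs(thds - np.clip(thd, thds[0], thds[-1]))))
    return i, j

def successors(node):
    """Graph transitions: one Euler step per action, snapped back to the grid."""
    th, thd = ths[node[0]], thds[node[1]]
    out = []
    for u in torques:
        thdd = u - b * thd - (g / l) * np.sin(th)   # unit-mass pendulum dynamics
        out.append((to_bin(th + dt * thd, thd + dt * thdd), u))
    return out

print(successors(to_bin(0.0, 0.0)))          # three actions, three neighboring cells
```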
Like I said, some of these algorithms are just trying to find any old path to the goal, and then you can certainly imagine finding your path to the goal without expanding every possible node. They all have a very simple recipe. Basically, you keep a prioritized Q. How many people know what a prioritized Q is? I should say it's often called Q, but it means queue. It's another sleep deprivation thing there. So a prioritized queue is a data structure that we use in computer science that I'll describe quickly here. So we basically have a list of nodes that we want to consider expanding next stored in some list, but they're stored in that list in an ordered fashion, ordered by how likely it is that we want to check that node next. Depending on the ways that you add nodes into the prioritized queue, you can do any number of the standard algorithms, the breadth first search, the depth first search. Dijkstra's, let me make sure I spell it right. D-J-I, right? What is it? AUDIENCE: D-I-J. RUSS TEDRAKE: D-I-J? OK, good. Thanks. And you could do A star. There's more. There's iterative deepening, things like this. They all go basically like this. I have a queue of things I'm about to explore. I start off with the starting state. I put that in my queue. Next step of the algorithm-- take something out of the queue. Consider its children. And then take the first one out of my priority queue, repeat. If I consider the children in a first in, first out kind of way, if I add nodes and just say, as soon as I pick up a node, I'm going to go ahead and stick those nodes into my queue in a sort of a first in, first out kind of way, then what I'm going to do is I'm going to proceed to look for the goal by going here, then here, then here, then here, then here, right down the tree. That's a breadth first search. If I do a prioritized queue where, when I add the children, I just do a last in, first out, then I'll go here. Then I'll go here. Then I'll go here, and I'll go as deep down the tree as I could possibly do before I come back and go down to the other nodes. If I keep, let's call it, a cost to come, to differentiate it from the cost to go-- if I keep a record of how much cost I incurred for my cost function getting to that node-- and I always select to expand the node that would be the cheapest, then that's called Dijkstra's algorithm. It's exactly equivalent to dynamic programming, but it's in the forward sense. And then A star is an even nicer way to do that, which combines a cost to come plus some heuristic. My goal is to make sure you guys know these are out there. There's lots of good, easy places to read about these algorithms, especially Steve LaValle's book. The only surprising thing really about these algorithms is, first of all, that you can often very efficiently find your way to the goal without expanding as many nodes as you might think. A star, in particular, if you can find a good heuristic-- a heuristic, in this case, is an estimate of the cost to go, a guess at your value function. If you can guess your value function in a way where you always underestimate it-- again, I'm not going to bore you with all the details here-- then actually you can sometimes expand very few nodes and find your way right to the goal. They're very efficient algorithms. People out here use A star all the time. For the Grand Challenge, I think half the teams had A star planners running on their vehicle. For LittleDog, we're picking footholds to get from the start to the goal.
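The prioritized-queue recipe just described fits in one generic routine; here is a minimal sketch in Python, not from the lecture, where swapping the priority function selects the algorithm. The toy graph, its costs, and the heuristic values are made-up assumptions; the heuristic is an underestimate, so the A star variant returns the optimal path.

```python
# One search routine; the priority function picks BFS vs. Dijkstra vs. A star.
import heapq
import itertools

def best_first_search(start, goal, successors, priority):
    """priority(cost_to_come, depth, node) -> key for the prioritized queue."""
    counter = itertools.count()     # tie-breaker so heapq never compares nodes
    queue = [(priority(0.0, 0, start), next(counter), start, 0.0, 0, [start])]
    best = {start: 0.0}             # cheapest cost-to-come seen per node
    while queue:
        _, _, node, cost, depth, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        for child, step_cost in successors(node):
            new_cost = cost + step_cost
            if child not in best or new_cost < best[child]:
                best[child] = new_cost
                heapq.heappush(queue, (priority(new_cost, depth + 1, child),
                                       next(counter), child, new_cost,
                                       depth + 1, path + [child]))
    return None, float("inf")

# A toy weighted graph and an underestimating heuristic (all invented).
G = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0), ("g", 6.0)],
     "b": [("g", 1.0)], "g": []}
h = {"s": 2.0, "a": 2.0, "b": 1.0, "g": 0.0}

bfs = lambda c, d, n: d             # shallowest first: breadth first search
dijkstra = lambda c, d, n: c        # cheapest cost-to-come first
astar = lambda c, d, n: c + h[n]    # cost-to-come plus heuristic

print(best_first_search("s", "g", lambda n: G[n], astar))  # (['s','a','b','g'], 3.0)
```

Depth first search is the same routine with a last-in, first-out priority (for example, negative depth), exactly as in the board discussion.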
We have an A star planner in there for LittleDog-- discrete time planners. These things are real. They're good. They're fast. If you can find a good heuristic, they can be very, very efficient. If you care about a continuous time, a continuous state, continuous action plan that gets me to the goal, then they're not as satisfying because they relied on this discretization. So we know, when we're discretizing this thing, we're going to have some problems. I'm going to just assume that there's a control action that gets me sort of squarely in the center of this next discretized cell and then go like this. If I were to execute my actions from my discrete planning algorithm on the continuous time system, I'm going to quickly deviate from my plan. But these things are an excellent way to start to seed something like a direct collocation method. Find yourself a feasible optimal plan. And then stabilize it, let's say, with the LTV feedback. Good. So that world of planning algorithms is out there. But about, I don't know, 10 years ago now, something big happened in the robotic motion planning world. People started using sample-based motion planning algorithms. And there was sort of a revolution in people using motion planning in robotics. It's also sometimes called randomized motion planning. And these we are going to dig a little deeper into because we use them all the time. I think they're an important class of algorithms. So two of the most notable sample-based motion planning ideas are the rapidly exploring randomized trees and the probabilistic roadmap. Another poll, how many people know about RRTs? Awesome. How many people know about PRMs? OK. They're pretty related to each other. Let's talk about rapidly exploring randomized trees first. So both of these are an attempt to get away from these very coarse discretizations like this and sort of embrace the fact that we're working in a continuous state space. Instead of discretizing it at some known sites, we're going to discretize sort of at randomly chosen samples. And as we add more and more samples, we're going to worry less and less about the discretization. And eventually, we're going to have something nice to say about the continuous space. So the rapidly exploring randomized trees, the RRTs, are a very simple algorithm to explain. Let's think about it in terms of moving a 2D piano through a 2D world in configuration space. So we've got a point moving through a world. It should really be 3D, actually, if there's an orientation to the piano. But let's just look at the 2D problem here. So I've got a point, an initial state, and a goal in a 2D world. Let's call it xy. And I've got some obstacles. I'd like to find a path from the start to the goal without explicitly discretizing the space. The RRTs do it like this. Pick a point at random in the space. Try to connect your current tree, which in the initial state is just this x0, to that new point. Let's make it exactly clear that I can't do that. My attempt to directly connect this is going to fail because it goes through an obstacle. So I can't add that node. Sample again randomly. At some point, I'll get a node, like, here. I'm going to make a branch of my tree that connects those two points. Pick a new random point. Let's say you get something here. Now, you have to choose between these two random points, decide which is the closest. The closest isn't always an easy thing to decide if you have complicated dynamics. In this problem, it's pretty easy.
We could just use something like a Euclidean distance. And I'll connect it up like this. I get another random point. I connect it up like this. I get some random points that go whatever way. If they end up here, I just throw them out. If they end up somewhere that I can't connect, I'll just throw it out. But at some point, I'll start filling the space with these trees. Lo and behold, if I do it enough, it doesn't actually necessarily take that long until I get up here and get close in the vicinity of that goal, very simple random algorithm. Do you understand how it's sort of avoiding the curse of discretization? It's just picking points at random from this continuous space. So there's no constraints, no explicit constraints that it lies in the center of some bucket or something like that. Nodes could be arbitrarily close to each other. I can get arbitrarily fine sort of discretization. What's really cool, what makes these things tick, is that the idea is so simple. The code is so easy to write. And they really, really work fast. Let me just show you that basic example. This time, I'm going to do it even without any obstacles, but just to show you the basic growth of these kind of planners. Awesome. OK. So my initial condition is just that blue point. And I'm going to start growing a tree in every random direction. There's not even a goal in this problem. This is just to show you the basic growth of the tree. Every point I grow at random, every time I pick a point from a uniform distribution covering the space, I'm going to find, in the algorithm, the closest point on the tree. And I'm going to try to grow my tree towards that node. Yeah. Sort of understand what's happening there? It's pretty faint, I think, but-- all right. So watch what happens if I just let that run. OK. Trivial algorithm, it has what a lot of people like to call a fractal expansion of the coverage area. Of course, if there's obstacles, it'll take a little longer to get around there, but the algorithm probabilistically covers the state space as I add more and more points. Some people call them dense trees because they do this. If you noticed something about them, they have sort of surprisingly nice qualities. It filled the space pretty uniformly pretty quickly. It didn't just go off in one corner and start adding nodes on one side or something like that. It actually has a property that people call the Voronoi bias. The Voronoi bias implies that the tree has a bias for always growing into the biggest open regions of the graph. And that just comes from sampling uniformly. If I have some tree that has a big, wide open area for whatever reason, then with very high probability I'm going to pick a point that's inside the biggest region. And the biggest regions in the space are the ones that have the highest probability of being chosen. So probabilistically, this thing fills the biggest open spaces in the search tree just by the virtue of sampling uniformly. So you get these very, very dense, fast trees. And you can imagine, if there's obstacles or something like that inside there, it's going to work its way just around the obstacles. Yeah. AUDIENCE: Does it slow down as you get more and more nodes in there? RUSS TEDRAKE: Good. So what is the computation that goes into it? What's the expensive part of the computation would you guess? AUDIENCE: Finding the nearest neighbor. RUSS TEDRAKE: Nearest neighbor-- good. So, yeah, it slows down with the nearest neighbor, but not as much as you expect.
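The claim that "the code is so easy to write" holds up; here is a minimal vanilla 2D RRT sketch, not from the lecture, with a brute-force nearest-neighbor scan. The unit-box world, step size, and the single circular obstacle are assumptions, and for simplicity it only collision-checks the new endpoint rather than the whole edge (fine for small steps, not in general).

```python
# A vanilla 2D RRT: sample uniformly, extend the nearest node a bounded step
# toward the sample, reject steps that land in an obstacle.
import math
import random

def rrt_2d(start, goal, in_collision, step=0.05, goal_tol=0.05, max_iters=5000):
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (random.random(), random.random())   # uniform over the unit box
        # Brute-force nearest neighbor: the part that slows down as the tree grows.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if in_collision(new):
            continue                                   # throw the sample out
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:            # close enough: trace back
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# One illustrative circular obstacle in the middle of the unit box.
obstacle = lambda p: math.dist(p, (0.5, 0.5)) < 0.2
print(rrt_2d((0.1, 0.1), (0.9, 0.9), obstacle))
```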
Actually, there's two expensive parts. Only the nearest neighbor one shows up here. The other one, potentially, is the collision checker. And in some complicated systems, checking if you're inside a region, especially if you're a big robotic arm or something like that, can be actually the dominant, expensive thing. So these things, in MATLAB, everything's vectorized. It actually doesn't grow that badly with the number of nodes, but certainly the nearest neighbor calculation gets more and more expensive. By default, it's just checking the sample point at every possible node. AUDIENCE: You could use a k-d tree or something. RUSS TEDRAKE: Yeah, good. Yeah. So there are different sort of structures that you can use to make that look-up more efficient. So RRTs have kind of come in and done things to robotics that people hadn't done before. So this is a two-dimensional example. People would have said before, I can do sort of motion planning effectively in maybe 5 or 10 dimensions, a little bit more than I could do with just DP, but not a lot more. Because I'm still doing these sorts of discretizations. The guys that came out and started doing these RRTs-- LaValle being one of them, Kuffner being another one-- started showing examples of robots with 32 degrees of freedom doing pretty complicated plans. See if I have that Kuffner video there. It's an animated GIF. This is H7 at the time. This was the University of Tokyo humanoid entry before everybody started giving out ASIMOs. A little before Honda started giving out ASIMOs, this is. And it's doing things like taking that 32 degree of freedom robot, finding a plan to have it bend down and pick up a flashlight under the table. And that sort of shocked people. Now, people have gone off and used RRTs to do things like protein folding. It was a pretty hot topic in the motion planning community and these other very high complexity geometric planning problems that people just haven't done before. But even this guy, even though it's a robot, his feet are flat on the ground, yeah? And they're still just assuming that you can search in position space and then find some controller that'll stabilize it. So they're not doing the underactuated planning example. All right, so I wouldn't be talking about it if I didn't think it was relevant for underactuated planning. So let's show you the basic story here. What happens if we do it on a simple pendulum? There we go. So you see, I'm plotting only every 30 nodes or something like that. I pick a point at random. I start growing the tree. The way I grew the tree, now, is I don't go all the way towards the goal because I can't necessarily go all the way towards the goal. I just take a step in the direction of the goal. How do I take a step in the direction of the goal? I just try five random actions, let's say, and pick the one that got closest to the goal. So if you look closely, every time it expands a node, you'll see a bunch of different arrows coming out. And it picked the one that got closest to that random sample point. And it's still just using the Euclidean distance to try to find the closest neighbor. This one I think I wasn't careful about doing the terminal condition. But I want you to see what the most important thing about this is, what do you see in that plot besides a mess as it keeps adding more and more? What I see is I see the fundamental structure of the pendulum. I see these spiral trajectories.
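The pendulum version just described changes only the extend step: instead of steering exactly toward the sample, try a few random torques from the nearest node and keep the one that lands closest. A hedged sketch, not the course's MATLAB code; the damping, torque limit, and time step are assumptions.

```python
# RRT extension for a dynamical system: best of a few random actions.
import math
import random

g, l, b, dt, u_max = 9.81, 1.0, 0.1, 0.05, 3.0

def step(state, u, n_sub=4):
    """Integrate the damped, torque-limited pendulum forward by dt (Euler)."""
    th, thd = state
    for _ in range(n_sub):
        thdd = u - b * thd - (g / l) * math.sin(th)
        th, thd = th + (dt / n_sub) * thd, thd + (dt / n_sub) * thdd
    return (th, thd)

def extend(nodes, sample, n_actions=5):
    """Nearest node by Euclidean distance (a poor metric, as discussed),
    then the random action whose result lands closest to the sample."""
    near = min(nodes, key=lambda s: math.dist(s, sample))
    candidates = [step(near, random.uniform(-u_max, u_max))
                  for _ in range(n_actions)]
    return min(candidates, key=lambda s: math.dist(s, sample))

nodes = [(0.0, 0.0)]                       # start hanging at rest
for _ in range(2000):
    sample = (random.uniform(-math.pi, math.pi), random.uniform(-10.0, 10.0))
    nodes.append(extend(nodes, sample))
print(len(nodes), "nodes grown")           # plotting these shows the spirals
```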
It's amazing to me that trivial randomized algorithms like this can probe the basic dynamics of my underactuated system. All right, so you see the algorithm is, now, very simple. So I pick a point at random. That's actually a good example I just happened upon. Pick a point at random. The red is my random sample point. The blue is the closest point in the tree in the Euclidean sense. It happens to be a horrible choice for the closest point on the tree in the dynamic sense because I know, in state space, my dynamics can only go that way. So really, I'd like to have picked a point back there that was farther in Euclidean distance, but closer sort of in dynamic distance. And I'll actually talk more next lecture about how you could do that. But from that node, I try 10 different actions, see where it would have taken me, and pick the one that ended up closest in Euclidean distance, which happens to have been the one that went horizontally. That's enough to let this thing just sort of spread itself out into the world like a disease or something, just covering the space and probing the dynamics of the system. And you wait until you get close to the goal. RRTs are a very, very powerful way, now, to start trying to find reasonable plans on a robot. That's the vanilla RRT algorithm. There's a lot of ways you can quickly improve them. And actually, any of the graph-based search algorithms do this. So if I wanted to do some simple things to try to speed up that algorithm, what might you suggest? There's a couple of reasons why it felt inefficient. One of them is that it's not doing a very nice job of chasing these sort of intermediate goals, like that example showed. The other thing is it seems like it's not really making any effort to go towards the goal. So there's a handful of heuristics that people use to make these things tick. One of them is a goal bias. I mean, there's lots of ways to implement it. The standard way to implement it is with, let's say, probability of 0.05 or something like that, choose my random sample to be the goal. Otherwise, my random sample is just from the uniform distribution. That would certainly encourage the system to find its way to the goal a lot more efficiently. What's another possible way to speed this up? AUDIENCE: Bidirectional. RUSS TEDRAKE: Bidirectional, yeah. In fact, all of the search techniques, the breadth versus depth versus A star on discrete graph searches, a lot of times one of the standard things people do is they'll do a backward search. While you're growing out from the start towards the goal, you might as well start from the goal and go backwards. The problem with this is it feels like you're trying to find a needle in a haystack. If you can grow a tree in both directions, then at least you've got a lot of needles to look for. And you just wait till the trees come fairly close. And even backward search sometimes by itself can be faster. It depends on the system. AUDIENCE: Is that because, if your goal is an unstable point [INAUDIBLE], it's more likely to find it? Is that intuitively what happens? RUSS TEDRAKE: Why sometimes a backward search would work better? AUDIENCE: In cases like underactuated. RUSS TEDRAKE: It's a really good question. AUDIENCE: Because you might have trouble going to some type of fixed point, but that for sure already starts you out there. RUSS TEDRAKE: I won't put my weight behind it, but that sounds like a pretty good explanation. In practice, we always do bidirectional trees.
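The goal-bias heuristic described above really is one line of sampling code; here is a hedged sketch, where the 0.05 is the probability mentioned in the lecture and the sampling bounds (a pendulum-style state space) are assumptions.

```python
# Goal-biased sampling: with small probability, propose the goal itself.
import math
import random

def sample_state(goal, p_goal=0.05):
    if random.random() < p_goal:
        return goal                                   # occasionally chase the goal
    return (random.uniform(-math.pi, math.pi),        # otherwise, uniform sample
            random.uniform(-10.0, 10.0))
```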
AUDIENCE: Even when you want to do both, you never just go backwards. RUSS TEDRAKE: Yeah. AUDIENCE: OK. RUSS TEDRAKE: But certainly, in the kinematic case, you could definitely imagine cases where the backwards could be faster. Let's say it's hiding in some little island, and you just want to get out. And that's much easier than finding your way into the island, let's say. So certainly, we've noticed sensitivities to growing RRTs around very unstable fixed points. And it might be exactly what you said. It might be. Yeah. AUDIENCE: So is the Euclidean norm the best measure for this? RUSS TEDRAKE: Good. The Euclidean norm is a horrible metric for this. What do you think would be a better metric? AUDIENCE: Energy or time [INAUDIBLE]. RUSS TEDRAKE: Awesome. So energy's a good candidate. In fact, I think, depending on what we put on the problem set, we might ask you to try an energy metric to do it. So almost everybody says this would work better with a better distance metric. And almost nobody can tell you a better distance metric for these kind of systems. So we actually have a couple of ideas that we've been working on in lab. Elena has been working on some LQR-based distance metrics, for instance. So what did you just say? You said time to arrival. How would you compute a time to arrival? AUDIENCE: You have to do a minimum time. RUSS TEDRAKE: Right. AUDIENCE: Minimum time control. RUSS TEDRAKE: Exactly. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: So you just guessed the one that's one of my favorites right now. So one idea here, not fully proven yet, but it's still in the oven here, is that, every time I pick a random sample point, I'll linearize the system around that point. Say that's a cartoon of a linearization. Compute an LQR stabilizer around this. And then use the value function for a minimum time problem. So compute a min time LQR actually around this point. And then use the value function as the distance metric. So I like that idea a lot. There's a couple subtleties that go into it. So when you pick a random point in state space, if it's got a velocity or-- most points you pick randomly in state space are not stabilizable. So you can't do an infinite horizon minimum time LQR. You have to do a finite horizon and then potentially search over horizon times, but you can do that, actually, pretty efficiently. The other issue about it is that it's actually not a proper distance metric. Why is it not a proper distance metric? Anybody? AUDIENCE: It doesn't [INAUDIBLE]. RUSS TEDRAKE: So I think-- [INTERPOSING VOICES] It probably doesn't follow the triangle inequality. It doesn't even have symmetry. If I were to linearize around this point and compute its distance, or linearize around this point, there are just absolutely no guarantees that it would be the same. But I don't care about that actually. I just have to put it in the paper. As long as it works, right? It's also more expensive a little bit, but it can make a dramatic improvement in the way it spreads. There's another idea. I was actually planning to talk about some of the more subtle ideas next time. But since we're at it, the other way to do it is to change the sampling distribution. So Alec Shkolnik in our group has got a very clever way to grow these trees, where it uses a sampling distribution that doesn't try to go where it can't go. Let me see if I can say that more carefully. I told DARPA about it three days ago. So I just want to show you the picture. OK. So this is Alec's algorithm for being more clever at expanding.
He changes the bias to the point where this is the tree we just saw, basically, right? And his new tree looks like that. So it looks pretty compelling. How did he do it? He does this thing he calls reachability guided RRTs. Every time he expands a node, he also expands what would happen if I had applied maximum torque and minimum torque. So he expands a couple nodes that represent a cone of feasibility. The algorithm turns out to be trivial. So now, I take my Euclidean distance metric or whatever it is, pick my random sample point. If the closest point is one of these boundary feasible things, that means there's room to grow the tree. If the closest point, however, is one of the nodes in my actual graph, that means I'm not feasible. I said that quickly. But long story short, there's a trivial check here, which effectively changes your sampling distribution just by saying sample a point. If the closest point on there is not one of my boundary regions, but one of my normal tree elements, then I know I can't grow there. And if I get that, I just throw that point away, pick another point. What that does is it effectively changes the sampling distribution to be the green region. It just throws out all these unnecessary points that you can't get towards and changes your sampling distribution, so it's only trying to grow in places that it can actually go, to where the tree's got a chance of growing towards. So I think the world is still sort of small enough that people are still finding pretty clever little tricks to the RRTs that make them grow a lot better in constrained systems like this. So that happens to be, I think, a very good one for our kind of problems. But it also points out what RRTs are bad at. So what kind of problems would you expect an RRT to work on? And what would you not expect an RRT to work on? The classic case of something you shouldn't expect an RRT to do very well on is something that looks like this, let's say. x start is over here. x goal is over here. And my obstacles look like this. In general, I think it's fair to say RRTs don't like tunnels and tubes. What's going to happen if I run my RRT on this sort of a system? So you know, I got my tree here. Pick a point here, fine. Pick a point here, I'll quickly cover this area. Pick a point here, I throw it away. Pick a point here, I throw it away. Pick a point right here, can't get to it. It doesn't do me any good. You know, after a very long time, maybe I luck out and I pick a point here, right? But then I have to wait, and I pick a point here. I can't get to it. You know, I'm just completely hopeless. If it's in two-dimensional space, then you can imagine sort of getting lucky enough or designing a sampling distribution that does this. But let's say it's in some 14-dimensional space that you have no idea where the tunnels, the tubes, are. There are some problems where RRTs just choke. AUDIENCE: So in general, when there's basic solutions, feasible solutions, it's small? Is that fair to say? Because this is a very small set of [INAUDIBLE] space. RUSS TEDRAKE: It's a little hard to say that, right? Because, I mean, lots of trajectories work, right? This one works. This works, you know? But I think, when there's narrow passages, I think that's a more clear way to say when it chokes. AUDIENCE: And so when you say chokes, it means it's, as time goes to infinity, it'll finally find it. RUSS TEDRAKE: Yes. AUDIENCE: It is [INAUDIBLE]. RUSS TEDRAKE: Yes. AUDIENCE: OK. RUSS TEDRAKE: Good.
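A rough sketch of that reachability-guided filter as it was just described, paraphrasing the idea rather than reproducing the published implementation: keep, for every tree node, the two states reached under minimum and maximum torque, and accept a sample only if its nearest neighbor is one of those reachable frontier states. The pendulum dynamics and all parameter values below are assumptions.

```python
# Reachability-guided sampling filter (a paraphrase of the idea, not the paper's code).
import math
import random

g, l, b, dt, u_max = 9.81, 1.0, 0.1, 0.05, 3.0

def step(state, u):
    th, thd = state
    thdd = u - b * thd - (g / l) * math.sin(th)
    return (th + dt * thd, thd + dt * thdd)

def reachable_pair(state):
    """The extreme-action frontier for one node: min torque and max torque."""
    return [step(state, -u_max), step(state, u_max)]

def filtered_sample(tree_nodes, frontier_nodes, sampler):
    """Reject samples whose nearest neighbor is an interior tree node."""
    while True:
        s = sampler()
        d_tree = min(math.dist(n, s) for n in tree_nodes)
        d_frontier = min(math.dist(n, s) for n in frontier_nodes)
        if d_frontier < d_tree:
            return s       # room to grow: this reshapes the sampling distribution
        # otherwise throw the point away and pick another

sampler = lambda: (random.uniform(-math.pi, math.pi), random.uniform(-10.0, 10.0))
tree = [(0.0, 0.0)]
frontier = reachable_pair(tree[0])
print(filtered_sample(tree, frontier, sampler))
```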
So these algorithms, mostly what you look for in a planning algorithm is completeness. You want to know that, as you expand all the nodes, eventually if there's a path to the goal, you'll find the path to the goal. And even if there's not a path, it'll tell you there's no path, right? RRTs have probabilistic completeness guarantees. They say, if you expand enough nodes, then with probability one, if there is a path, you'll find it. Probabilistic completeness doesn't do as much for disproving the existence of a path. It would take a very long time to disprove it. But probabilistically, these things will find a path to the goal if it exists. That's good. Maybe that'll make you sleep better at night. But if you have to wait for something to be probabilistically complete, it's not going to be useful. The reason these things are used like crazy is that, in practice, they're very fast. They can be very, very fast for finding good solutions. So really, I mean, the Grand Challenge vehicle is using RRTs all the time to consider possible future paths along the street, trying to avoid obstacles. It's doing it fast enough sort of in a receding horizon way. It's always planning a minute ahead. I'm not sure exactly what the window is in driving time. And it's doing it every dt roughly, re-computing its plan. These things can be really, really fast in practice. To some extent, the dynamic constraints which we're talking about on the pendulum are exactly this problem. Because the pendulum, like we said, it's coming here. We know what the phase portrait of the pendulum looks like. And we know that, if we're torque limited, there's only a handful of places we can go. To some extent, kinodynamic constraints, like this, dynamics can look exactly like this. So if you take your vanilla RRT, you saw how effectively it grew on the pendulum. It took a lot, a lot of nodes to find its way out. The hope is that we know enough about this sort of dynamical system from some of our other tools, like LQR, to do distance metric tricks like this so that we're effectively guiding this thing down the differential constraint tubes. AUDIENCE: Have you had any [INAUDIBLE] with relaxing the number? Like, for example, if you have like a 15-dimensional robot, you're going to get to [INAUDIBLE] position. Can you relax enough of those dimensions and then use the rest [INAUDIBLE] under control in a very low dimensional space and then use that as a [INAUDIBLE]?? RUSS TEDRAKE: Awesome. Yes. So Alec also had-- I'll tell you what, my next slide, this is Alec's other thing about planning in a task space, which is exactly what you said, I think. There's lots of ways to do it. And I think Alec found a really good way to do it. This is just an example of a five-link robotic arm just in configuration space. It had to find its way from that endpoint to the goal endpoint. So you see at the top, that little green arm is the resulting configuration of the robot. The standard RRT looked all over the place to try to find that solution. It's a little subtle how he had to do it. You had to basically change the Voronoi bias to live in the task space, but still sample uniformly from the other space. So that's not enough to tell you how to do it, but that's enough to tell you it's not completely trivial. Then he found these really, really elegant solutions to plan in task space. And the result, his sort of killer plot, is that this is the number of links in the robotic arm versus the number of nodes it took to find the goal.
And the standard RRT went like that. And after 10 links, it just was hopeless. With n going to over 1,000 dimensions, he was just planning in constant time basically. So, yeah, there's the structure in the problems. We also, using the partial feedback linearization task space, have tried doing it on underactuated systems. And that sort of works, too. Yeah. But it's not the norm, actually. It's not the accepted version of the algorithm yet. RRTs are beautiful things, right? They just grow out, find their way to the goal. Potentially, a complaint is, when you're done with it, you've got the sort of choppy discretized path that gets you to the goal. But then just hand it to your trajectory optimizer as an initial condition and let it smooth it out, still satisfying obstacle constraints, torque constraints. We know how to do that and smooth it out. And you've got yourself a good trajectory that gets from the start to the goal. It's a beautiful thing, I mean, I think to the point where Tomás Lozano-Pérez invented configuration space, did motion planning for however long. He actually stopped robotics, went off and did computational biology for a while. And he says he came back to robotics just a few years ago because these kind of algorithms made a difference to the point where it was worth doing motion planning robotics again. I don't have a more dramatic thing to say than that. OK. So next time, I'm going to go a little bit more into using these planning algorithms and also feedback motion planning applied to our model systems, but I don't want to completely segue. That's a big idea. These RRTs-- simple algorithm, big idea. I mean, the idea that a randomized algorithm can do better than our explicit algorithms, it's all these funny things. There's classes on randomized algorithms here. Every once in a while, these things just really work nicely. Randomization can help. And the reason, roughly, may be that it's actually pretty cheap to pick a sample point. It's pretty cheap to evaluate if it's in a collision. And so why not just pick a lot? Instead of trying to reason about the geometry of your obstacles, building configuration spaces, that's hard compared to just saying, OK, what if my robot was here? Was that a collision? Yeah, that's in collision. OK. Throw it out. And just sometimes randomized algorithms are better. OK, see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_10_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, welcome back. So last week, we spent the week talking about policy search methods, and trying to make a distinction between those and the value-based methods we started with. And by the end of the week, we had a couple pretty slick methods for optimizing an open-loop trajectory of the system. So we talked about at least two ways. So by open-loop, I mean it's a function of time, not a function of state. We talked about the shooting methods, where we evaluated J of alpha, x0, time 0 just by simulation. And we evaluated-- explicitly evaluated the gradients by-- well, I gave you two algorithms for it. I gave you one that I called back prop through time-- which was an adjoint method-- and another one that I called RTRL-- real-time recurrent learning, which are the names from the neural network community, but perfectly good names for those methods. And then the claim was that, if you can compute those two things by simulation or-- forward simulation and then a back propagation pass, or a simulation, which carried also the derivatives forward in time, then we could hand those gradients to SNOPT or some other non-linear optimization package. And if we're good, we can also lean on SNOPT to handle things like final value constraints. If you want to make sure the trajectory succeeds in getting you exactly to the goal or if you want to make sure that your torques are never bigger than some maximum torque allowed, then you can take advantage of that. And the second method, remember, was the direct collocation method, which we often abbreviate as DIRCOL. And the big idea there was to over-parameterize our optimization with the open-loop trajectory, but also the state trajectory, which makes coming up with gradients simple. And then I have to enforce the constraint that x of-- let's say, in discrete time here, n plus 1 had better be subject to the dynamics-- so two very similar methods of trying to compute some open-loop trajectory as a function of time. Ultimately, what I care about is a set of actions that I apply over time that will get me to the goal or minimize my cost function. In the case where I explicitly parameterized an open-loop trajectory, both of these result in a solution which satisfies the Pontryagin minimum principle, subject to discretization errors and things like that. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: We did, right. So I should say, subject to time discretization. That's the one place where technically, it would satisfy a discrete time version of Pontryagin's minimum principle. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: You can think of it whichever way it makes you happier-- so in fact, the parameters that you hand in-- maybe it's easier to think of it as a function-- a discrete function of time, because you're going to hand it u at certain points in time, and you're going to handle x-- hand it x at certain points in time. And this discrete time update can be an Euler integration or a higher order integration of your continuous dynamics, but you only satisfy the constraints at discrete intervals of time. Yeah.
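As a reminder of the structure being recapped, here is a bare-bones sketch of the direct collocation constraints, in an Euler flavor for brevity (the actual method can use higher-order integration between knot points, as the lecture notes; names are illustrative, not course code).

```python
import numpy as np

def dircol_defects(f, X, U, h):
    """Direct collocation: the decision variables handed to SNOPT are the
    whole state tape X[0..N] *and* the input tape U[0..N-1].  Gradients
    become easy, but the optimizer must drive these 'defect' constraints
    to zero so that x[n+1] really does follow the dynamics."""
    return np.concatenate([X[n + 1] - (X[n] + h * np.asarray(f(X[n], U[n])))
                           for n in range(len(U))])
```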
OK, I did give you a slightly more general-- I tried to point out that these methods could equally well compute, find good parameters of a feedback control or something too. The simple case was when my parameters alpha were explicitly my control tape, but more generally, if you wanted to tune a feedback controller-- a linear feedback controller, or a non-linear feedback controller, or a neural network, or whatever it is, you can use the same methods to do that. I would only make this statement in the case where the controller specifically is the open-loop tape, because if I parameterized my trajectory by some feedback controller, for instance, then that's going to restrict the policy class. That's going to restrict the class of tapes that I can look over, which makes it a more compact, more efficient way to solve your optimization, but potentially prevents you from achieving a perfect minimum. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yep. So by virtue of saying that they satisfy Pontryagin's minimum principle, we know that that's only a local optimum. This says that I can't make a small change in u to get better performance. Yep. But it's only a necessary condition for optimality, not a sufficient one. But there's a bigger problem with it-- with both of those. And that's the fact that they're completely useless in real life, unless I do one more step, which is to stabilize the trajectory that I get out. So finding some open-loop trajectory by these methods, satisfying Pontryagin's minimum principle-- fine. But there's nothing in this process that protects you-- if I changed my initial conditions by epsilon, I could completely diverge when I follow-- when I execute that open-loop trajectory. If I change my simulation time step by a little bit, I might diverge. If I have modeling errors, I might diverge. So in order to make these useful for a real system, we have to do another step, which is trajectory stabilization. And it actually follows quite naturally from the things we've already talked about. OK, so today we're going to give these guys teeth with trajectory stabilization. And I'll show you examples of a trajectory that's optimized beautifully for the pendulum even, and if I simulate it a little differently back-- just does the wrong thing. It never gets to the top. So we want to get rid of that problem. OK, so the solution is to design a trajectory stabilization. Now, for those of you that have been playing with robots for many years, when you hear trajectory stabilization, what do you think of? What kind of tricks do people use for trajectory stabilization? AUDIENCE: Sliding surfaces. RUSS TEDRAKE: Sliding surfaces-- that's a good one for-- [INAUDIBLE] often will design a sliding surface and squish the error dynamics. That's actually pretty encompassing. I think a lot of the trajectory stabilizers are based on sliding modes or feedback linearization in some form. And all I'll say about it is that the story's sort of the same as everything we've said. If you have a fully actuated system, it's not hard to design a trajectory stabilizer. A good sliding mode controller could take-- could work even for an underactuated system, but I think there's a-- I prefer the linear quadratic form of these trajectory stabilizers. OK, so we want to do a trajectory stabilization that's suitable for underactuated systems. And the approach is going to be with LQR. OK, so if we're going to use LQR, we better be able to linearize our system.
So far, when we've done the linearizations, we've only done them at fixed points. So the first thing we have to ask ourselves is, what happens if we try to linearize at a more arbitrary point in state space? Yeah. So let's say I've got the system x dot equals f of x, u, and now I want to linearize around some x0, u0, but not necessarily a carefully chosen x0, u0-- just something random in state space. The Taylor expansion of this says that this thing's going to be approximately f of x0, u0, plus partial f, partial x evaluated there, times x minus x0, plus partial f, partial u times u minus u0. OK, and we called this before A and this B, and so that thing we can actually write as, in general, in the case where f of x0, u0-- if x0, u0 was a fixed point of the system, that term disappears, but be careful. If you're doing your linearization out here, if you're at-- not at a fixed point, if you have any velocity, for instance, then, in the original x-coordinates, it's not actually-- the Taylor expansion doesn't give you a linear system. It gives you some affine system. This thing is harder to work with-- not incredibly harder, but harder to work with. The solution is quite simple, but I just wanted to say it the bad way first so that you appreciate the good way. If we change coordinates and we use instead for our coordinates the difference between x and x0 of t, then x bar dot is going to be x dot minus x0 dot equals x dot minus f of x0, u0, which is that c term. This guy here is taken care of in this new coordinate system, which allows me to write the whole thing as x bar dot equals A x bar plus B u bar. You with me on that? Linearizing a system at a more arbitrary point-- doing a Taylor expansion results in a linear system only if you change coordinates to lie on some system trajectory. So x0, u0 must be a solution of x dot equals f of x, u-- of that equation. And then the system reduces to a linear system description. But the cost you pay for this beautiful, simple-- well, let me be even a little bit more careful. So A here, this partial f, partial x, is evaluated at x0 of t, u0 of t. And in general, A and B here, when I do this, are functions of time. That's a pretty important point. So if I'm willing to change coordinates to live along the trajectory, then the result is I can get this linear time-varying model of the dynamics along feasible trajectories-- system trajectories. The cost is that you have to work in a coordinate system that moves along your trajectory. So we'll see where that comes in in a little bit. But the first question is, OK, let's say I've got this linear time-varying-- time-varying linear system. Can I do all the things I want to do with that? In most of our control classes, we end up doing LTI systems. LTV systems-- linear time-varying-- are actually a fantastically rich class of systems that we don't talk about enough, I think, in life. They're still linear systems. Superposition still holds. If I have initial condition 1 and some u trajectory 1 for t greater than or equal to t0, and that gives me some resulting x trajectory out for t greater than t0, and I have another solution with a different initial condition and a different control, and that gives me a different-- I call this x1, x2 for t greater than or equal to t0-- if I have that, then it better be the case that alpha 1 x1 of t0 plus alpha 2 x2 of t0 plus alpha 1 u1 tape plus alpha 2 u2 tape is going to result in a trajectory which is alpha 1 x1 plus alpha 2 x2. That's superposition. That's the defining characteristic of linearity.
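In sketch form, the time-varying linearization just described can be formed numerically along a nominal trajectory with finite differences (names illustrative, not course code):

```python
import numpy as np

def linearize_along_trajectory(f, x0_tape, u0_tape, eps=1e-6):
    """For each knot (x0(t), u0(t)) of a *feasible* nominal trajectory,
    form A(t) = df/dx and B(t) = df/du by finite differences, giving the
    time-varying linear model x_bar_dot = A(t) x_bar + B(t) u_bar in the
    moving coordinates x_bar = x - x0(t), u_bar = u - u0(t)."""
    A_tape, B_tape = [], []
    for x0, u0 in zip(x0_tape, u0_tape):
        x0 = np.asarray(x0, float)
        u0 = np.atleast_1d(np.asarray(u0, float))
        n, m = x0.size, u0.size
        f0 = np.asarray(f(x0, u0))
        A = np.zeros((n, n)); B = np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (np.asarray(f(x0 + dx, u0)) - f0) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (np.asarray(f(x0, u0 + du)) - f0) / eps
        A_tape.append(A); B_tape.append(B)
    return A_tape, B_tape
```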
And even though this is a richer class of systems-- this A of t x of t plus B of t u of t-- superposition still holds. And in fact, a lot of our derivations that we've done that are for linear systems still hold. OK, so now the question is, how do we design-- how do we work with the fact that this thing is still easy, and design a controller that works with this new linearized system? Maybe first I should break out my colored chalk and make sure we have intuition about this. Do you understand what this is doing, if I do this time-varying linearization? Let me do an example with the pendulum here, our favorite theta, theta dot. And let's say we carve up-- we find some nice solution which gets me from my one fixed point to the other fixed point. The ones we were getting were these pump-up trajectories, which looked something like this. I'm moving through state space here, and the dynamics here vary with state in a non-linear way. But if I have a trajectory, a feasible trajectory that goes through the relevant parts of state space, then this time-varying linearization takes my non-linear system, and makes it parameterized only-- instead of by being parameterized by state, it's going to make it parameterized only by time along the trajectory. The trick is the trajectory allows me to reparameterize my non-linearity in terms of time, instead of state. It sounds like a simple thing-- I'm just reparameterizing it-- but it makes all the difference in the world. If things are parameterized as a function of time, and are otherwise linear, then I could do all kinds of computation on them. I can integrate the equations. I can design quadratic regulators on it. It makes all the difference in the world. So what I'm effectively doing is coming up with local linear representations of the dynamics along the trajectory. I'm not sure if this is a helpful way for me to draw it, but you can think of this thing as approximating the dynamics along that trajectory. At every given instant in time, I'm going to use one of these linear models. This is supposed to be some plane that you're driving through-- not sure if that's actually a helpful graphic, but it's the way I think of it. And by virtue of taking a particular path through, I can make locally linear models on which these things have eigenvectors, and eigenvalues, or whatever that are valid in the neighborhood of the trajectory. So if you can imagine, even without any stabilization, it could be that I could quickly assess the stability of my time-varying linear model. And trajectories in this linear model may converge to the nominal limit cycle, or they may diverge, depending on A and B. Or they may blow up. This is by far the more common case, unfortunately. You'd be very lucky to come out of a shooting method or a direct collocation method, and end up with a system where if you played it out, it just happened to be a stable trajectory. But we can assess all that quickly with these time-varying linearizations found locally. Make sense? Yeah? AUDIENCE: [INAUDIBLE] talk about that there is a bad way of doing this. This is not a bad way of doing this, right? We were talking about it. RUSS TEDRAKE: If I do a Taylor expansion of my system in the original coordinate system, which is x, then it's not linear. End parentheses, that was the bad way to do it. Yeah? Good way to do it-- change the coordinates to a coordinate system, which moves with the trajectory. If you do that, things become time-varying linear.
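To make the pendulum example concrete: assuming the convention $ml^2\ddot\theta + b\dot\theta + mgl\sin\theta = u$ (the parameters are not given in the lecture, so this is one standard choice), the linearization along a nominal $(\theta_0(t), u_0(t))$ is

$$
A(t) = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{l}\cos\theta_0(t) & -\dfrac{b}{ml^2} \end{bmatrix},
\qquad
B(t) = \begin{bmatrix} 0 \\ \dfrac{1}{ml^2} \end{bmatrix},
$$

so the nonlinearity shows up only through $\cos\theta_0(t)$, a function of time along the trajectory rather than of state.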
That was a good way to do it, and that's still in open parentheses. We're still going. Yeah. OK, so our task now is to design a time-varying feedback controller-- since our model is time-varying, you'd expect our solution to also be time-varying-- which takes these bad, unstable trajectories of the system-- and they really are-- I'll show you the simple pendulum. This trajectory comes out. Actually, if you just integrate in a different way, it'll go off and do the wrong thing. It typically doesn't go off and add energy to the system so much. The ones I get-- I see, I'll show you, are more-- they diverge the other way, and end up just floating around here, for instance. But they're not going to get you up here. So can we design a time-varying stabilizer that regulates that trajectory? OK, I did actually do the original finite horizon LQR derivation on the board that day-- definitely won't write all that again, but let me say that roughly nothing in that derivation breaks-- I'm going to show you the important pieces-- nothing in that derivation breaks, surprisingly, if A and B are now a function of time. So let's remember that-- the LQR derivation. Now I'm working with this x bar coordinate system. And I want to design a cost function to minimize here, which lives in this coordinate system again here. Let's say it's x bar at the final time, transpose, times Qf, times x bar-- I've been trying to use t little f, since my transposes look like the final horizon time otherwise-- plus the integral from 0 to tf dt of x bar transpose Q x bar plus u bar transpose R u bar. OK, in the original LQR derivation, we guessed that the form-- that the optimal cost-to-go had the form x bar transpose S of t x bar. That's still intact. That's still a good assumption. This thing's linear. It's just in a different coordinate system. And we started cranking through the sufficiency theorem, the Hamilton-Jacobi-Bellman equation. And we found that our optimal feedback policy-- first of all, our optimal cost-to-go was described by this Riccati equation, which was negative S dot of t is Q minus S of t B R inverse B transpose S of t plus S of t A plus A transpose S of t. And it turns out that, with the-- if you have a time-varying A and B, that it's-- the exact same dynamics govern it. You just have your time dependence also in A and B. And that exact same Riccati equation works, and our final value condition was just Qf. And you can see from this, it didn't make a difference for me when A and B became functions of time-- it's pretty simple, although less interesting, I guess. If Q were to be a function of time-- no problem. If R was a function of time-- no problem. They still have to be positive definite and symmetric. Oops-- I did it the wrong way. Q can be 0, but R cannot be 0. OK, so the LQR you know and love, that you've used in Matlab, is the time invariant infinite horizon LQR. I told you that, if you cared about a finite horizon and you had a time invariant linear system, then suddenly you had to-- you couldn't just find the stationary points in this. Remember, Matlab's solution just tells you the long-term behavior of S. In the finite horizon time, even in the LTI case, in which the A and B do not depend on time-- the linear time invariant case-- I still had to integrate back this Riccati equation in order to get my LQR controller. It's no more expensive to do the same thing in the linear time-varying feedback case. And the resulting controller is-- u star is my nominal controller minus my R inverse B transpose S of t x bar.
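A sketch of integrating that Riccati equation backward from S(tf) = Qf, with crude backward Euler steps for clarity (real code would hand the matrix ODE to a proper integrator; the names are illustrative, not the course's implementation):

```python
import numpy as np

def tvlqr_gains(A_tape, B_tape, Q, R, Qf, h):
    """Integrate -S_dot = Q - S B R^{-1} B' S + S A + A' S backward in time
    from the final value condition S(tf) = Qf, storing the time-varying
    gain K(t) = R^{-1} B(t)' S(t) at each knot."""
    Rinv = np.linalg.inv(R)
    S = Qf.copy()
    N = len(A_tape)
    S_tape, K_tape = [None] * N, [None] * N
    for n in reversed(range(N)):
        A, B = A_tape[n], B_tape[n]
        S_tape[n] = S.copy()
        K_tape[n] = Rinv @ B.T @ S
        minus_S_dot = Q - S @ B @ Rinv @ B.T @ S + S @ A + A.T @ S
        S = S + h * minus_S_dot   # one Euler step backward in time
    return S_tape, K_tape
```

The feedback is then exactly the u star on the board: u(t) = u0(t) - K(t) x bar(t).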
These equations come up enough that these are pretty famous, pretty important equations, and so I-- those I know off the top of my head. They come up all the time. And this is the resulting optimal trajectory, which is my nominal trajectory plus my feedback gain, which came out of my original LQR controller, if you remember that. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yes-- good. I should definitely put a T under B. Thank you. I haven't written that case, but R could equally well be time-dependent. OK, so something big just happened. I can take a really, really complicated non-linear system along some trajectory-- if I find a good trajectory, then I can actually linearize that system along this trajectory and stabilize it. The thing I haven't convinced you of yet-- because I only know how to do it from showing examples, but it really works well. So even though it's a linear system-- it's a linear approximation of the non-linear system, something like the [INAUDIBLE] or the cartpole swing-up. It's got a huge basin of attraction. Lots and lots of initial conditions will find their way to the trajectory and get to the goal. If you want to do non-linear control of a humanoid robot or something like this, this actually scales pretty nicely. I just have to solve this equation. S is a matrix that's number of states by number of states. But I could do that in 30 dimensions. That's no problem. And even for very non-linear systems, local linear feedback works very, very well-- so well, in fact, that I think that, if you ask-- and when I did ask the [INAUDIBLE] guys, Sasha Megretski says, this is definitely what I would do if I was controlling a walking robot or something like that. We're trying to do the same thing to control neurons in a dish now. We're trying to build good models of the dynamics-- time-varying models, for instance-- and then doing this kind of control. Yeah. It works really, really well. The only complaint about it is that it's going to have-- it's based on this linear approximation, so it will have a finite basin of attraction. For some systems, it can be quite big. If you have systems with hard non-linearities, it won't be as big. Later in the course, I'll show you ways to explicitly reason about the size of those basins of attraction, but today let's just say this is a good thing to know, good thing to have in your pocket. Let me show you a working-- try to convince you that it's pretty good. OK, so let's see where I've left myself here. I took this-- the pendulum-- let's do the shooting version. They both work fine, but let's do the shooting version. Is that bigger than I did last time? That's pretty obnoxious. Maybe it's always been obnoxious. Can we get away with that? Yeah. You guys are like, I'm not blind. OK. So I showed you last time the shooting code. It comes out with a resulting tape x, t, and u. After the result of these trajectory optimizers, whether it's shooting or whether it's direct collocation-- whatever it is-- it comes up with some open-loop tape. I put x in there too just to-- as the reference trajectory that results, but what really matters is the time stamps and u command, the open-loop tape. Why don't I save it this time? OK, so it comes up-- in this case, with these parameters I've chosen, comes up with some one-pump policy. With the torque limits I have, the [INAUDIBLE] I have, it comes up with a one-pump policy that gets me to the top in four seconds. OK, let me now just simulate that a little bit differently.
So the only thing I'm going to do here now is-- this control_ode is just a simulation which plays back exactly the same open-loop tape, but it plays it back with a little more careful integration-- because in the actual-- in the shooting code I used, I used a big time step just so I don't waste time computing gradients to the n-th degree of accuracy. That's not worthwhile. If I simulate the exact same thing back with a more careful ode integration, let's see what happens. So that was that same trajectory that-- exact same control inputs, just simulated more carefully. It made its honest effort to get up there, but it didn't quite get up there, turned around, and came back down. I'm trying to show it also in just-- these are the different state trajectories over time. You can see that the red and blue lines are the desired versus actual in the-- in theta, in this case. And these two lines are the desired versus actual in theta dot. They start off exactly on top of each other, but just little differences in the numerics cause them to go in different directions-- part ways. OK, so now I've got this LTV LQR solution, which is exactly what I just showed you. So I was just simulating it just now with just u being the nominal u. Now I'm going to add this time-varying feedback term, x minus x desired. And now my more careful integration results in a closed-loop system, which not only got to the goal, but actually stayed up at the goal, because I have a stable system all the way to the top. OK? All right, so what I just said was very unimpressive. I said I computed an open-loop policy with my methods from Thursday. I simulated them back. They didn't work. But I then put a feedback controller on, and from the exact same initial conditions, I now can simulate them, and they work. So it's disappointing that we had to do that at all, but I can now-- the stability is more than just stabilizing the initial conditions. Let's add some fairly big random numbers to that initial condition and see what happens. It's recomputing the policy every time, just because it was fast enough that I didn't bother to change it. OK, so that actually started with pretty big different initial conditions. So theta was off by-- I don't know-- 2/10 of a radian or something like this. The velocities were off by 1/2 a radian per second. We could crank that up. I bet it does a lot better than that. But if you watch these things, they converge quite nicely together at the end there. And what matters is they get up to the top. So again, these things come together, find their way up to the top, and live. I bet, if I put it a lot bigger, it'll still work. I normally do an order of magnitude, but let's not be silly. Oh-- didn't make it. There's only one reason it didn't make it, actually. It's because, if you look in here, I'm actually honest about implementing the max torques. Yeah. So I actually have a torque limit, I impose it, and it lives on there. If I didn't, I bet I could convince you it works for any initial condition. But let's try it one more time-- get a little more lucky with the initial conditions. Oh, come on. Come on. Yes. OK, that was pretty far off, and it still found its way back to the trajectory. Good-- yeah? Look at how big those initial conditions are. There and there versus-- wow, that's really good. OK. Did I see a question? No? All right, so this stuff works for the pendulum. It works for more interesting systems too. I'll just show you the cartpole real quick here.
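What the preceding demo is doing, in sketch form -- not the course's control_ode, just the shape of it, with the torque clipping included since the lecture's code honestly imposes the limits (names illustrative):

```python
import numpy as np

def rollout_tvlqr(f, x0_tape, u0_tape, K_tape, x_init, h, u_max=None):
    """Play the nominal tape back with the time-varying feedback
    u = u0(t) - K(t) (x - x0(t)), using simple Euler integration."""
    x = np.asarray(x_init, dtype=float).copy()
    xs = [x.copy()]
    for n in range(len(u0_tape)):
        u = u0_tape[n] - K_tape[n] @ (x - x0_tape[n])
        if u_max is not None:
            u = np.clip(u, -u_max, u_max)  # the torque limit that shrinks the basin
        x = x + h * np.asarray(f(x, u))
        xs.append(x.copy())
    return np.array(xs)
```

With u_max=None this is the idealized stabilizer; with the limit imposed, sufficiently bad initial conditions escape, which is exactly the failure seen in the demo.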
I won't do the-- here is what it looks like without feedback. I'll just do the initial conditions corrupted solution, pump up-- OK, so if you remember my solutions from last time, I never drove off the screen before, so that was actually it catching it by deviating enough that it came off the screen, and then slowly coming back to the top. It must be its x position or something going way off. No, not x position-- what is that? That's my control. Yeah. Did I do torque limits on that one? I still did torque limits. I just set them high, I guess. Yeah. So it really works. And the cool thing is the cost of implementing that LQR LTV stabilizer was negligibly more than implementing the-- most of that time was the shooting optimization. Yes? AUDIENCE: Why do you always start at the 0 time? You could look at the initial conditions and look where is the closest point on my nominal trajectories and then do your control policy from that moment in time. RUSS TEDRAKE: OK. So that's a really, really good question. OK, that's exactly what I want to talk about next, actually. I designed a time-varying feedback controller, which is negative K of t x bar of t. I designed that ahead of time. And then, from the initial conditions, I started simulating from 0, and I just played out the-- my nominal trajectory just marched forward with time, my feedback controller just marched forward with time, and my error dynamics just marched forward with time. OK, so before I explicitly address your question, let me point out-- let me ask even a simpler question here. If I had plotted that in state space, what you would have seen is that the trajectory starts off somewhere in state space and comes together. That would have been a good idea. Maybe I should do that in a minute. But it comes together and finds its way onto that trajectory. Yeah? OK, so here's the question. Instead of just changes in initial conditions, what happens if I have disturbances that push me off the trajectory? Well, that's OK. That's no different really than a different initial condition. They'll come back on here. What happens if I have a disturbance that pushes me along the trajectory with this controller? Let's say I've got the helpful disturbance, which, when I was right here, just happened to push me right to there. What's my feedback controller going to do? AUDIENCE: Slow it down. RUSS TEDRAKE: Yeah-- probably in a dramatic fashion. It's the same way-- it tries to quickly converge from here. It's going to push itself back towards that point, possibly. Slowing down makes it sound like no big deal. It can't go backwards, but it might try to do something more severe to try to catch up with that old trajectory. So the major limitation of this is that it's blindly-- in order to have the strong convergence properties that we have, the controller is blindly marching forward in time. The great thing about switching to a time parameterization is that I can compute everything-- everything's linear again. The bad thing is you're a slave to time. So Phillip asked the next question. He says, so why not-- why do I just blindly start marching forward from time 0? Maybe, if I have a controller, I should just look for the closest point in my trajectory, and then, instead of indexing off time, index off some sort of phase, some fraction of my trajectory, and then execute that controller. And you can do that.
I wish you the best if you do that, but my suspicion is that, if on every dt, you pick the closest point in the trajectory, then the result is you're going to chatter like you wouldn't believe. So there's a lot of protection you get when you-- you could think of this very much as a gain-scheduled linear controller. This is a time-varying gain scheduling, and the problem is, if I switch gains quickly, then you're going to get chattering. So it might make a lot of sense, for instance, if you were to get a big disturbance, to re-evaluate, and try to find the closest point, and start executing that new policy with time re-indexed. But it's probably a bad idea, in my experience, to decide which part of the trajectory you're closest to on every-- every dt. That's probably a bad idea. Yes? AUDIENCE: Could you maybe play some tricks if you had some idea of the basin of attraction of the current point you're trying to get to? And if you know that you're outside of it, then work around it, [INAUDIBLE]?? RUSS TEDRAKE: Yes. So I have a particular trick that does that-- we'll talk about it in the motion planning, but-- yeah, so Mark knows about these tricks for computing basins of attraction pretty efficiently. And so these days what we do is we actually try to compute the funnel-- the basin of attraction of this trajectory around the trajectory, and you could know discretely if you left that basin of attraction. So I'll give you the recipe for that, but it actually makes more sense, I think, in the motion planning context, where we actually will design trajectories that fill the space with these basins. This is very similar to the concept of flow tubes. Yes. OK, so big idea-- turn my non-linear system into a linear time-varying system, because I've re-parameterized it along the trajectory. Do linear time-varying control, and even really complicated systems-- it'll work well. We're doing it on our [INAUDIBLE] plane. I mean, it's really a pretty good idea. When I first started working with it, I thought that it would have the problem that-- it would have the property that it uses a lot of control to force itself back to the trajectory and rigidly follow the trajectory. It's easy to equate linear control with high-gain linear feedback, which people do a lot of, but it doesn't necessarily have that property. If R is large in this derivation, it can actually take very subtle approaches back to the trajectory. Your system might come in and do whatever it needs to get back on the trajectory with very little torque. The only price you pay is, if your torque is smaller, if you're penalizing torque use higher, then you might restrict your-- that might shrink your basin of attraction. It might be that, because it's trying to use less torque, it will not overcome the non-linearities. But in the neighborhood of the trajectory, you can get these very elegant solutions which look like minimal energy kind of solutions for the non-linear problem in the vicinity of these trajectories. So one of the ideas we'll talk about later is how do you design the minimal set of trajectories-- which, if you use these controllers, which do the right thing in a lot of places-- if you walked away from this class knowing nothing but direct collocation and linear time-varying feedback control, I bet you could control a lot of cool systems. Yeah. I guess you also have to know sys id, which I'm not going to tell you about. That's the gotcha. You have to have a model for all this stuff.
If someone gives you a model, if you're willing to construct a model, then you can do a lot of things with this. OK, I want to give you one more mental picture to think about what this is doing so it launches into the next thing here. So my cost-to-go function, which I just erased, is, remember-- my cost-to-go function, J of x bar, t, is x bar transpose S of t x bar. This is a quadratic form. Just like the original LQR, you can think of this as a quadratic bowl. In the LTI LQR case-- am I OK throwing around these three-letter acronyms? In the LTI LQR case, it was a static quadratic bowl centered around the point I'm trying to stabilize-- so my cost-to-go. It said-- says, as I move away from the point I'm trying to regulate, I'm going to incur more cost in the direction-- the rate it grows depends on the variables inside S. Now, in this picture, I have still a time-varying-- I have a time-varying quadratic bowl, but it's also moving through time, because it's based on x bar. So in my pendulum world, if I have this nominal trajectory, you can think of it as having some quadratic bowl here. And the LTI stabilizer that we did come up with that was based on LQR did have some sort of quadratic bowl shape that looked like that. Backwards in time, there's going to be another quadratic bowl. Can I draw it very badly like this? If I can just draw coming off the board a little bit-- so there's some quadratic bowl centered around this point, which is my cost-to-go. At that point, if I marched further backwards in time, I've got some other quadratic bowl around this point. That makes the point, again, that-- if my quadratic bowl is currently this because time is 5-- or I had a 4-second trajectory-- maybe time 3 here-- and I'm pushed along the trajectory, it's actually going to incur just as much cost, roughly, as if I'm pushed in another direction. There's a quadratic bowl literally centered around x0 at time t. That's what this equation says. And this quadratic bowl is the cost-to-go estimate. It says, if I'm away from the trajectory, I should expect the cost I incur in getting back towards that trajectory to be this quadratic form. Is that OK? And the key point is, because I've re-parameterized my equations in terms of x bar, this quadratic bowl always lives on that trajectory. My cost function was x bar transpose Q x bar. My best thing to do is to drive x bar to 0, which means to drive my system back to the trajectory. People OK with that imagery? It doesn't look like they are. Everybody's OK. Are we OK here with the LTI stabilizer being an LQR bowl-- or a quadratic bowl? So the farther I am away in the directions defined by S, I'm going to incur some cost getting back. This is just the same thing that says, if I'm at this point in the trajectory, I'm going to incur this cost-to-go. And the best thing to do, the minimal cost-to-go, is living right on that trajectory. As a consequence, the optimal controller, which tries to go down the landscape of the cost-to-go, is going to drive you back to the trajectory. Now, I said all that because I'm about to do something that sounds totally wacky. Would it ever make sense for me to design a slightly different cost function, which, when I linearize and design the feedback controller, I end up with a cost-to-go over here? Let's say I have some nominal trajectory. I found, through whatever method, some reasonable system trajectory, but I really-- I'm still not happy with that. The trajectory I really wanted was something like this, let's say.
Would it make any sense to do my linearization around this trajectory, and try to drive the system to this other trajectory? AUDIENCE: You mean like scaling your optimal trajectory? RUSS TEDRAKE: I don't even mean scaling. They could cross. They could do whatever. It's not a simple scaling. Let me give you a simpler version of the problem. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Say that again. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yes. I'm going to define a cost function, which would have it so I prefer to live on that trajectory. Let me do it in the time invariant case just so it's clear. Let's say my coordinate system's back to simple. It lives around 0. Let's say I have that cost function, or actually that dynamics. And instead of-- my original cost function was just x transpose Qx-- let's say my cost function now is-- let's think about this problem for a second. So let's say I have a linear system. Now, the LQR controller we did initially-- little sloppy with that. The LQR controller I did initially always assumed that the desired place you wanted to be in life was 0. If the desired place you want to be in life is a constant-- it's 3, let's say-- then you can still do your linear quadratic regulator. Just move your coordinate system so the 3 is 0. But let's say I've got a linear system, but I want to drive it through some trajectory-- time-varying trajectory-- x desired as a function of time. Then I can't quite just recenter the origin. I've got to think about, how do I drive my linear system through some other trajectories? Now the-- it's actually an LTI system, but my cost function is time-varying, because my-- I have the desired trajectory that varies with time. The result-- I won't write it down again-- again, I can do this Riccati equation. Back up. The only difference is that the quadratic bowl is no longer going to be centered on the origin. The quadratic bowl is going to move with that desired trajectory. OK? Yeah? AUDIENCE: If that's far away from where you linearized, could you-- RUSS TEDRAKE: That's an excellent question. But this is a linear system, so first, we don't have to worry about that, but don't let me forget to go back to that. So I can drive my linear system through some trajectory that's non-zero beautifully with an LQR controller. The only problem is that my LQR controller has to have a cost-to-go function and a controller which is not pointing me always at the origin. You wouldn't want that. So in fact, the way it looks-- there's a lot of ways that people derive it. With Pontryagin, it's not too hard to derive. I prefer to derive it with the HJB. I'm not going to do the derivation, but-- I don't mean to bore you, but what you end up with is J of x, t has the form x transpose S2 of t x plus x transpose S1 of t plus S0 of t-- I'm calling the quadratic term S2. It's a full quadratic form. When I just have the first term, it's always a quadratic bowl. It's always centered around 0. If you want it, in general, to be a quadratic bowl that's not necessarily at 0, you need the full quadratic form. I could equally well have written this as x minus x desired, transpose, S of t, times x minus x desired. But let's work with this form. Yeah? So this is just an equation of a quadratic bowl, not necessarily centered on the origin. And the LQR derivation gives me my backwards dynamics for S2. It gives me the backwards dynamics for S1 and for S0. And it's in the notes. It's actually already in your notes. It's in the HJB chapter that has been up there for a while.
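Writing the board equation out (with the convention here that S2 is the quadratic term and lowercase s1, s0 are the linear and constant terms), the full quadratic form and the point where the bowl bottoms out are

$$
J(x,t) = x^T S_2(t)\, x + x^T s_1(t) + s_0(t),
\qquad
x^*(t) = -\tfrac{1}{2}\, S_2^{-1}(t)\, s_1(t),
$$

where $x^*$ follows from setting the gradient $2 S_2 x + s_1$ to zero. That minimum is generally not the origin -- exactly the moving-bowl picture above.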
OK, now, the reason I'm on about all this is that there's another way-- I told you about shooting methods. I told you about direct collocation. There is yet another way that people like to design trajectories, which uses LQR directly. And that's this iterative LQR procedure. OK. So let's say I have some trajectory that I've already found, x0 of t, and I have some different trajectory, which is my desired trajectory, x desired of t. Then, using this optimal tracking-- if you stick back in the time-varying components, using this optimal tracking, I can linearize my dynamical system around that. So I have no guarantees that x desired is a feasible trajectory. In fact, in many cases, it's not. For instance, x desired might be: be at the goal at all times. If I came up with a perfectly feasible x desired trajectory, I probably wouldn't be running an open-loop solver. I want to get as close as possible to the x desired while potentially minimizing cost and respecting the dynamics of the system. Here's one way to do it-- linearize my system around my initial guess, x0 of t, then design a linear optimal tracking-- linear time-varying optimal tracking which tries to regulate my system as close as possible to that trajectory. Now, what Steven said was exactly on point. If I drive my system away from where I linearized, there's no guarantee that my linear model is going to be any good here. But the hope is that this trajectory is better-- a better guess than the one before. And you iterate, make another approximation around there, design the LQR controller, run the LQR controller that drives me here to find the new u tape. That defines my new trajectory-- repeat. OK? That's called iterative LQR. What else is it called? Do you know? Yeah. Do you see that? It's differential dynamic programming-- almost. There's a subtle difference, which I can tell you, if you want. There's a lot of names for it. There's another guy, Bobrow-- some of you know Jim Bobrow-- he wrote this up and called it sequential linear quadratic regulators. Any four-letter acronym that ends in LQR-- if you put it in Google, you'll find something that's probably this idea. Yeah. If you put in whatever arbitrary constant in front of it, you'll probably get this idea out. AUDIENCE: What prevents your actuator costs from accumulating from one iteration to another? RUSS TEDRAKE: Every iteration, you're trying to minimize your actuator cost. AUDIENCE: Right, but I mean, if you have a lot of iterations, couldn't that potentially grow? RUSS TEDRAKE: I don't actually add to my old u tape. I actually completely replace my old u tape with a new controller which drives me to the system. AUDIENCE: Oh, OK. RUSS TEDRAKE: So there's no worries about additive actions. It actually tells me in my original non-linear system what's my best guess as a u tape that goes there. AUDIENCE: Is this basically a trick to get rid of the slow [INAUDIBLE]?? RUSS TEDRAKE: So very, very good-- so why would I want to do this? Why didn't I tell you about this first, or why-- how does this compare to the other methods? There is a sense by which-- and I thought about doing the whole derivation, but I think this-- I hope that this short discussion is sufficient. So what I'm roughly doing is I'm using LQR to come up with a quadratic approximation of where my cost-- where my minimum is. This is very much in the spirit of those SQP methods, the sequential quadratic programming methods.
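The outer loop of the procedure just described, as a skeleton only -- every callable here is an assumed helper standing in for the pieces discussed earlier (simulation, linearization along the trajectory, and the S2/S1/S0 tracking solve from the notes), not a definitive implementation:

```python
def iterative_lqr(simulate, linearize, solve_ltv_tracking,
                  x_init, u_tape, x_desired_tape, iters=10):
    """Each pass: roll out the current open-loop tape, linearize along the
    resulting trajectory, solve the LTV optimal tracking problem toward
    x_desired, and roll that controller out to get a brand-new u tape.
    The old tape is *replaced*, not added to, which is why actuation cost
    can't silently accumulate across iterations."""
    for _ in range(iters):
        x_tape = simulate(x_init, u_tape)
        A_tape, B_tape = linearize(x_tape, u_tape)
        u_tape = solve_ltv_tracking(A_tape, B_tape, x_tape, u_tape,
                                    x_desired_tape, x_init)
    return u_tape
```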
I'm using computation on this line to come up with a quadratic approximation of where I think the new minimum should be. So as such, it's a relatively cheap way with SQP properties, convergence properties. OK. The methods I told you about on Thursday-- the backprop through time, the RTRL-- they computed J over my trajectory. They computed partial J, partial alpha over my trajectory. They did not ever explicitly compute the second derivative. I never computed partial J, partial alpha, partial alpha. To explicitly do an SQP update, somebody needs to compute the Hessian of that optimization. I'm relying on SNOPT to do some bookkeeping to estimate the Hessian to do the second-order update. I would do better if I had an efficient way to compute the second derivatives, and I could hand that directly to SNOPT or whatever, and we'd get-- expect faster convergence. This isn't quite the gradients that I want, but it has that feel to it, and it has similar convergence properties. So what you should think about is you should think about this as a more explicit second-order method for making a large jump in my trajectories with sequential quadratic convergence results. I feel like I've lost everybody now, but ask questions, if you need to. The advantage of it is that it's fast. It could potentially require very few iterations to converge. One of the strongest disadvantages is that there's no explicit way to do constraints. You have to think harder about how to do constraints in this. And I know of fewer formal guarantees that it will succeed, because it's an approximation of that quadratic. So the RL community uses DDP a lot, and actually, a lot of people who do DDP do iterative LQR, for instance. For instance, Peter [INAUDIBLE] and those guys-- they always call it DDP. They're actually doing iterative LQR. DDP explicitly actually has-- you have to do a second-order expansion of your dynamics, so you don't just get A of t x. You actually go to a second-order expansion of your dynamics. So it's a little bit more expensive of an update, but most people equate it almost exactly to iterative LQR. AUDIENCE: So this x0 trajectory, this isn't a trajectory you found by doing RTRL or something like that? This is something different? RUSS TEDRAKE: Good-- so this could be a standard replacement to RTRL. I could start with a random x0 trajectory. So maybe it's better to start with a random u trajectory, simulate it, and get an x0 trajectory. And then it will quickly reshape until it gets as close as possible to this x desired trajectory. AUDIENCE: But you're reshaping your control actions that get you to the x trajectory? RUSS TEDRAKE: Yes. So I'm reshaping u, resimulating to get the new x. Yeah. I wrote it more carefully in the notes, and-- but I hope this is the right level to do the class. And there's one extra thing that-- so I say this works if you have a desired x-- desired trajectory, which means your cost function has this sort of a form. The advocates of iterative LQR and DDP say that every cost function has this form. This is just a second-order Taylor expansion of whatever non-linear cost function you want. So write down whatever non-linear cost function you have, do a second-order expansion on it, and you end up with a quadratic cost function like this. And you can then approximate that solution with an iterative LQR scheme-- or RTRL, or backprop through time. This is the third out of our list of methods. My goal is only to know-- so that you know that it exists.
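One way to write out that claim: any smooth running cost $g(x,u)$, expanded to second order about the current trajectory point $(x_0, u_0)$ with $\delta x = x - x_0$ and $\delta u = u - u_0$, takes exactly the quadratic form the tracking-LQR machinery expects,

$$
g(x,u) \approx g(x_0,u_0)
+ \frac{\partial g}{\partial x}\,\delta x
+ \frac{\partial g}{\partial u}\,\delta u
+ \tfrac{1}{2}\,\delta x^T \frac{\partial^2 g}{\partial x^2}\,\delta x
+ \delta x^T \frac{\partial^2 g}{\partial x\,\partial u}\,\delta u
+ \tfrac{1}{2}\,\delta u^T \frac{\partial^2 g}{\partial u^2}\,\delta u,
$$

with all derivatives evaluated at $(x_0, u_0)$ and re-expanded on every iteration.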
And you can read the notes if you want more, and you can read the papers if you want more. OK? Yeah, Michael? AUDIENCE: So I think last time you talked about you're parameterizing the deviation from your nominal control input. So what if you were-- like as you iterate the controller, [INAUDIBLE]? RUSS TEDRAKE: Good-- the total cost is actually the cost with respect to some u desired. So I end up trying to optimize that in a coordinate system based on u0, but the cost I'm trying to minimize is the u in the original coordinate system minus u desired-- which, in a lot of cases, is 0. Although I do it in a weird coordinate system, it actually eventually subtracts itself out because I add it back in at the end, and-- it's quite easy to, for instance, minimize u squared in the original coordinate system. OK? So on Thursday, we get to do walking robots. We're going to move on to the next major thing. But you've now learned three of the open-loop trajectory optimizers that people really use-- iterative LQR very quickly, RTRL backprop through time-- I grouped those as one-- the shooting methods and direct collocation. There's another one that's a recent addition to the scene, which is this discrete mechanics and optimal control, this DMOC. If anybody was excited about that and wanted to do a class project on that, that would be a perfect thing. Grab that paper. Show us that it works on the [INAUDIBLE] cartpole. That'd be beautiful. I'd love to have that-- have us try that and see how it compares to the other methods. You've got a pretty good toolkit for optimal control now-- practical optimal control. And it works for flying robots, but it also worked for your wheeled robots, if you want to control them with better control. You could do a drop-in replacement LTI optimal tracking controller, and it would be better-- assuming your model's better. So you have these tools. Quick procedural things-- I know we're out of time. So next Thursday-- well, so let me say the good thing first. In two weeks, you're on spring break. Yeah. The Thursday preceding that is our midterm. We haven't had a midterm in the class before, so there's no old exams for me to give you, but John and I are going to try to come up with some representative problems for you to take home for Thursday of this week so you can have some problems to munch on over the weekend. It'll be an in-class exam Thursday before spring break, which is a week from Thursday. OK? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yes. So open-book-- well, you can grab whatever notes-- open-note exam-- absolutely. Well, I'll say it more in the preparation package, but roughly, we're going to-- I think, if you have your notes with you, if you've done the problem set-- and most importantly, if you know how these algorithms-- where the algorithms relate to each other and where they'd be used in different systems-- I can guarantee I'm going to ask you something about that-- then it's not designed to be a killer. Good-- and I hope you start thinking about projects. Just out of being a fairly nice person, I wasn't going to ask you to do projects before your midterm. But this time last year, I was asking people to submit project proposals. We're going to do that immediately after the midterm.
If you've been chewing on, this method looked like a really good match to my research problem, or I've never actually thought about juggling robots before, or something like this, you can imagine-- so in the fairly near future, we're going to ask you for a half-page project proposal that we can iterate with you on to get going on a world-class final project. Yeah? See you Thursday. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_4_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK. Welcome back. Since we ended abruptly, I want to start with a recap of last time. And then we've got a lot of new ground to cover. So remember last time, we considered the system q double dot equals u, which is of a general form, just a linear system, whose state space form looks like this, where it happens that A and B are particularly simple. And we looked at designing-- let's not say designing a controller-- we looked at reshaping the phase space a couple of different ways. The first way, which is the sort of 6.302 way, maybe, would be designing sort of by pole placement, by designing feedback gains possibly by hand, possibly by a root locus analysis. So we looked at manually designing some linear feedback law, u equals negative Kx. And we did things like plotting the phase portrait, which gave us for q, q dot a phase portrait that looked like this, where this has an eigenvalue of negative 3.75 approximately, and this one had an eigenvalue of negative 0.25. This was all for K equals 1, 4. OK. And we ended up seeing that just from that quick analysis we could see phase portraits which looked like this. They'd come across the origin, and then they'd hook in towards the goal. And similarly here. This one's so much faster that it would go like this. Then we looked at an optimal control way of solving the same thing. We looked at doing a minimum time optimal control approach, not specifically so that we could get there faster, even though "minimum time" is in the name, because here remember, we could get there arbitrarily fast by just cranking K as high as we wanted, but actually for trying to do something a little bit smarter, which is get there in minimum time when I have an extra constraint that u was bounded-- in the case we looked at yesterday, bounded by negative 1, 1. And in that case when u was bounded, now the minimum time problem becomes nontrivial. It's not just crank the gains to infinity. And we had to use some better thinking about it. And the result was a phase portrait which actually, I don't know if you left realizing, it didn't look that different in some ways. Remember, we had these switching surfaces defined here. And above this, we'd execute one policy, one bang-bang solution. And then below it we'd execute another one. And the resulting system trajectories-- remember, this one hooked down across the origin and went into the goal like that. This one really did exactly the same thing, right? They would start over here. They'd hook down here with-- this time they'd explicitly hit that switching surface and then ride that into the goal. So it's a little bit of a sharper result, possibly, than the other one. And that final surface was a curve instead of this line. And for that, we got to have good performance with bounded torques. Now, we also did the first of two ways that we're going to use to sort of analytically investigate optimality. AUDIENCE: Can I interrupt you? RUSS TEDRAKE: Anytime, yeah. AUDIENCE: Was there a good reason we just-- basically said we want to do linear feedback there? Could we have done like x1 times x2?
Because-- well, there's a lot of good reasons. So it's because then the closed loop dynamics are linear, and we can analyze them in every way, including making these plots in ways that I couldn't have done if this was nonlinear. Another answer would be that this is what 90% of the world would have done, if that's satisfying at all. I think that's the dominant way of sort of thinking about these things. x1 times x2 is comparably much harder to reason about, actually. AUDIENCE: I totally get that. But is there like a system that the optimal control that lies in the space that you have to take into account these different approximations. RUSS TEDRAKE: Good. So this is an example of a nonlinear controller. It happens that the actual control action is either 1 or negative 1. But the decision plane is very nonlinear. So that's absolutely a nonlinear controller. It came out of linear-- out of optimal control on a linear system. But the result is a nonlinear controller. OK. Now, certain classes of nonlinear controllers are going to pop out and be easier to think about than the broad class. But we're going to see lots of instances as quickly as we can. OK. So we did-- we actually got that curve by thinking just about-- just using our intuition to reason about bang-bang control. At the end, I started to show you that the same thing comes out of what I call solution technique 1 here. I wouldn't call it that outside of the room. That's just me being clear here, which was based on Pontryagin's minimum principle. Which in this case, is nothing more than just-- let's write it down, exactly what we mean by this cost function. We have some-- let me be a little bit more loose. We have J, some cost function we want to optimize, which is a finite time integral of 1 dt. That sounds ridiculous, but we're just optimizing time. But we want to optimize that subject to the constraints that x dot equals f of x, u, which in this case is our linear system; and the constraint that u was bounded by negative 1, 1 in that regime; and the constraint that at time capital T, x of T had better be at the origin. Given those constraints, we can say let's minimize T. We're going to minimize that J, sorry. I already got the t in there, so. Minimize with respect to the trajectory in x, u, that cost function. I use this overbar to denote the entire time history of a variable like x, t1 to t final, or something like this-- time t0 to t final. OK. That's how we set up the problem. It's just optimizing some function but subject to a handful of constraints. Pontryagin's minimum principle is nothing more than putting Lagrange multipliers to work to turn that constrained optimization into unconstrained optimization. And for this problem, we can build our augmented system I'll call J prime here, which just is the same thing but taking in the constraints. So first of all, we've got a constraint on x T equaling 0. So I can put that in as a Lagrange multiplier, let's say lambda times something that had better equal 0, which in this case was just x of T. And then plus the integral from 0 to T of the constraint on the dynamics, with a different Lagrange multiplier p, times f of x, u minus x dot, this whole thing dt. Yes? AUDIENCE: How do you impose the constraint that u is [INAUDIBLE]? RUSS TEDRAKE: Awesome. Good question. So it turns out what we're going to look at-- we want to verify that this thing is optimal. So you might want to put that constraint right in here. But it actually is more natural-- here, let me finish my statement here.
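In cleaned-up notation, the problem and the augmented cost being described here come out as (a reconstruction from the spoken math, using the lecture's own symbols):

J = \int_0^T 1 \, dt, \quad \text{subject to } \dot{x} = f(x, u), \; |u| \le 1, \; x(T) = 0,

J' = \lambda^T x(T) + \int_0^T \left[ 1 + p^T(t) \big( f(x, u) - \dot{x} \big) \right] dt.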
The way we're going to verify optimality of this policy is by verifying that we're at a local minimum in J prime. I want to ask: if I change x, if I change u, if I change p in any admissible way, how is J prime going to change? Small changes in here are not going to change this. I'm at a local minimum in J prime. That's the minimum principle idea, right? I just want my-- if I'm at a minimum of a function, the gradient is 0. In the Lagrange multiplier version, at the minimum of this augmented function, the gradient has to be 0. So if I change any of these, I want that to be-- that change to be 0. So it turns out that the more natural way to look at this bound in u is by not changing-- not allowing u to change outside of that regime. This is actually fairly procedural. So you end up doing this calculus of variations on J prime. But I actually-- I made a call earlier today. I think it's going to-- if I do it right now at the beginning of class, I'm going to lose you to-- I mean, I'm going to bore you and lose you. But it's in the notes, and it's clean. So I'm going to leave that hanging and let you look at it in the notes without typos that I might put up on the board, OK? Because I want to move on to the dynamic programming view of the world, sort of the other possible solution technique. OK. So today, we're going to do-- you can think of it as just solution technique 2 here. And it's based on dynamic programming. Now, the computer scientists in the audience say, I know dynamic programming. It's how I find the shortest path between point A and point B without reusing memory, and things like that. And you're exactly right. That's exactly what it is. It happens that dynamic programming has a slightly bigger footprint in the world. There's a continuous form of dynamic programming. OK. So a graph search is a very discrete form of dynamic programming. So I'm going to start with sort of-- I'm actually going to work from the graph search sort of view of the world, but take it to the continuous form that works for these continuous dynamical systems. And we're going to use this to investigate a different cost function, which is just this-- still subject to the dynamics, which in this case was the linear dynamics. OK. So before we worry about solving it, let's take a minute to decide if it's a reasonable cost function. It's different in a couple of ways. First of all, there's no hard limit on u. But I do penalize for u being away from 0. So it's sort of a softer penalty on u, not a hard limit. And then these terms are penalizing it from being-- the system from being away from the origin. And instead of going for some finite time and minimizing time, I'm going to go for an infinite horizon. So the only way to drive this thing-- the only way, actually, for J to be a finite cost over this infinite integral-- is if q and q dot get to 0, and u ends up at 0. Otherwise, this thing's going to blow up. It's going to be an infinite integral. So the solution had better result in us getting to the origin, it turns out. But I'm not trying to explicitly minimize the time. I'm just penalizing it for being away, and I'm penalizing it for taking action. Now, what's the name of this type of control? Who knows it? I think-- yeah, LQR, right? So this is a Linear Quadratic Regulator. OK. It's a staple of-- it's sort of the best, most used result from optimal control. Everybody opens up Matlab and calls lqr. But you're going to understand it. Good.
But to do LQR, to understand how that derivation works, we've got to do-- we're going to go through dynamic programming. AUDIENCE: Couldn't we use the same cost function there as well? RUSS TEDRAKE: Awesome. OK. So why don't I put that cost function down and just do Pontryagin's minimum principle? There's only one sort of subtle reason, which is that that's an infinite horizon cost. So I was going to say this at the end, but let's have this discussion now. So this is an infinite horizon. Pontryagin's is used to verify the optimality of some finite integral. So let's compare-- well, I know you know value-- the dynamic programming. So maybe let me say what dynamic programming is, and then I'll contrast them. Yeah. But the people sort of-- I just want to understand what happened. We got two different cost functions, two different solution techniques for now. And we're going to address in a few minutes why I did different solution techniques for the different cost functions. But I hope they both seem like sort of reasonable cost functions if I want to get my system to the origin. Different-- we're going to look at what the result is, the different results. And actually, something I want to leave you with is that you can, in fact, do lots of different combinations of these things. You could do quadratic costs and try to have some minimum time. There's lots and lots of ways to formulate these cost functions. These are two sort of examples, but you can do minimum time LQR, you can do all these things. OK. But the way we're going to derive the LQR controller is by thinking about dynamic programming. And to do that, let me start with the discrete world, where people-- where it makes sense. So let's imagine I have a discrete time system. So x of n plus 1 is f of x n, u n. And I have some cost function. Now remember, as in the Pontryagin minimum principle, there's a sort of a general form that a lot of these cost functions take. In the discrete form, it's h of x at capital N plus a sum instead of an integral, n equals 0 to N minus 1, of g of x n, u n. OK. Now, again, I said this sort of additive form of cost functions is pretty common. And you're going to see right now one of the reasons why. The great thing about having these costs that accumulate additively over the trajectory is that I can make a recursive form of this equation. So in particular, if I-- so I should call this, really, what I've been calling J, that's really the J of being at x 0 at time 0. And I can compute J of being at x 0 at time 0 and incurring the rest of the cost recursively by looking at what it would be like to be at some state x at time N-- and that in this case is just h of x of N-- and then thinking about what it would be like at-- to be at some J of x N minus 1-- and that's going to be g of x N minus 1, u of N minus 1 plus h of x N. Let me be even more careful. And I'm going to say, let's evaluate the cost of running a particular policy, where u n is just some pi of x n. AUDIENCE: Sorry, why is the first x a 0, and then the rest of the x's [INAUDIBLE]? RUSS TEDRAKE: OK. So why did I put x 0 here? That was intentional. I'm trying to make x 0 the variable that fits in here. Here x is the variable that fits in here. But you're right, I could be a little bit more-- I should be more careful. So now J, a function of this variable x at time N should really just be h of x. Yeah, good. So then this is-- I could say it this way. The other way I could say it is J at x, N minus 1, with x N minus 1 equals x.
Maybe that's the best way to rectify it. OK. And when I'm evaluating the cost of a particular policy, I'm going to use the notation J pi here, say this is the cost I should expect to receive given I'm in some state x. To make it even more satisfying, let's just be the same everywhere. This is x 0, and here I'll say x 0 equals my x. If I'm in some state x at time 0 executing policy pi, I'm going to incur this cost. If I'm at some state x at time N incurring this-- taking this policy, I'm going to get this. Here I'm going to get this. And even when I'm executing policy pi, I can even furthermore say that x n is f of x n minus 1, pi of x n minus 1. It's probably impossible to read in that corner. OK. So you can see where I'm going with it. It's pretty easy to see that J pi of x at some n is just the one-step cost g of x n, u n plus the cost I expect to see given that I'm at x n plus 1 at time n plus 1. OK. So the reason we like these integral costs or the sum of costs in the discrete time case is because I can do these recursive computations. And the same thing is true if I look at-- if I define what the optimal cost is. So let's now define J star to be the cost I incur if I follow the optimal policy, which is pi star. Well, it turns out the same thing works. But now, there's an extra min over u here. OK. So it's easy to see that the cost of following a particular policy is recursive. It's more surprising that the cost-to-go of the optimal policy is equally recursive with a simple form like this, min over u. And this actually follows from something called the principle of optimality. Anybody see the principle of optimality before? OK. It says that if I want to be optimal over some trajectory, I'd better be optimal over-- from the end of that trajectory. So if I want to be optimal for the last-- if it's from n minus 2 to the end, then I'd better be optimal from n minus 1 to the end. So it turns out if I act optimally in one step by doing this min over u, and then follow the policy of acting optimally for the rest of time, then that's optimal for the entire function, OK? OK. OK, good. So we've got a recursive form of this cost-to-go function that we exploited with the additive thing, the additive form. And now, the optimal policy comes straight out. The best thing to do, if you're in state x at a time n, is just the arg min over u of g of x, u plus J star of x n plus 1 at time n plus 1, with that same x n plus 1 defined by the dynamics. So in discrete time, optimal control is trivial. If you have an additive cost function, all you have to do is figure out what your cost is at the end, and then go back one step, do the thing that acts-- that in one step minimizes the cost and gets me to the lowest possible cost in the future. And if I just do that recursively backwards, I come up with the optimal policy that gets me from any x in n steps to the end. Does that make sense? Ask questions. Do people buy that? Is that obvious, or does that need more explanation? OK. Ask questions if you have them. All right. So we're going to use the discrete time form again when we get to the algorithms. But I'm trying to use it today to leapfrog into the continuous time conditions for optimality. So what happens if we now do the same sort of discrete time thinking, but do it in the limit where the time between steps goes to 0? So let me try to do the limiting argument to get us back to continuous time. OK. Now we've got our cost function again: h of x at capital T plus the integral from 0 to T of g of x, u dt.
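Before the continuous-time limit, here is that discrete-time backward recursion as code-- a minimal MATLAB sketch on a crude grid; the dynamics, costs, grid, and horizon are illustrative stand-ins, not anything from the lecture:

xs = linspace(-2, 2, 41);            % discretized states
us = linspace(-1, 1, 21);            % discretized actions
N = 50;                              % horizon
f = @(x, u) x + 0.1*u;               % toy one-step dynamics, x[n+1] = f(x[n], u[n])
g = @(x, u) x.^2 + u.^2;             % one-step cost g(x, u)
J = xs.^2;                           % terminal cost h(x), evaluated on the grid
piStar = zeros(N, numel(xs));        % optimal action at each (n, x)
for n = N:-1:1                       % walk backwards from the end
    Jnew = zeros(size(J));
    for i = 1:numel(xs)
        xnext = min(max(f(xs(i), us), -2), 2);                      % next states, clamped to the grid
        [Jnew(i), k] = min(g(xs(i), us) + interp1(xs, J, xnext));   % min over u of g plus cost-to-go
        piStar(n, i) = us(k);
    end
    J = Jnew;                        % J is now the optimal cost-to-go from step n
end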
The analogous statement from this recursion in the discrete time is that J of x at t is going to be a limiting argument as dt goes to 0 of the min over u of g of x, u dt plus J of x of t plus dt, at t plus dt. OK. This is now-- that's just a limiting argument as dt goes to 0 of the same recursive statement. I'm going to approximate J of x at t plus dt as-- this is J star, let me not forget my stars-- as J star at x, t plus partial J star partial x times x dot dt plus partial J star partial t dt. It's a Taylor expansion of that term. OK. If I insert that back in, then I have J star x of t equals the limit as dt goes to 0, min over u, g of x, u dt plus partial J star partial x-- x dot is just f of x, u, remember-- dt plus partial J partial t dt. And I left off that J x there, because that actually doesn't depend on u. So I'm going to put that outside here, plus J at x and t. Those guys cancel. And now I've got a dt everywhere. So I can actually take that out, and my limiting argument goes away. And what I'm left with, 0 equals min over u g of x, u plus partial J partial x star plus partial J partial t. This is a very famous equation, will be used a lot. It's called the Hamilton-Jacobi-Bellman equation. AUDIENCE: Russ. RUSS TEDRAKE: Yes? Did I miss-- AUDIENCE: x dot in the middle term there. RUSS TEDRAKE: Here? AUDIENCE: Last equation. That x dot [INAUDIBLE]. RUSS TEDRAKE: Oh, thanks. Good. This is f of x, u. Good. Thank you. Good, thank you. That is the Hamilton-Jacobi-Bellman equation, often known as the HJB. So Hamilton and Jacobi are really old guys. Bellman's a newer guy. He was in the '60s or something. A lot of people say Hamilton-Bellman-Jacobi. That doesn't seem quite right to me. That's some guy in the '60s sticking his name in between Hamilton and Jacobi. So I try to-- I will probably say HBJ a couple of times in the class, but whenever I'm thinking about it I say HJB, OK? OK. So we did a little bit of work in discrete time. But the absolute output of that thinking, the thing you need to remember, is this Hamilton-Jacobi-Bellman equation, OK? These turn out to be the conditions of optimality for continuous time. Let's think about what it means. So do you have yet a picture of this, sort of, what J is? J is a cost-to-go. It's a function over the entire landscape. It tells me if I'm in some state, how much cost am I going to incur with my cost function as it runs off into time. In the finite horizon case, it's just an integral to the final time. In the infinite horizon case, I've started this initial condition, and I run my cost function forever. So J is a cost landscape, a cost-to-go landscape. This statement here says that, if I move a little bit in that landscape in x, scaled by this x dot, then the change I incur is my instantaneous cost. OK. The way my cost landscape-- the difference of being in initial condition 1 versus being in initial condition 2, if they're neighboring, goes like the cost function. And there's the cost function-- the cost-to-go function lives in x, and it lives in time. OK. It's one of the most important equations we'll have-- the Hamilton-Jacobi-Bellman equation. AUDIENCE: So we can take out the partial case that [INAUDIBLE]? Because that one is independent of u, the last term. So if we take that out, basically, the difference between the value [INAUDIBLE] with respect to time, in this time and going to the next time, that sort of seems like a TD error squared-- RUSS TEDRAKE: Oh, yeah. Yeah. Good.
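For reference, the boxed equation in cleaned-up form (with the f(x, u) correction from the exchange above folded in):

0 = \min_u \left[ g(x, u) + \frac{\partial J^*}{\partial x} f(x, u) + \frac{\partial J^*}{\partial t} \right].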
There's absolutely-- this is exactly the source of the TD error and the Bell-- yeah. It's exactly the Bellman equation. So yeah. So you're right. Partial J partial t could have been outside the min over u. It doesn't actually have u. But we're going to see all those connections as we get into the algorithms. But for-- this now is a tool for proving analytically and deriving analytically some optimal controllers. We need one more-- we need to say something stronger about how useful that tool is. So the sufficiency theorem is what gives this guy teeth, OK? So I told you that the Pontryagin's minimum principle was a necessary condition for optimality. It wasn't necessarily sufficient. If you show that the system satisfies the Pontryagin's minimum principle, then you're close, but you actually also have to say it uniquely solves that, it's the only solution to that, solves the Pontryagin's minimum principle. So there's extra work needed. The theorem we're putting up here is this saying-- is going to say that if this equation is satisfied, then that's sufficient to guarantee that the policy is optimal. OK. So given a policy pi x of t, and a cost-to-go function, J pi x of t, if pi is the argument of this, if pi is the policy which minimizes that for all x and all t, and that condition is met, then we can-- that's sufficient to give that J pi x of t equals J star x of t, and pi x of t equals pi star x of t. OK. The proof of that I'm not even going to try. It's sort of tedious. It's in Bertsekas, if you like-- Bertsekas' book. But we're going to use this a lot. So if I can find some combination of J pi and pi that match that condition, then I've found an optimal policy. OK. Let's use this to solve the problem we want-- the linear quadratic regulator in its general form. So we've got a system x dot equals Ax plus Bu. And let's say I have a cost function J of x 0 is h of x at T plus the integral of g of x, u dt-- the same thing I've been writing all day here-- where x 0 equals x, where h in general takes the form x transpose Qf x, and g takes the form x transpose Qx plus u transpose Ru. To make things-- to be careful, we're going to assume that-- we're going to enforce-- we're choosing the cost function. We're going to enforce that this is positive definite, making sure we don't get any negative cost here. And similarly-- actually, it only has to be semi-definite. Q transpose equals Q greater than or equal to 0 and R transpose equals R. That one does have to be positive definite. OK. Here's a pretty general linear dynamical system, quadratic regulator cost. To satisfy the HBJ, we simply have to have that this condition-- so 0 equals min over u x transpose Qx plus u transpose Ru plus partial J partial x star times Ax plus Bu plus partial J star partial t, that had better equal 0. So I need to find that cost-to-go function which makes this thing 0. It turns out the solution to these things, we can just guess a form for J. Let's guess that J star x of t is also quadratic, x transpose S of t x, again with a positive-- it's going to have to be positive. It could be-- in that case, partial J partial x is 2x transpose S of t. Partial J partial t is x transpose S dot of t x. OK. Let's pop this guy in. I want to just crank through here. So does it make sense at all, that the J of x, t would be a quadratic form like that? Why is that a reasonable guess? Yeah. AUDIENCE: Because the final time [INAUDIBLE] match the [INAUDIBLE]. RUSS TEDRAKE: OK. So in the final time, that's a reasonable guess, because it started like this. Yeah.
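With the quadratic guess written out explicitly (a cleaned-up rendering of the board work; the guess is J*(x, t) = x^T S(t) x with S symmetric and positive definite):

\frac{\partial J^*}{\partial x} = 2 x^T S(t), \qquad \frac{\partial J^*}{\partial t} = x^T \dot{S}(t) \, x,

0 = \min_u \left[ x^T Q x + u^T R u + 2 x^T S(t) (A x + B u) + x^T \dot{S}(t) \, x \right].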
And it turns out-- I mean, we're actually going to see it by verification. But for the linear system, when I pump the cost backwards in time, this quadratic cost, it's going to have to stay quadratic. OK. So I've got 0 equals min over u, x transpose Qx plus u transpose Ru plus 2x transpose S of t-- bless you-- times Ax plus Bu, plus x transpose S dot of t x. I need that whole thing to work out to be 0 for the minimizing u. So let's figure out what the minimizing u is now. Is it OK if I just sort of shorthand? I'll say the gradient of that whole thing in square brackets with respect to u here is going to be, what, 2Ru-- or u transpose R, I guess? We're going to try to be careful that this whole thing is a scalar. We're always talking about scalar costs. So I've got vectors and matrices going around, but the whole thing has to collapse to be a scalar. The gradient of a scalar with respect to a vector, I want it to always be a vector. The gradient of a vector with respect to a vector is going to be a matrix. So try to be careful about making-- that gradient better be a vector. Plus what's left here? 2x transpose S, that guy there, right? But I have to take the transpose of that. So it's 2B transpose S of t-- no, I screwed up, sorry. It's still x transpose. I mean 2x transpose S of t B. That thing has to equal 0. And that's where I get my transpose back. So u star, the u that makes this gradient 0, is going to be-- those 2's cancel. It's going to be negative R inverse B transpose S transpose x. Which is important to realize-- that was actually equivalent to writing negative 1/2 R inverse B transpose partial J partial x transpose. OK. So what does this mean? So I've got some quadratic approximation of my value function. It's 0 at the origin always and forever. If I'm at the origin, I'm going to stay at the origin, my cost-to-go is 0. The exact shape of the quadratic bowl changes over time. The best thing to do-- the negative of the partial J partial x-- is trying to go down the cost-to-go function. I want to go down the cost-to-go function as fast as I can. But I'm going to wait-- I'm going to change, possibly, the exact direction. Rather than going straight down the cost-to-go function in x, I might orient myself a little bit depending on the weightings I put on-- the cost I put on the different u's. So I'm going to rotate that vector a little bit. This is what I can do, and this is the weighting I've done. So the best thing to do is to go down your cost-to-go function, get to the point where my cost-to-go is going to be as small as possible, filtered by the direction I can actually go and twisted by the way I penalize actions. OK. And it's sort of amazing, I think, that the whole thing works out to be just some linear feedback law negative Kx-- yet another reason [INAUDIBLE] to use that form. OK. Sorry, I should be a little careful. This is-- it depends on time. So it's K of t x. Why should it depend on time? This is a-- what's that? AUDIENCE: We switch. RUSS TEDRAKE: Because we switch what? AUDIENCE: The actuation. RUSS TEDRAKE: There's no hard switch in the actuation here. This is saying, I'm going to smoothly go down a value function. This one isn't the bang-bang controller. This turns out to be a smooth descent of some cost-to-go function. Yeah? AUDIENCE: The S t equals partial [INAUDIBLE]. RUSS TEDRAKE: I mean, S of t is time [INAUDIBLE] itself. AUDIENCE: Yeah, so it [INAUDIBLE].
RUSS TEDRAKE: So intuitively, why should I take a different linear control action if I'm at a time 1 versus time 2? AUDIENCE: Because you're time dependent. So if you're very close to the final time, you want to [INAUDIBLE] lots of control, because you don't have that much time [INAUDIBLE]. RUSS TEDRAKE: Awesome, yeah. This is a quirk of having a finite horizon cost function. In the infinite horizon case, it turns out you're going to just get a u equals negative Kx, where K is invariant in time. But in the time-- finite horizon problem, there's this quirk, which is the time ends at some point, and I have to deal with it. If the bank closes at 5:00, if I'm here and it's 4:50, and the bank closes at 5:00, I'm going to-- I'd better get over there faster than if it was 4:30 and the bank closes at 5:00. In my mind, actually, there's a lot of problems that are-- bank closing is a weird one, but there are a lot of problems that are naturally formulated as finite horizon problems. Things-- maybe a pick-and-place. The minimum time problem was a finite horizon pick-and-place. There are a lot of problems which are naturally formulated as infinite horizon. I just want to walk as well as I possibly can for a very long time. I don't need to get to some place at a certain time. OK. But in many ways, the finite horizon time ones are the weird ones, because you always have to worry about the end of time approaching. OK. AUDIENCE: How do we get S t? RUSS TEDRAKE: How do we get S t? OK. Well, it's the thing that makes this equation 0. So what is that thing? I figured out what the minimizing u is. I can insert that back in. So I get now 0 equals Q plus x transpose-- I'm going to insert u in-- K-- or I'll do the whole thing, actually-- S of t B R inverse times R times R inverse. So I'm going to go ahead and cancel those out. B transpose S of t x. And the negative signs, because there's two u's there. The negative sign didn't get me. And then plus 2x transpose S of t Ax plus-- so minus B R inverse B transpose S of t x plus x transpose S dot of t x. It turns out that this term here should be the same as that term there, modulo a factor of 2. If you look, it's S, B, R inverse, B transpose, S. So this one actually, I can just turn that into a minus. OK. And it turns out that everything has this x transpose matrix x form. So I can actually-- in order for this thing to be true for all x, it must be that the matrix inside had better be 0. So it turns out that Q minus S of t B R inverse B transpose S of t plus 2 S of t A plus S dot of t had better be equal to 0. OK. Now, I made some assumptions to get here. Know what assumptions I made? The big one is that I guessed that form of the value function. And one of the things I guessed about it was that it was symmetric. So let's see if we're looking symmetric. So Q, we already said, was symmetric. That's all good. That guy's nice and symmetric. That's all good. So this is the one we have to worry about. Is that guy symmetric? It's actually not symmetric like that. But I can equivalently write it as S t A plus A transpose S t, since S is symmetric. And that guy is symmetric. I said a very strange thing. I just said that the matrices are-- this one is not symmetric, I can write the same thing as-- it's this. So what I mean to say is that these are equivalent for all x. Because this has got to equal this. OK. So, good. OK.
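Collecting the pieces into the equation he is about to box (a cleaned-up rendering, with the symmetrized S A + A^T S form and the sign conventions from the derivation above):

-\dot{S}(t) = Q - S(t) B R^{-1} B^T S(t) + S(t) A + A^T S(t), \qquad S(T) = Q_f,

u^*(t) = -R^{-1} B^T S(t) \, x = -K(t) \, x.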
So this equation, which I'm going to write one more time since it's an equation that has a name associated with someone famous-- deserves a box around it, I guess. So this is the Riccati equation. I'm going to move the S dot over to this side. It's a Riccati equation. And I also have that final condition that you rightly pointed out, where S of capital T had better equal Qf. So by direct application of the Hamilton-Jacobi-Bellman equation, I was able to derive this Riccati equation, which gives me a solution for the value function. Because it gives me a final condition on an S and then the governing equation which integrates the equation backwards from capital T to 0. And once I have S, remember, we said that the u was just negative R inverse B transpose S of t x. So I've got everything. Once I have S, I have everything. OK. So this is one of the absolute fundamental results in optimal control. It turns out that if you want to know the infinite horizon solution to the-- if you look at the solution as time goes to infinity-- remember, I wrote my cost function initially was-- the problem we're trying to solve is an infinite integral. It turns out that the infinite horizon solution is the steady-state solution of this equation. So if you integrate this equation back enough, it's stable. It finds a steady state where S dot is 0. And that solution when S dot equals 0-- the S which solves that whole equation with S dot set to 0-- is the infinite horizon solution. OK. If you open up Matlab, and you type lqr A, B, Q, R, then it's going to output two things. It outputs K, and it outputs S. Solving this thing is actually not trivial. So how do you solve that for S? The hard one is it's got this S in both places. But it's like the Lyapunov equation again. It's so famous, it comes up so pervasively, that people have really good tools for solving it, numerical tools for solving it. So Matlab's got some nice routine in there to solve, to find S. And when I call lqr with the dynamics and the Q, R, it gives me exactly the infinite horizon S and the infinite horizon, non-time-varying K. If you need to do a finite horizon quadratic regulator, then you actually need to integrate these equations yourself. OK. I hate going that long with just equations and not intuition. So let me connect it back to the brick now. That was the point of doing everything in the brick world here. OK. So we've got q double dot equals u. We've got now the infinite horizon J of x-- the infinite integral of g of x, u dt-- where I said g of x, u was q squared plus u squared, penalizing the position and the action. So now that's exactly in the LQR form: A is 0, 1, 0, 0. B is 0, 1. Q is 1, 0, 0, 0. And R is 1. It turns out I can actually solve that one algebraically for S. If you pump all the symbols in-- I won't do it because there's a lot of symbols-- but in a few lines of algebra, you can figure out what S has to be, just because so many terms drop out with those 0's that actually there's-- there's three equations and three unknowns. And it turns out that S has to be square root of 2, 1, 1, square root of 2. OK. The u, remember, was negative R inverse B transpose S x, which, if I punch those in, gives me negative 1, square root of 2 times x, which gives me closed loop dynamics of x dot equals Ax minus BKx, equal to 0, 1, negative 1, negative square root of 2, times x. OK. Now I'm going to plot two things here. First thing I'm going to plot is J of x. J of x is x transpose S x, with S equals square root of 2, 1, 1, square root of 2.
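A MATLAB sketch of both routes to S for the brick-- the toolbox call and the backward integration-- plus the two plots he is setting up; lqr needs the Control System Toolbox, and the horizon, step size, and initial condition below are illustrative choices:

A = [0 1; 0 0]; B = [0; 1];
Q = [1 0; 0 0]; R = 1;                 % penalize position and action only
[K, S] = lqr(A, B, Q, R);              % S is about [sqrt(2) 1; 1 sqrt(2)], K about [1 sqrt(2)]
% Same steady-state S by integrating the Riccati equation backward
% from S(T) = Qf = 0 with crude Euler steps:
T = 10; dt = 1e-3; S2 = zeros(2);
for t = T:-dt:dt
    Sdot = -(Q - S2*B*(R\(B'*S2)) + S2*A + A'*S2);
    S2 = S2 - dt*Sdot;                 % stepping backward in time
end
disp(S2)                               % converges to the lqr answer above
% Contours of the cost-to-go x'*S*x, and one closed-loop trajectory on top:
[q, qd] = meshgrid(linspace(-2, 2, 60));
Jgrid = S(1,1)*q.^2 + 2*S(1,2)*q.*qd + S(2,2)*qd.^2;
contour(q, qd, Jgrid); hold on;
[t2, x] = ode45(@(t, x) (A - B*K)*x, [0 15], [2; 0]);
plot(x(:,1), x(:,2)); xlabel('q'); ylabel('q dot');   % spirals down the bowl into the origin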
A little thinking about that, you'll see that it comes out to be an ellipsoid that is-- [INAUDIBLE]-- sort of shaped like this. I draw contours of that function, of that x transpose S x. And the cost-to-go is 0 here. And it's a bowl that comes up in this sort of elliptic bowl. All right. So what is the optimal policy going to look like, given that that's my bowl? We said the best thing to do is go down the steepest descent of the bowl. I want to go down-- wherever I am, I want to go down as fast as I can. But I can't do it exactly. That was actually sort of a-- that's OK. I mean, I can't do it exactly, because all I'm allowed to do is change-- I have one component that I'm not allowed to change, right? I have that my q is going to go forward independent of u directly. So B transpose S x is going to give me a projection of that gradient onto this-- the thing I can actually control, which way I can point my phase portrait given my control. And then R is going to scale it again. And the resulting closed loop dynamics, let's see if we can figure that out. So if I take the eigenvectors and eigenvalues of that-- well, it turns out I'm not going to make the whole plot. My eigenvalues were negative 1 over square root of 2 plus or minus i times 1 over square root of 2, with v the corresponding eigenvectors. So the best thing I could possibly do-- if I didn't care about-- if I didn't worry about penalizing R, if I didn't worry about my control actuation-- would be to go straight down that bowl. But because I'm scaling things by-- I'm filtering things by what I can actually control, and I'm penalizing things by R, the actual response is a complex response which goes down-- goes down this bowl and oscillates its way into the origin. OK, good. It was a little painful. But that is a set of tools that we're going to lean on when we're making all our algorithms. You've now seen a pretty representative sampling of what people can do analytically with optimal control. When you have a linear dynamical system, and there's a handful of cost functions which you can-- either by Pontryagin or dynamic programming, the Hamilton-Bellman-Jacobi sufficiency theorem, those are really the two big tools that are out there. In cases, especially for linear systems, you can analytically come up with optimal control policies and value functions. Why did we distinguish the two? Why did I use one in one place and the other in the other place? Well, it turns out the Hamilton-Bellman-Jacobi sufficiency theorem has in it these partial J partial x, partial J partial t. So it's only valid, actually, if partial J partial x is smooth. The policy we got from minimum time has this hard nonlinearity in the middle of it. It turns out that the value function that you have in the minimum time problem also has a hard nonlinearity in it. If I'm here versus here, it's smooth, but the gradients are not smooth. The gradient is discontinuous. So on this cusp, partial J partial x is undefined. So that's the only reason why I didn't lean on the sufficiency theorem completely. How did Pontryagin get around that? The sufficiency theorem is talking about-- it's looking at over-- roughly over the entire state space. It's looking at variations in the cost-to-go function as I move in x and in time. Pontryagin, if you remember, was along a particular trajectory. It was verifying that a particular trajectory was locally optimal. And it turns out in problems like this in these bang-bang problems, along a particular trajectory, my cost-to-go is smooth.
The cost-to-go in the minimum time problem was just time, right? So the time I get-- the time it takes for me to go from here to here is just smoothly decreasing, like time, as I get closer. Along any trajectory, with these additive costs, the value function is going to be smooth. But along a non-system trajectory, some line like this, partial-- if I just look at J, how J varies over x, it's not smooth. So Pontryagin is a weaker statement. It's a statement about local optimality along a trajectory. But it's valid in slightly larger domains, because it doesn't rely on value functions being smoothly differentiable. Now, for the first-order-- sorry, for the double integrator, the brick on ice, we could have just chosen our K's by hand and pushed them higher or smaller. We could do root locus. We could figure out a pretty reasonable set of K's, of feedback gains, to make it stabilize to the goal. LQR gives us a different set of knobs that we could tune. Now we could more explicitly say what our concern is for getting to the goal by the Q matrix, versus what our concern is about incurring a lot of action cost through the R matrix. So maybe that's not very compelling. Maybe we just did a lot of work to just have a slightly different set of knobs to turn when I'm designing my feedback controller. But what you're going to see is that, for much more complicated systems that are still linear-- or linearizations about very complicated systems, LQR is going to give you an explicit way to design these linear feedback controllers in a way that's optimal. So we're actually doing a variation of LQR now to make an airplane land on a perch, for instance. We can-- we're going to use it to stabilize the double-inverted pendulum, the Acrobot, around the top. So it's going to be a generally more useful tool. Down at the brick, double integrator level, you can think of it as almost just a different set of ways to do your root locus. OK. You have now, through two sort of dry lectures relative to the rest of the class, learned two ways to do analytical optimal control. One is by means of Pontryagin's minimum principle, one is by means of dynamic programming, which is through the HJB sufficiency theorem. And you've seen some representatives of what people can do with analytical optimal control. And it got us far enough to make a brick go to the origin. Right. And it'll do a few more things, but. OK, so that's about as far as we get with analytics. We're going to use this in places to start algorithms up. But if we want to, for instance, solve the minimum time problem or the quadratic regulator problem for the nonlinear dynamics of the pendulum-- if I take my x dot equals Ax plus Bu away and give it the mgL sine theta, then most of these tools break down. Next Tuesday happens to be a holiday, virtual Monday. So we won't do it on next Tuesday. But next Thursday, I'm going to show you algorithms that are based on these. This is the important foundation for algorithms that are going to solve-- algorithmically-- more optimal control problems than we can solve analytically. And then the-- we'll go on from there to more and more complicated systems. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_16_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, let me say the big idea for LQR trees again, and you can tell me where you want more details. So here's the basic story. I've got some goal I want to get to, and I've got a lot of potential-- I'd like to be able to get there from every initial condition, let's say. The idea was we know how to design-- we know how to stabilize trajectories. So let's just pick a point at random, design a path to the goal like that, design the LTV LQR stabilizer on this. The great thing about that is, first of all, that it will locally stabilize the trajectory, but second of all, because we can compute not only-- because we are given both: u equals some time-varying feedback-- we have a feedback policy from that-- but we're also given an estimate of the cost-to-go, which is in this time-varying matrix, yeah, from integrating the Riccati equation backwards. Because of that, when we're doing our LQR design, we actually have a good candidate for a Lyapunov function for the system around that trajectory. So that's an important observation, that when we do LTV LQR on a trajectory, we get both, OK. Now this thing, for the linear system, even the linear time varying system, this is a true Lyapunov function. From all initial states, this function will just only get smaller with time, OK. But since the actual system is non-linear, as I get further from my trajectory, some of the non-linearity is going to come in and corrupt my Lyapunov function. At some point, when I get far enough from the trajectory, those higher order terms are going to mean that this thing doesn't have a negative time derivative. But what I care about for a Lyapunov function is that this thing is less than or equal to 0. So the idea with the certificates is to do a higher order polynomial expansion of the dynamics along this trajectory and try to estimate the threshold where this stops being true, OK. And that gives me, essentially, a funnel. If I do it everywhere in time, that gives me a funnel along the trajectory over which I know the LQR cost-to-go is a Lyapunov function for the nonlinear system. And it's mostly conservative. The way that we construct that threshold is conservative, meaning the real basin of attraction should be bigger than this estimate. The only weakness is that I'm only doing it by doing a polynomial expansion of the system here. So if there was a hard discontinuity right next to the trajectory that didn't show up in the Taylor expansion, then I wouldn't ever see it. I'm not exhaustively searching the non-linearity around the trajectory. I'm just saying, along this trajectory, what is the higher order expansion? I do a Taylor expansion-- a third order, fourth order, whatever it takes-- and I use that to check when it breaks the cost-to-go, OK. So that's the certificate. That's just, if I have a single trajectory, that I can use this. The real cool thing is that thanks to Pablo and Sasha Megretski, we can do that efficiently with a convex optimization.
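One way to write the certificate being described, in symbols (my notation for the level set-- the bound rho(t) is not named in the lecture):

V(x, t) = \big( x - x_0(t) \big)^T S(t) \big( x - x_0(t) \big),

\dot{V}(x, t) \le 0 \quad \text{for all } x \text{ with } V(x, t) \le \rho(t),

so the sublevel set V(x, t) \le \rho(t) is the funnel: the largest rho(t) for which the polynomial expansion certifies that inequality gives the estimated basin of attraction along the trajectory.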
And then the idea is, if I can do it for a single trajectory, then why not put that back into sort of an RRT kind of framework and try to build lots of trajectories that are stabilized? OK, so the first step was just to pick a point at random, design a single trajectory. The second step is, let's pick a new random point. I don't actually have to go all the way back to my goal. I just have to go to the nearest point on the current tree and stabilize that. And then I pick another point, find the nearest point on the tree, build the trajectory in like that, stabilize that. If I pick a new point and it's already in the basin of attraction, I don't need to add an edge. That would be a waste of my time. So the effect is I get these-- I didn't say carefully what the results are, because I can't prove them yet. This is still hot off the press. But I think that I can say that it probabilistically covers the reachable state space. Every place that I can get to the goal from, there exists a controller. If I design enough samples, I should be able to get to the goal, given I do enough steps in my LQR tree. So as time goes to infinity, the entire space will be covered with basin of attraction of a controller that gets me to the goal, which is a pretty powerful thing to say, if you're willing to wait till time goes to infinity. And the practical thing that's nice about it is it seems to happen pretty quick with a handful of-- with a fairly small number of trajectories. Yeah? John. AUDIENCE: If you want to guarantee [INAUDIBLE] in your system has that, which property would you have to let it sample inside the basins of attraction? RUSS TEDRAKE: Yeah, so that's why I have to qualify the guarantees. I'm only saying the nonlinear system as represented by the Taylor expansion around the trajectories. That's the weakness. So it really only works for smooth systems. If there's a cliff here and I design a trajectory right here, it might come up saying a basin of attraction's here, even though that's just not true. OK. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: If I smooth the cliff-- sorry, John. If I smooth the cliff and therefore, when I Taylor expand here, I can see that there's a cliff there, then I'd like to think that the certificates would do the right thing. But if it's a hard cliff where the higher order expansion here doesn't see that cliff, then there's no hope. AUDIENCE: If you [INAUDIBLE] inside the basin, though, couldn't you [INAUDIBLE]? RUSS TEDRAKE: That's a good question. So it's just building more and more accurate certificates as time goes to infinity. It could be. It could be a good idea. My suspicion is that, in practice, this is going to be good enough for-- and we're not going to want to do the-- maybe I'm just the kind of guy that cuts the corners, but I think this is a pretty good solution. Ernesto. AUDIENCE: [INAUDIBLE] basin of attraction [INAUDIBLE]. RUSS TEDRAKE: So I use-- whenever I design each trajectory, I use dircol-- direct collocation. Why not? It's so I could say the branches are locally optimal, just because that's easy to accomplish. But there's no sense in which it's globally optimal. So I just try to say that every state will eventually get there and not that it'll get there optimally. Mm-hmm. AUDIENCE: But again, if you allowed it to sample inside the basins of attraction, you would probabilistically get some kind of convergence to global optimal.
RUSS TEDRAKE: There are ways to try to get at that problem. I don't think it's a one-step change from what I said. I think you have to do something more. I mean, any one of these trajectories-- maybe that's getting busy. Any one of those trajectories is just locally optimal. So let's consider the case where the goal is here. There's something here, and I did this. And for some reason, I got a trajectory like that, which is locally optimal but not globally optimal, because there's a shorter path here. So I need to somehow-- I would need to somehow include the mechanism, which is, I think, what you're getting at, to eventually find a different path to the same place and overwrite the old path. And I don't have any mechanism for that inside. And it turns out to-- that's actually what I was going for initially. Turns out to be a little bit nontrivial to do. Chris Atkeson has a nice idea that goes towards doing that a little bit more on composing trajectory libraries. But I gave up optimality and tried to get coverage with a basin of attraction in that design. So some of you have been talking to me about the projects and have been asking about these things. So I think Mark and Matt are thinking about trying to help with this-- I mean, this is new. I mean, you guys can definitely help with the idea. So we're trying to figure out, for instance, if there's actuator limits. These guys are talking about trying to figure out how to compute the certificates, and even the LQR stabilizer, if you not only have a quadratic [INAUDIBLE] on u but have some hard limits on u, because that's a problem we actually hit in practice. So one of the final projects in the class, I think, is going to be helping with that. There's problems of, how do you design the stabilizer through impacts and in periodic systems. There's lots of good questions. So if you're excited about that idea-- I'm definitely excited about the idea right now-- then I'll see what you wrote up today, but we could maybe try to find a way to connect that. I'd be thrilled to have your help in making that idea more relevant. Yeah? OK. Does that cover? Now for something completely different, although related, of course. So in the LQR trees and basically in everything that we've done so far in class, we've made a handful of important assumptions. The most important one, probably, is we've always assumed that we have the model of the system. So next week we're going to get at, how do you do some of these optimizations when you don't even have a model, OK. We've also assumed so far that the current state of the system is known. In other words, the state of the robot is fully observable. If I stop assuming that, we start getting into discussions about state estimation, and we will towards the end of the class. And there's another assumption we've been making, which is that the dynamics are deterministic, let's say, that pretty much, the system goes where I think it's going to go. Now I chose to write that and then cross it out, because that's not quite what we're assuming, OK. I want to write that. But maybe what's a-- if we really were assuming that the dynamics were deterministic, then we wouldn't have been spending much time talking about feedback at all. We would have been just focused on open loop things. So let's think. Let's have a short philosophical argument about what we're actually doing, yeah? So we're not quite assuming that it's deterministic. 
Maybe a better way to describe what we're doing is we're assuming something specific about the disturbances in the system. We're assuming that disturbances look like a change in initial conditions. That's one way to say it, an un-modeled change in initial conditions. By virtue of talking about these feedback stabilizers, the motivation is that, OK, I'm following this trajectory. When the model's right, all is good. If something does happen, then OK, it's going to move me somewhere in state space. But as long as that's in the basin of attraction-- the whole idea of really a basin of attraction is going at the idea that I can handle disturbances by just being robust in state. But that's actually a subtle thing to assume. I want to be a little bit more explicit about what that means. Really, it sort of implies that we're assuming that disturbances are impulsive, not constant-- instantaneous, impulsive. If a disturbance for my walking robot was someone put a new weight on my leg, then that's not something that our designs so far have handled, because that's more like the model changed. We're talking about something that moved me, and now I have to deal with it. So if the disturbance lasts for a long time, then that actually feels a lot more like a model change. But if it's impulsive, I can think of it as a change in initial condition. And the other thing that that implicitly assumes is that the disturbances are rare. If I got impulsive disturbances 1,000 times a second, then, again, my completely deterministic analysis isn't probably the relevant one, OK. So those are-- implicitly, we've been making these assumptions the whole time. The whole idea of talking about basins of attraction of deterministic systems and doing feedback design implicitly makes that assumption. So there are three things there, the fact that it's un-modeled and impulsive and rare, I guess. If your disturbances are not instantaneous rare disturbances, then I think I would advocate quickly for starting to think about your dynamics as not deterministic dynamics but as stochastic dynamics, OK. And even if they are impulsive and rare, but if you have a model of them, then you should still be able to do better by explicitly letting your feedback design reason about the stochastic dynamics, OK. So today I want to start talking about-- I want to start breaking down our assumptions. And throughout the rest of the class, we're going to try to break these down. Let's start breaking down the assumption of deterministic dynamics, OK. AUDIENCE: [INAUDIBLE] talking about the LTV LQR. RUSS TEDRAKE: Mm-hmm. AUDIENCE: [INAUDIBLE] actually saw something, like if we actually have this [? passive ?] transition model and we move somewhere, and then we're a little bit off from the trajectory, then you look at the policy, which would bring this back [INAUDIBLE] [? projected ?] [? and ?] go back. RUSS TEDRAKE: I think the answer to your question is yes. So one of the things we're going to do today is show that the-- it's sort of subtle that that's yes. But anybody who knows linear quadratic Gaussian control, yeah, well, it turns out that we're actually doing linear quadratic Gaussian control, but that's a surprising result, OK. So in the specific case of linear dynamics, quadratic cost, Gaussian noise, then what you said is true. So that's a special case that we'll see quickly. But in general, it's not true. Right. So, again, this is leading into next week.
We're going to start talking about doing these optimal control derivations without any model. But today let's think about what happens if we have a stochastic model. OK, lots of ways people-- there's lots of different notations people use to talk about stochastic dynamics. The one I'll use is, I think, the most popular one. We still got our standard equations of motion, but we're going to add an additional input, which is some disturbance w as an additional input into our dynamics, where w of t is some noise process. OK, but if we let the noise come in through an input here, then we can still think about it as a deterministic function, our dynamics as a deterministic function, and keep it mostly the same as what we've been thinking about. Now some people don't like defining noise processes in continuous time. It's a little bit more natural to describe them in discrete time, and as we've done in the other class, let's do the discrete time case first. So in discrete time, we'll do the same thing. But now maybe we can-- it's easier to think about what w of n might be. So for instance, w of n might be some iid Gaussian process or something like that, which is just a complicated way to say that at each n, w n is sampled independently from a Gaussian. Let's say a normal distribution, like something like that. So when I'm simulating this in MATLAB, if I have-- I can simulate an iid Gaussian process by just every time step calling randn, yeah, as if-- and it's independent from the w's I picked before. That's a pretty good model. Now one thing I want to avoid talking about today so far is, let's still assume that we have perfect state information. So I don't want to worry about sensor noise yet. It turns out it'll be pretty natural to think about it with some of the same tools, but let's just assume for now that when I'm in state x, I know I'm in state x exactly. But if I'm trying to think about, into the future, what's going to happen, what's the optimal thing to do, I have to worry about the fact that the noise in the system is going to push me around. OK, so we updated our dynamic equation. We better start-- if we're thinking about this in an optimal control sense, we better also update our definition for optimality. So it used to be that we said J in the discrete time case was just some sum from n equals 0 to N of g of xn, un. The problem with that now is that xn, this is going to be-- this is now a random process, yeah? If I run from the same initial conditions with even the same open loop tape, let's say, the system five times, this is going to be a different value every time I run it. x at time 3 will be different every time I run it, OK, which means J is also going to be a random variable. So it doesn't quite make sense to say my notion of optimality is some random variable. We want to choose a property of that random variable that we care about. There's different schools-- again, being philosophical a little bit, there's different schools of thought in control theory. Some people say the thing you should care about, the only thing you should worry about is the worst case behavior. You want to worry about the tails of your distributions and make absolutely sure that my plane-- if I'm riding on a plane from here to California, it's not going to fall out of the sky with five nines of probability or something, OK. I actually don't subscribe-- OK, when I'm riding on a plane, I do subscribe to that, OK.
But when I'm building robots that I don't have to put my life in jeopardy for, I don't believe in that. And I actually don't think animals do that. I think if animals were so conservative-- so that's the robust control approach, what I just described, worrying about the worst case. And the problem with robust control, the well documented, well discussed problem with robust control is it tends to come up with conservative control strategies. So if I'm worrying about never running into the table, then I'm never going to go anywhere near that table. There's another approach, which I prefer, that tends to be more common in the optimal control community, is to worry about maximizing the expected returns, OK. So now let's define J as the expected value of this cost function. There's a really obvious reason why the optimal control people choose to do that. It's because we already made a decision to do additive costs. Expectations play beautifully with summations, and so our life is going to stay clean and good if we're willing to do the expected value derivations, OK. But also philosophically, I think that me as an animal, I maximize my expected reward. Or a gazelle running through the field, I think, is maximizing expected reward. I mean, on average, it's doing spectacularly well. And every once in a while it wipes out and falls down and breaks its leg and gets eaten. But if it was worrying about that all the time, then it would never run as fast as it does. So-- or as aggressively as it does. So I think, personally, if you want to build aggressive robots, you've got to stop worrying about guarantees of stability, of performance. Just try to maximize the average performance. I think that's where I've put my chips. OK, so now, just to be clear here, so x-- again, every time I run it, even with an open loop u, if I completely control u, so u is not a random variable but some open loop tape, lets say, x will still be a random variable, OK. But J is not. J is saying that, given some control policy, some initial conditions, there is a well defined cost that I-- expected cost that I receive. And that's something I can optimize, OK. Does that make sense? OK, so I want to think about the implications of having these things turn into random variables with a simple example, OK. And that example I like, the one I like is a particle sitting in a bowl. Let's say some particle in a potential well. So let's say gravity is like this, and I've got some bowl I'm sitting in, and this particle is going to want to roll down that bowl, OK. But to make it interesting, we're going to say that this particle is subject to Brownian motion. Do you guys know what Brownian motion is? I guess-- was it-- I guess if you look down at a Petri dish of very small things and they don't sit still, there's debate about exactly the physical mechanisms of it, but phenomenologically, you can see very small cells that are not actively motile by themselves move around in a random fashion, doing random walks and things like this. So people-- I won't get into the philosophy of-- the philosophical debate of whether there exists stochasticity in the world, but I think stochasticity is certainly a relevant model for a lot of things we're doing here. And I don't want to get quantum in class. OK, but let's just say that this guy is subject to the dynamics of this bowl. But on top of that dynamics of this bowl, it has a tendency to jitter around a little bit, OK. So I'll write down the dynamics. I'll keep it discrete for now. 
So let's say at every time step, the update looks like going down the gradient of that bowl plus some random noise. I'll call it z of n. And this, again, I'll assume is iid, Independent Identically Distributed Gaussian noise. So if you've never taken a class with all these random variables and everything, I hope most of this will still come through. I'll throw in a few of these words and try to be soft about it. If you have questions about any of these, just ask me. I think that we're going to be able to say things that are fairly mechanically intuitive and helpful. So I hope it's accessible to everybody. And if it's not, ask me. OK, so this is a reasonable dynamical system now. On each update, it goes down the gradient plus some random noise, OK. Let's make our lives easier by choosing the potential U of x to be quadratic -- how about alpha over 2 times x squared? That's pretty nice, right? It makes our life good if everything is quadratic. Then the gradient term turns out to be negative alpha x. OK, so just to make sure we're thinking about it, if the noise is 0, then what's going to happen in this system? I've got a discrete time system, which, if I move that back over to the other side like we're accustomed to, goes like x of n plus 1 equals (1 minus alpha) x of n. So how does that thing behave? Where's the fixed point? At 0. And when is it stable? When's it stable? Discrete time linear system. AUDIENCE: [INAUDIBLE] equals 0 [? of 1 ?] but [INAUDIBLE]. RUSS TEDRAKE: Yeah. AUDIENCE: [INAUDIBLE] [? positive ?] [? x? ?] RUSS TEDRAKE: No, you're thinking too continuous time. In discrete time, your bounds on your eigenvalues are-- the absolute value has to be less than 1, which I think, in this case, means alpha has got to be between 0 and 2, yeah? Everybody OK with that? Yeah. That wasn't supposed to be the big insight for the lecture, but. OK, so the reason I wanted to say that-- OK, so we definitely have some stable dynamics pushing us this way in this bowl. The picture tells you that. The math tells you that. We have some stable dynamics that's pushing us towards here, and then we have something that's pushing us out, which is that noise. Maybe just as a thought exercise: if my bowl had been flat, if alpha was 0, and I've got this thing subject to Brownian motion, then if I look at it at time 100, where's it going to be? I mean, it could go sort of anywhere, yeah? If it's in this bowl and it's subject to Brownian motion, then where's it going to be at time 100? AUDIENCE: [INAUDIBLE] close to [INAUDIBLE]. RUSS TEDRAKE: You'd expect it, with high probability, to be around here. There's a chance-- I mean, even with small noise, if I were to get some abnormally rare large force in this direction 10 times in a row, if I watched this Gaussian process long enough, I might look and find it here. But that's going to be very low probability. So what I'd expect to find is if I watch it for some amount of time, and I look at it at time 100, it's probably going to be here. Going to draw some probability distribution here. And sure, there's some tails here that'll say if I looked, maybe I'll find it there, but that's pretty unlikely, OK. So hopefully when we're all done with this, we're going to get that out. And it's not too hard to see it, actually. So what's the best way to say it? So we're back to the noise case again, putting the noise back in.
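A minimal MATLAB sketch of this setup, tying it back to the randn-based iid noise model mentioned earlier: simulate the bowl many times from a known initial condition and look at where the particle ends up at time 100. The parameter values here are made up for illustration, not from the lecture.

```matlab
% Sketch (illustrative parameters): the particle in the quadratic bowl,
% x[n+1] = x[n] - alpha*x[n] + z[n], with z[n] iid Gaussian via randn.
alpha = 0.5;            % bowl steepness; stable for 0 < alpha < 2
sigma = 0.2;            % noise standard deviation (assumed value)
N = 100;                % look at the particle at time 100, as in the text
K = 10000;              % many independent runs to see the distribution
xN = zeros(K, 1);
for k = 1:K
    x = 0;                                % known initial condition (a delta)
    for n = 1:N
        x = (1 - alpha)*x + sigma*randn;  % deterministic part plus noise
    end
    xN(k) = x;
end
histogram(xN);   % piles up near 0, with low-probability tails farther out
```

The histogram is exactly the probability distribution the lecture is about to derive in closed form.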
Can we compute, then-- so if I know where I am at time n-- if my sensors are perfect, like I said, I know where I am at time n-- where am I going to be at time n plus 1? So let me write that as some probability distribution: where am I at time n plus 1, given I know where I am at time n? What's that going to look like? AUDIENCE: [INAUDIBLE] Gaussian function [INAUDIBLE] 1 minus [INAUDIBLE] n by some variance, which is kind of [? problematic, ?] actually. RUSS TEDRAKE: Awesome. Right? So probably, I'm going to be at 1 minus alpha times xn, but then some Gaussian distribution centered around that point. The deterministic part puts me here, and this part then adds noise to that, OK. So that's going to be my full Gaussian here: 1 over the square root of 2 pi sigma squared, e to the negative (x of n plus 1 minus (1 minus alpha) x of n) squared, all over 2 sigma squared, yeah? You agree with that? If I know where I am at time n -- which, by the way, is equivalent in probability space to saying I have a delta function here; I know where I am, my probability distribution is a delta function -- then on the next step, I'm going to be at 1 minus alpha times that, and I'm going to have a distribution whose mean is this. This is a Gaussian distribution: e to the negative (x minus mu) squared over 2 sigma squared. This is the new x. This is mu. Yeah? Good. So now we have everything we need, really, to proceed. So let's say I knew where I was at x0. On x1, I'm going to be at the function described by this. Where am I going to be at x2? Well, in general, I have to do the update. And let me use the notation P of n plus 1, meaning the probability distribution over x at time n plus 1. So think of it as a different function at each discrete time. Notationally, it's the cleanest, I think. Well, that's going to be-- I have to think about-- oops, sorry, this is a y. Yeah. My fault. So I have to think about all the possibilities. I want the probability that I was at y equals 0, and then the probability of being at x given I was at y equals 0. But also, I have to think about, what if y was 1? Well, what's the probability of that? And I'm going to sum the whole thing up in a continuous way, and I'm going to get my new probability of being at the new place. You guys OK with this equation? All right, I have to consider all possible cases of where I was at time n-- that's given by this-- and then apply my dynamics, which was given by this, to get my new distribution, OK. Hope that's OK, but even if it's not, you'll still be OK here in a second. So we can do that. So let's say that P of n at y is a delta function. That's what I said. So I know my initial conditions. I should have drawn it right here to match that plot. Well, then after one step, the integral just picks out the Gaussian centered at that y. So after one step, I'm going to be at a Gaussian distribution centered around 1 minus alpha times where I was. Now in the next step, I have to consider the fact that I could be anywhere in that Gaussian, weighted appropriately. And I have to consider all of the updates from those. That's what this integral is doing. And it turns out the magic of Gaussians and linearity is that if P of y is a Gaussian, and I multiply it by another Gaussian and integrate, I get out a Gaussian. Yeah. Life is good. So I'll leave the actual math to your-- eh, what the heck.
I'll just write the answer. If you push a Gaussian through, it turns out to be 1 over the square root of 2 pi sigma squared, times the integral from negative infinity to infinity of e to the negative (x minus y prime) squared over 2 sigma squared-- let me skip one line just to keep things moving-- times 1 over (1 minus alpha), P of n at y prime over (1 minus alpha), dy prime, where y prime is (1 minus alpha) y. I haven't done a lot of work yet. I just changed coordinates into y prime. And it turns out, for instance, if I guess that the steady state is a Gaussian form, and I look for a place where this and this can possibly be the same function -- if I want to look at the steady state of this dynamics -- then I find that P star of x is 1 over the square root of 2 pi sigma 0 squared, e to the negative x squared over 2 sigma 0 squared. It's a Gaussian. It's actually a mean 0 Gaussian that is the steady state of that update, OK, which is just a few lines in the notes, where sigma 0 squared is sigma squared -- the noise from the Brownian motion -- divided by 1 minus (1 minus alpha) squared. OK? So I think, actually, these equations tell the entire story. If I start my system-- well, specifically, if I start my system in a delta function or in a Gaussian distribution of initial conditions, then I'm going to be Gaussian for the rest of time. Turns out even if it's not, it'll go to a stable Gaussian. And if I look far enough into the future, at the steady state of that probability distribution, then it's actually going to be what we hoped we'd find. It's a mean 0 Gaussian. This was 0 in my plot. And its width, its variance, is given by two competing terms. You've got the noise from the Brownian motion trying to push you out into larger variance, and you've got the competing force of the stability of the dynamics pushing you back in, OK. And this also is valid exactly when alpha's between 0 and 2 -- the denominator stays positive in that regime. OK. So if I want to look at that particle at time 100 or time 10,000, then I should expect to see it somewhere with probability given by this distribution in the vicinity of that 0, OK. And that comes out of simple math of pushing this Gaussian through this equation, OK. Now most of you probably knew that before. Why do you know this already? Maybe not in this level of detail, but why do many of you know this already? It's a Kalman filter, right? The Kalman filter's forward process takes a Gaussian, shoves it through a linear system, and it stays Gaussian. This is the single variable, little more careful version maybe than you'd do if you call MATLAB's Kalman stuff. But yeah, so that's not a surprising result. But I think it forces you to think about a couple things. So like I said, the stability of the system is critical in determining that final distribution. But even more significantly than that, there are some implications of having noise in the system. Implications of having stochastic dynamics, OK. If you want to reason about the cost that you're going to incur, given some policy, for instance, then in order to reason about the future dynamics, even though you know the current state -- so even given initial conditions -- it's not enough to just reason about x. You have to reason about that entire distribution, what I called Pn of x, not just x. OK, so as soon as we start doing stochastic stuff, we have to change our view of the world.
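A short MATLAB sketch that checks both results above numerically: it pushes a (discretized) delta-function distribution through the Gaussian update repeatedly, and compares the variance it settles to against sigma 0 squared equals sigma squared over (1 minus (1 minus alpha) squared). The grid range and parameter values are assumptions.

```matlab
% Sketch: numerical master-equation update for the linear-Gaussian bowl,
% p(x|y) = N(x; (1-alpha)*y, sigma^2), starting from (nearly) a delta.
alpha = 0.5; sigma = 0.2;
xs = linspace(-2, 2, 401);  dx = xs(2) - xs(1);
P = zeros(size(xs));
[~, i0] = min(abs(xs - 1));  P(i0) = 1/dx;     % delta function at x = 1
for n = 1:60
    Pnew = zeros(size(xs));
    for i = 1:numel(xs)    % integrate the update over all y on the grid
        kern = exp(-(xs(i) - (1-alpha)*xs).^2/(2*sigma^2)) / sqrt(2*pi*sigma^2);
        Pnew(i) = sum(kern .* P) * dx;
    end
    P = Pnew;
end
mu = sum(xs .* P) * dx;                        % should approach 0
v  = sum((xs - mu).^2 .* P) * dx;              % variance of the steady state
v0 = sigma^2 / (1 - (1 - alpha)^2);            % predicted sigma_0^2
[v, v0]                                        % the two should agree
```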
The state x is not the only thing you care about moving forward. You care about the probability distribution of states that you live in. What would you say about the stability of this system? If I asked you, is that a stable system, what would you say? AUDIENCE: [INAUDIBLE] [? stable it ?] [? is ?] [? or something? ?] RUSS TEDRAKE: I'm asking-- and we're going to do that, but I'm asking you for your intuition. What would you guess? Would you feel comfortable if I said, it's a stable system, let's move on? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: In some ways, it's OK, because the distribution is stable. x is not stable, OK. If I look at a-- if I'm at-- there's no fixed point in x. The noise is going to keep moving me around, OK. But I told you that P of x, actually-- well, I only told you it's a fixed point, but I can tell you it's a stable fixed point. P of x is actually stable, OK. OK, so basically, this equation right here, this update right here, it's a very famous, important thing. Well, so important that it's called the master equation, yeah. I mean, don't just name it after some guy. Call it the master equation. And there are various versions of the master equation and specific problems that are named after people's last names, but in general, it's the master equation, OK. So you can't forget how significant it is. And the idea is that in the master equation, you're looking at the dynamics of the probability distribution. Not the dynamics of a single state, the dynamics of a probability distribution, OK. So that probability distribution, actually, in the master equation, is stable, OK. Now there are more complicated cases, actually. So what about this one? What's this thing going to do in the long run? AUDIENCE: Bimodal distribution. RUSS TEDRAKE: It's going to have a bimodal distribution in the long run. So it would be inappropriate to say either of those points-- I mean, that either of those points are fixed points. For the deterministic case, they are. For the stochastic case, they're not. And you're right. It's going to go to some distribution like this, OK. I have a decision point, which of my 10 pages I should do. I could talk about some examples of just stochastic dynamics, for instance, on walking robots, or I could get to the optimal control of the more simple systems. I have to set John up for next week, so I guess I got to just tell you that, actually, we've done some work thinking about, for instance, the rimless wheel-- I'll just tell you the setup, and I won't tell you all the details. So here's a realistic example of that sort of dynamics. Take your rimless wheel, OK, dynamics. And let's say, instead of walking down some constant ramp, let's say on every step, the ramp angle is drawn from some distribution. OK, so this is passive walking on rough terrain, mm-hmm. And it turns out it's not following a limit cycle anymore. But it's always-- its long-term probability distribution is-- well, there's a slightly more complicated story. If I look long enough, this thing has an absorbing state. So if I take a big enough step, then eventually, I'm going to lose enough energy. Remember, the deterministic system had two fixed points. One was standing still. The other one was rolling at a constant speed, OK. The standing still fixed point on rough terrain is an absorbing fixed point. If I get there and I never take another step, then I'm never going to leave that fixed point. So that is actually a true fixed point, yeah. 
The rolling fixed point, you're going to tend to bounce around that fixed point. So maybe this picture is something more like this, yeah. OK? So the standing still fixed point, if I get in there, it's absorbing. I'm never coming out. But the rolling fixed point, you tend to bounce around this limit cycle, OK. And then every once in a while, in the stochastic dynamics case, they say that these particles make an escape attempt-- that's what they call it. OK-- and maybe shoot over and fall down, OK. And it's really very beautiful. If you watch the probability distributions as they propagate through the rimless wheel equations, or the compass gait equations, or you name it, then what you get is this probability mass around here. For a long time, it's pretty likely that I'm in the vicinity of that limit cycle. And then slowly, as the time goes on, the escape attempts continue, to the point where this thing gets smaller and smaller until, as time goes to infinity, I'm only going to be standing still, OK. So the negative, the pessimistic view of that work, is to say that you're always going to fall down. You can build the best robot you want, but if you have a reasonable model of the dynamics, it's always going to fall down. If you wait long enough, a Mack truck is going to come along and hit it, or it's going to walk into a door or something like that. You can do the best you want, but it's going to fall down, and it'll end up on YouTube, probably, right? [LAUGHTER] So-- AUDIENCE: So you're assuming your ramp distribution is actually Gaussian? RUSS TEDRAKE: That's what we decided, yeah. AUDIENCE: OK. RUSS TEDRAKE: But that doesn't actually imply that the posterior is Gaussian, because it's going through nonlinear dynamics. AUDIENCE: Right, but I mean, in reality, the random distribution is never going to be truly Gaussian, right? Because-- RUSS TEDRAKE: Everything's Gaussian if you get enough of-- I don't know. I mean, I think that's-- so you want to do stairs or something more specific? AUDIENCE: Oh, well, I'm just saying, if there is a hard limit on how steep your ramp is-- RUSS TEDRAKE: Oh. AUDIENCE: --you could guarantee that-- RUSS TEDRAKE: Good. Very good point. All right, so if the distributions didn't have tails, then there are cases where I can bound it, never going over. But those limitations actually have to be pretty steep, because you have to make sure that, on a single step, the damping overcomes the biggest possible perturbation. If your noise can ever be bigger than what you can take out in a single step, then you will eventually, as time goes to infinity, find a way to get out. Yeah? So Katie Byl did some nice work in quantifying the metastable dynamics of walking. And actually, we call it metastable because that distribution is long-lived. It still makes sense to talk about where you'd expect this thing to be while it's walking. But eventually, we have to admit it's going to go to falling down. Like a diamond is a diamond for a very long time, but eventually, it'll turn back into graphite. OK, so good. So there are actually beautiful things that happen in stochastic dynamics, even if you don't care about control. But the thing that matters here is, we've switched hats. We've now started thinking about probability distributions and how they evolve with dynamics, and how we can change those probability distributions with control.
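The bimodal long-run distribution mentioned a moment ago is easy to see numerically. The exact bowl from the board isn't reproduced here, so this MATLAB sketch assumes a generic double-well potential U(x) = (x^2 - 1)^2; any double well gives the same qualitative picture, including the occasional escape attempts between wells.

```matlab
% Sketch (assumed double-well U(x) = (x^2 - 1)^2): gradient descent plus
% Brownian-style noise; the long-run distribution has two humps.
dUdx = @(x) 4*x.*(x.^2 - 1);       % gradient of the assumed potential
dt = 0.01; sigma = 0.7; N = 2e5;
x = 0; xs = zeros(1, N);
for n = 1:N
    x = x - dt*dUdx(x) + sigma*sqrt(dt)*randn;  % escape attempts included
    xs(n) = x;
end
histogram(xs, 100, 'Normalization', 'pdf');     % peaks near x = -1 and +1
```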
If I could control the shape of that bowl, then I could control those probability distributions, for instance. OK. So it turns out it's sort of trivial to solve stochastic optimal control problems, at least with dynamic programming, OK. And it works out, because of this additive cost structure, that it's roughly no more expensive to solve the stochastic optimal control problem than the deterministic one. And that matters. Maybe I should even make the point that it matters. So if I have a stochastic process-- in general, the optimal policy that you get from stochastic optimal control is going to be different from the one you get from deterministic optimal control, potentially in dramatic ways. Let me try to make that point here. So imagine I've got a trashcan robot. I shouldn't call it a trashcan. I've got a-- what are they? What are the names of those little red robots? Pioneer robot or something like this, yeah? And I want to get it to this goal. Let's say I've got a cost function like this, and I start over here. And as I go, I know that my wheels slip or something like that. My distributions are going to grow as I go, yeah. And I've got some ability to control them, so they're not going to grow unbounded, but let's say they're going to grow on my path to the goal, OK. There'll be two competing forces. There'll be my ability to measure and fight against disturbances, and there'll be the inevitable disturbances. And those two will again combine into some sort of distribution over time, OK. Now imagine-- like the scenario we talked about in the feedback case, imagine my cost function is 0 everywhere, negative 1 at the goal-- I want to get to the goal-- and something really big here, yeah. There are pits of fire in the middle of the lab. OK? No, I mean, right, we've got to make the point. If it was just cost 1, it wouldn't be as dramatic. But OK, so long story short, a stochastic optimal control solution is unlikely to choose this path, because even if the distributions are fairly tight, 0 times a big part of the probability distribution plus 1e6 times even a little part of the probability distribution is still a big number, OK. And so therefore, the expected value of going through here is that I'm going to incur quite a bit of cost. Does that make sense? So if I just did deterministic optimal control, we talked about using feedback to try to motivate not going through there. But really, the more direct way is to think about the probability distributions. So if I can control my probabilities to the point where I know 0 probability is going to be in here, then sure, go ahead through there. And the deterministic one will probably find that. But the stochastic one, if it realizes there's something there, will probably try to find a different path, OK. So that's one example. But in general, the stochastic optimal control policies are going to be different than the deterministic ones, and better. If you have a reasonable model of the disturbances you'd expect to encounter, then you should allow your optimal control tools to think about them, OK. AUDIENCE: [INAUDIBLE] stochastic environment s times 4 [INAUDIBLE] [? you have ?] [INAUDIBLE] [? more states, ?] because potentially, each action can move you to any of the possible states, while in the deterministic case, you can go to one state. So when you do [INAUDIBLE] [? worse ?] [? case. ?] RUSS TEDRAKE: Right. I knew someone was going to-- so I would say essentially no.
I almost wrote, essentially no. And the reason I want to make that comparison, actually, is because I want to compare it directly to the barycentric interpolation that we were doing before, which is what I'm going to do in a second. And that is already doing an interpolation, already going through some probability, some transition matrix, yeah. And it may be true, depending on your noise distributions, that if you add a lot of noise, that transition matrix will be more dense, and therefore, it might take more time, depending on how you compute it. My MATLAB implementation, it's the same. Yeah? Is that fair? And it's no more complicated to write down. How about that? OK? We'll completely understand what [INAUDIBLE] was asking in just a second. OK, so why is it no more complicated for me to write down on the board the stochastic optimal control case in dynamic programming? Remember, I said now I'm going to take the expected value of my additive cost. First let's just think about what the implications are for doing optimal control. So first of all, I can take that expectation inside. And now what's the expected value of g at xn, un? Well, x has got some distribution given by P of x, yeah. You can always take the expected value of a function of x by just integrating that function times its distribution. So this thing's going to work out to be an integral over all possible states of g of x, u of n, times P of n of x, dx. Right? OK, so you could imagine computing optimal policies by figuring out the state distribution by that evolution I was talking about before, and then integrating over the possible states, yeah? The costs for each state, and figuring out our J, figuring out a way to minimize that. I only wrote that down to make it look hard, OK. It turns out, again, just like before, the recursive form is beautiful and simple, OK. So you can imagine doing it that way, and that's correct, but just like before, the dynamic programming solution exploits the additive form and does a recursive solution which just works out beautifully, OK. So if I define J of x at time 0 as the expected value of -- let's do the final cost also -- then what's J of x at capital N? My cost to go at the final time. The time has expired, and I'm at state x. AUDIENCE: [INAUDIBLE] h of x. RUSS TEDRAKE: Is it the expected value of h of x, or is it just h? What is it? AUDIENCE: h is [? deterministic ?] [INAUDIBLE]. RUSS TEDRAKE: Awesome. Yeah. If I know I'm already in x, then there's no probabilities left, yeah. OK, and then if I go backwards, J of x at time k-- I should say J star of x. Sorry. J star of x at time k is going to work out to be the min over u of the expected value of g of x, u plus J star of f of x, u, w of n, at time k plus 1. Can you buy that? OK, so the reinforcement learning people always like to say that the reward or the cost can also be a random process, a random variable. I'm always in this case where I design the cost; it's a function of some random x, but g is deterministic. So actually, I could take that expectation right inside here, and I just have to do a min over u of g of x, u plus the expected value of my cost to go, OK. The nicest way to see how to implement that is-- let's go ahead. We've already discretized time. Let's discretize state and actions.
So now I have S of n plus 1-- remember, I switch to S's and a's when I discretize things-- is now f of S of n, some action, and my noise. And the advantage of discretizing state and actions is now I can do P of n plus 1, which is a function of S. I could think of that as just a vector where the ith element is the probability that S of n plus 1 equals Si. Is that OK? The probability distribution, remember, in general, was a function. In the particle in a bowl case, it was a continuous Gaussian function. If I discretize the state there, then I can represent that as a vector, saying, what's the probability of being in state one, what's the probability of being in state two, probability of being in state three, and so on, OK. So the reason to discretize states is I can turn my continuous function into a vector. Yeah. And I can turn this function into a transition probability matrix. So f goes to Tij, which is the probability of landing in Sj-- it depends on the actions-- given I was in Si and I took action a. This is a matrix. It's the transition matrix. And now the state distribution dynamics are going to just turn out to be a pretty simple matrix equation. In element form, P of j at time n plus 1 is the sum over i of Tij times P of i at time n. Yeah, so let me actually write the real vector form. That was for a single element of it. I could just write, if I'm doing it in column vectors, then it's actually going to be P of n times T of a, OK. Where these two are vectors, that's a matrix. So this is the discrete time, discrete action, discrete state master equation. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: I use it as a column vector. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Let's make sure. So the probability of being in state j, given I was in state i, should be the sum over-- let me write this thing. The probability of S of n plus 1 is-- what's that? I think I had it right, right? It's not a true loop, but I think that's right. AUDIENCE: [? Don't ?] [? you ?] [? need a ?] transpose? AUDIENCE: Yeah. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: I need a T transpose? AUDIENCE: Get a T transpose. AUDIENCE: You just can't hit the-- AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Oh, of course, because I did this. OK, yeah, so sorry. Yeah, I think the way I've got T defined-- thank you. That should be a transpose, yeah. Good. AUDIENCE: [INAUDIBLE] T transpose on the other side [INAUDIBLE]. RUSS TEDRAKE: In which case, I could have written it with the T as a transpose too, if I transpose the whole thing. Doesn't really matter if you like row vectors or column vectors. The point is the master equation, which looked a little scary before, yeah, turns into a simple matrix update in the discrete time, discrete state case. Yeah? AUDIENCE: So in this [INAUDIBLE] [? truncated ?] [? the actions ?] or you'd just not [? capture things? ?] Because you have hard limits if you discretize things. RUSS TEDRAKE: Good. So there's a question again, just like we had when we did dynamic programming for the value iteration, of how you go from the continuous probabilities and continuous states back to the discrete ones. So again, yeah, I would sample from my Gaussian and fill out the transition probabilities as a close, truncated representation of the Gaussian, and still interpolate with the barycentric interpolators. And now the DP update works out to be just as simple. J star of Si at time k is the min over a.
The expected value from this equation can be taken care of with just the transition matrix over here. I get my vector g of a, which is the cost of being in each state given I took that action, plus the sum over j of Tij of a times J star of Sj at k plus 1. Yeah? And I get rid of my expected values by working directly with the transition matrices. i on this side, j on that side. Many apologies. This turns out to be exactly-- the reason I said it's basically no more expensive to solve than the deterministic case is we already used this form when we were doing the barycentric interpolation. Because our problem in the dynamic programming originally was that when we started simulating this thing from one node forward, it didn't end up-- unless you were very, very lucky, it didn't end up right on top of another node. So we already had said that we're going to estimate the new cost to go as an interpolation of the value of the neighboring points, where that weighting came from the barycentric interpolators, OK. We're doing the exact same thing now. In fact, you could actually think of the barycentric interpolators as turning your deterministic problem into a stochastic problem, where the probability of going into each of these neighbors is the interpolant, OK. So the reason the barycentric works beautifully is that it turns the deterministic case into a stochastic case. Yeah? And that's why I wanted to say that it's no more complex, OK. T might have fewer zeros than in the deterministic case. With some probability distributions, it might be that I have to worry about hitting a lot more nodes. So it might take a few more cycles. But the equations are the same, and my MATLAB code is the same, OK. Simultaneously, or maybe conversely, this actually tells you why we have problems with the barycentric interpolators. Remember, the fundamental problem with the barycentric interpolators is that things leaked away from the nominal trajectories, we had chattering happening, and our bang-bang solution wasn't quite right. Because now you can think of it as: is my deterministic problem assigning some probability of going to each of these neighbors? And you can see that that distribution's going to start slipping away from the nominal trajectory. OK, excellent. So this is actually very important. Stochastic optimal control is a beautiful thing. If I can model the disturbances in any reasonable way, then I can get better policies by explicitly reasoning about them. And just like we said dynamic programming for low dimensional problems solves all these really hard problems that are analytically intractable and things like that, it can even solve a stochastic problem with almost no more work, OK. The low dimensional problems, even complicated ones with complicated distributions, DP can do the work for you. OK. So a few more things to say. There's one particular result, which we already mentioned earlier, that I have to mention here. This dynamic programming update, we used in our analytical optimal control, too. We used this as the basis to start designing things like our LQR controllers, which we turned into the HJB. And in the finite time case, we didn't even turn it into the HJB. We just backed it out with dynamic programming, OK. So we can actually use the same thing to analytically design some controllers for stochastic optimal control cases.
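A MATLAB sketch pulling the last two boards together: build a discretized transition matrix T(a), run the matrix master equation P(n+1) = T(a) transpose times P(n), and do the stochastic DP backup, where the expectation is just a matrix product. The sizes and costs are random placeholders, and a discount factor is added so the infinite-horizon loop converges; the lecture's finite-horizon backup doesn't need one.

```matlab
% Sketch: discrete-state, discrete-action stochastic dynamic programming.
nS = 50; nA = 3; gamma = 0.95;       % sizes and discount are assumptions
g = rand(nS, nA);                    % cost of each (state, action) pair
T = rand(nS, nS, nA);                % T(i,j,a) = Pr(S' = s_j | s_i, a)
for a = 1:nA
    T(:,:,a) = T(:,:,a) ./ sum(T(:,:,a), 2);   % make each row sum to one
end
% Master equation: propagate a state distribution under a fixed action.
P = zeros(nS, 1); P(1) = 1;          % all probability mass in state 1
for n = 1:100
    P = T(:,:,1)' * P;               % the transpose the class caught
end
% Stochastic value iteration: the expectation is a matrix-vector product.
J = zeros(nS, 1);
for k = 1:500
    Q = zeros(nS, nA);
    for a = 1:nA
        Q(:, a) = g(:, a) + gamma * T(:,:,a) * J;  % g + sum_j T_ij J(S_j)
    end
    J = min(Q, [], 2);               % greedy backup over actions
end
```

Note how T plays exactly the role the barycentric interpolation weights played in the deterministic case, which is why the MATLAB code is the same either way.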
And just like in the deterministic case, there's one outstanding result that everybody knows and uses, and that's the linear quadratic regulator with Gaussian noise. LQG is the shorthand. There are two forms of it. One of them has Gaussian noise also on the sensors, but let's just worry about the case where we know there's no uncertainty in the sensors, only the dynamic noise. So x of n plus 1 is A -- it could be A of n; it could be time varying or not -- times x of n, plus B of n u of n, plus w of n. Cost function, again, is the quadratic regulator. What do you think's going to happen with that problem? Someone who hasn't used it extensively in your work, what's going to change about our LQR solution? Think about stabilizing the pendulum or something, OK. Let's say we're doing optimal control on the simple pendulum linearized around the top, and now there are disturbances bouncing me around. How would you act differently given some model of disturbances in the linear case? How would you act differently if you know that somebody's going to be bumping me around with mean 0 noise? Let's keep w mean 0. How would you act differently if you're a simple pendulum around the top? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Anybody else want to weigh in? AUDIENCE: Increase your gain. RUSS TEDRAKE: What's that? AUDIENCE: Increase your gain. RUSS TEDRAKE: You might increase your gain or something like that, you'd think. Yeah, it turns out you wouldn't act differently. It's one of the most surprising results, I think, of stochastic optimal control. It turns out that you work it all out, you put your costs like this, and you don't turn your gains up because of the disturbances or anything like that. The optimal solution for the stochastic case is the same policy as the optimal solution for the deterministic case, OK. It's also true in continuous time: u equals negative R inverse B transpose S of n x. Yeah, it's the same B of n here. Did I write something funny? AUDIENCE: Did you know that that clock isn't moving? RUSS TEDRAKE: No, I didn't. Am I way off time? AUDIENCE: It's about 4 o'clock. RUSS TEDRAKE: OK. Thank you for telling me that. I thought I had time. Nice. AUDIENCE: Is the S the same? RUSS TEDRAKE: This S is the same too. Oh, sorry, sorry. Good. The S I wrote here is the same, OK. That is the same S that you get from the Riccati equation, but the total cost is bigger in the-- so the policy is the same, but-- so how much time do I really have? Do I have negative time already, or am I-- AUDIENCE: Yeah, kind of. RUSS TEDRAKE: Sorry. OK. But J of n-- I thought it was just very exciting-- is this plus some expected value of the noise. I was going to keep going for a long time, probably. OK. So it's the same S that you had before, but the cost to go that you get is actually higher, in a way that you might expect -- it actually depends on the noise and on that same S. So the S of n comes from the deterministic Riccati equation, but the cost to go gets bigger. Yeah? So that's one of the most surprising results, I think, from stochastic optimal control: in this one case, it tells you it's OK to do deterministic optimal control, yeah. In most cases, it's not OK. It won't give you the same thing. |
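A sketch of that LQG point, assuming the Control System Toolbox's dlqr is available: the gain it returns depends only on (A, B, Q, R) and not on the noise covariance, so the policy is identical to the deterministic one; only the cost you actually incur goes up. All numbers here are illustrative.

```matlab
% Sketch: the LQG result. dlqr knows nothing about the process noise, so
% the optimal feedback is the same with or without the additive Gaussian w.
A = 1.01; B = 0.1; Q = 1; R = 0.1;      % scalar example, assumed numbers
[K, S] = dlqr(A, B, Q, R);              % same K as the noiseless problem
sigma = 0.05; N = 1000; x = 1; cost = 0;
for n = 1:N
    u = -K*x;                            % the unchanged optimal policy
    cost = cost + x*Q*x + u*R*u;
    x = A*x + B*u + sigma*randn;         % noise raises cost, not the gain
end
```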
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_2_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Welcome back. If all goes well, we'll be joined in a few minutes by a simple pendulum, a little robot on which we're going to demonstrate everything. It turns out we had a little license issue at the last minute, but we're hoping to bring down and actually play with a real pendulum today. The goal of today is fairly modest. We're just going to think about pendula -- a simple pendulum. When I say a simple pendulum, I mean the mass is concentrated at the endpoint. Typically, we assume that the inertia of the rod is negligible. In many cases, I'll write the inertia in there just in case. OK. So why should we spend an entire lecture on a simple pendulum? Right? It seems boring. Well, I think you could argue that if you know what the simple pendulum does, and you know what it does when it's got complicated interaction forces, then you know everything, because most of our robots are just a bunch of pendula. But more important, I think, is that the pendulum is simple enough that we can pretty much completely understand it in a single lecture. And it's going to be an opportunity for me to introduce basically all of the topics I want to introduce in terms of nonlinear dynamics and the basic definitions that we're going to use throughout the class, and I can plot everything about it. So actually, even in research, when we're testing out new algorithms, we almost always spend a lot of time thinking about how it works on the simple pendulum. OK? It's so simple, but it's a staple. OK. So what are the dynamics of the simple pendulum? I told you how to do the Lagrangian dynamics quickly yesterday, and there's a more worked-out example in your notes. If you pop the energy terms into the Lagrangian for this system, then what you get is I theta double dot of t plus mgl sine theta equals whatever my generalized forces are, which I've been calling Q. And for today's purposes, let's assume there are two generalized torques that I care about. I want to model a damping torque, because most pendula have some damping, and I want to model a control input torque. OK? So I'm going to worry about the case where Q is of the form negative b theta dot plus some control input u of t. The damping doesn't come out of Lagrange. You think of that as an external input. OK. So all together, I theta double dot plus b theta dot plus mgl sine theta equals u. OK? All right. So this is a one-dimensional, second-order differential equation. What would it mean to solve this differential equation? To really solve this differential equation, what that would mean is that if I gave you theta at time 0 and theta dot at time 0, the initial conditions, right? And I gave you some control input over time, then I'd like you to be able to tell me theta of t and theta dot of t. Right? That would be a satisfying solution to the differential equation, if we could have that, and that's the standard way to think about solving the differential equation. It turns out for the pendulum, if what you care about is the long-term dynamics of the pendulum, that's actually not a very practical way to think about the pendulum.
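Just to make "solving the differential equation" concrete: given theta at time 0, theta dot at time 0, and a control tape u(t), a numerical integrator hands back theta(t). A minimal MATLAB sketch with assumed parameters follows; this is exactly the kind of numerical answer that, as the lecture goes on to argue, gives little long-term insight compared to the graphical methods.

```matlab
% Sketch: integrate I*thetadd + b*thetad + m*g*l*sin(theta) = u with ode45,
% for assumed parameters and u(t) = 0.
m = 1; g = 9.8; l = 1; I = m*l^2; b = 0.1;
u = @(t) 0;                                       % the control input tape
f = @(t, x) [x(2);                                % x = [theta; thetadot]
             (u(t) - b*x(2) - m*g*l*sin(x(1)))/I];
[t, x] = ode45(f, [0 10], [pi/4; 0]);             % theta(0)=pi/4, thetadot(0)=0
plot(t, x(:,1));                                   % theta as a function of time
```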
It turns out if you just try to integrate this in closed form, there's no solution in terms of elementary functions. In fact, the integral of these sine terms comes up enough that people created a different type of function, which are sort of like elementary functions. They're called elliptic integrals of the first kind, and long story short, there's not a lot of insight to be gained by actually integrating, in just a pure calculus sense, these equations. It'll give you an elliptic function that you could pop into MATLAB and make a plot, but it's not going to give you a lot of insight. And actually in the notes, for completeness, I did give you the elliptic integral form, but I won't trouble you with that on the board here. OK. So maybe there's another way. If I care about what this pendulum's going to do in the long term, if I care about where theta is going to be as time goes to infinity, then there are a bunch of other techniques I can use. OK? And the ones that I'm going to use today are graphical solution techniques. And actually the best reference for that is this book by Steve Strogatz called Nonlinear Dynamics and Chaos. Has anybody seen that book? It's a great book, a very, very readable book, just brilliant. So it's Nonlinear Dynamics and Chaos by Steve Strogatz. He's at Cornell. OK. So let's think about how we could possibly solve that system graphically, and let me start by solving a slightly simpler problem. Instead of making u a function of time, let's make it a constant torque. OK? And I'm going to look at a special case, where the system has very heavy damping, just to get started. Let's think about a special case: a very heavily damped pendulum with constant torque. OK? OK. So what do I mean by that? So in this equation, heavily damped means the viscous forces that do the damping are significant compared to the inertial forces of the pendulum. If you're in fluids, it feels like a Reynolds number argument. This would be equivalent to having a very low Reynolds number system. OK? But what I care about for this argument, I want to say that b over I is much, much greater than 1. Right? And I'm going to say that u of t is just some nominal, some constant u0. OK? AUDIENCE: [INAUDIBLE] PROFESSOR: It's not a dimensionless quantity. Right? So if you want a dimensionless Reynolds number, an analogy would be you'd need a square root of g over l or some time constant on the bottom, but this is a number with units that's greater than 1. Good catch. OK. So now, if I look at the same equation, I do u0 minus mgl sine theta equals I theta double dot plus b theta dot. If b is dramatically bigger than I, then this right-hand side looks about like just b theta dot -- the damping term dwarfs the inertial term. So I'm going to approximate this right-hand side with just b theta dot. Reasonable. OK. So the reason I'm thinking about this heavily-damped pendulum example is because it changes our second-order system into a first-order system. OK? It'll just be a way to start. And that's a general thing. At a very low Reynolds number, you can start thinking of things as being mostly first order also. OK. So now, I've got this simpler equation. I want to make one more simplification, actually, for a minute. I'm going to just forget about the fact that theta wraps around on itself. OK? So let's just ignore wrapping. It's not a big deal, but let's just keep it clean.
And to be very explicit about that, I'm going to replace theta with x. Just remember that we've ignored wrapping. So my equations now are b x dot-- excellent-- is u0 minus mgl sine x. Thanks, guys. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. OK. So we have a pendulum, but it's got a boot. So it's amazing that clocks work so well. OK. Simple, first-order equation; it's a nonlinear equation. So how do I understand the long-term behavior of that system? OK. Well, Strogatz says, if you've got a one-dimensional, first-order system, then you can think of that like a flow on a line. So let me tell you what. 1D, first-order -- we're going to treat it as a flow on a line. OK? So what I want to plot here, I'll plot it really big. I'm going to plot x over here -- we're in x-coordinates now -- and I want to plot x dot over here. OK? So this is just a simple function, x dot as a function of x. What does it look like? Well, it looks like negative sine of x, possibly shifted up or down a little bit. Right? Let me draw the no-torque-input case first. Then, it just looks like x dot is a negative sine of x, so something like this. OK? Where the height of that is mgl over b. Right? OK? So now, can you tell me quickly where the fixed points of the system are? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. So any time x dot equals 0, we have a fixed point of the system, and that's really the first dynamic concept I care about here: where, in this case, x dot equals 0. OK? And in this case, it's not too hard to solve for the 0s of that equation anyway, but graphically, it's blatantly obvious that you get a fixed point here, a fixed point here, a fixed point here -- the curve repeats every 2 pi, and the fixed points show up every pi. Right? Pretty simple. OK. But now, let's think about the stability of those fixed points, and not just in a local sense -- let's really think about the stability of those fixed points. Is this fixed point stable? Yes. OK. How can you see graphically that it's stable? AUDIENCE: The slope is negative. PROFESSOR: So locally, the slope tells me exactly that: if the slope is negative, then it's got to be stable. But even in a more global, nonlinear sense, anywhere that this curve is above the axis, that means I have a flow going to the right. Right? So everywhere in this regime, I know that the system is moving that way. Right? Everywhere in this regime, I know the flow's going this way, and so on and so forth. OK? So even without any local analysis, it's crystal clear that if I start the system somewhere over here, some amount of time later it's going to be there. Right? So I'm going to use a filled-in circle to describe that stable fixed point, and this one is going to be stable also. And then is this fixed point stable or unstable? AUDIENCE: Unstable. PROFESSOR: Unstable, right? Nearby points are going to leave that fixed point and go somewhere else. OK. But stability is such a central concept in robotics and in this class that I want to be a little careful about it. There are multiple forms of stability that we care about, even for local stability. The first definition we care about is that a fixed point can be locally stable in the sense of Lyapunov, which is often shorthanded i.s.L. A fixed point can be locally asymptotically stable, and a fixed point can be locally exponentially stable. OK. Who knows what it means to be stable in the sense of Lyapunov? Anybody have an intuitive understanding of what that means?
AUDIENCE: If we start within a certain distance of that point, it stays bounded -- it doesn't go farther away. PROFESSOR: Perfect. Yeah. So typically, I have to define some sort of distance metric, let's say just some Euclidean distance. What I want to say is that, if my initial conditions are near some point, then they're not going to go away from that point. And specifically, the way that the sense of Lyapunov is written, it says: if I want to guarantee that for all time I am within this distance, say epsilon distance, of the fixed point, then you need to be able to pick some delta, some small delta, for which if I start the system inside the delta -- delta is going to have to be less than the epsilon -- then it'll always, for all time, stay inside the epsilon ball. I'm going to write it down. OK. Let's say that a fixed point x star is stable in the sense of Lyapunov if for all epsilon there exists a delta for which, if x of 0 minus x star in some norm -- let's say a Euclidean distance or something like that -- is less than delta, then for all t, x of t minus x star is less than epsilon. Does that make sense? OK. So we've got a simple pendulum plot that tells us something about stability here. Is this fixed point stable in the sense of Lyapunov? Yeah. Right? It's stable in the sense of Lyapunov. Let's say you tell me that for all time I want this thing to be within this epsilon distance. Right? Then you can pick anything, any delta smaller than that epsilon, and I know that it's going to stay inside that ball. Right? So in fact in this one, you could choose delta as epsilon, and it would be fine. OK? So these flows on a line are certainly sufficient for checking stability in the sense of Lyapunov. People OK with that? Good. OK. What about asymptotically stable? What does it mean intuitively to be asymptotically stable? AUDIENCE: [INAUDIBLE] PROFESSOR: Good. So a system is asymptotically stable if, as t goes to infinity, x is actually going to be at the fixed point. If you start in a neighborhood, then as time goes to infinity, x is actually going to get to the point. So if x0 equals x star plus some epsilon -- I'm saying epsilon and delta meaning things that are small, because when I talk about local stability, I mean these small things. If x0 starts a small distance away from the fixed point, then x at infinity equals the fixed point. OK. So can we tell from our plots that this thing is asymptotically stable? What's that? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. I think you can. I think that this system we know is going to go to this, as time goes to infinity. I think that's quite OK. Asymptotic stability is considered a stricter form of stability than stability in the sense of Lyapunov. Right? OK. What about exponential stability? Exponential stability means not just that I'm going to get there, but I'm going to get there at some rate, at some exponential rate. So if x0 is x star plus some epsilon, exponential stability implies that x of t minus x star is less than C e to the minus alpha t, for some C, alpha greater than 0. OK. Then, I'm going to get there in exponential fashion, at least as fast as an exponential. So can you tell exponential stability from this? The point of these methods is not to talk about the rate of something converging. So I think the first answer is not really. But if you think about it, if you were to draw some line, if something had a constant slope here, then that system would converge exponentially fast.
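For reference, the three local stability definitions from the board, collected in one place; this just restates what was said, with the norm being the distance metric chosen above.

```latex
\begin{align*}
\text{stable i.s.L.:} \quad
  & \forall \epsilon>0,\ \exists \delta>0:\
    \|x(0)-x^*\|<\delta \;\Rightarrow\; \|x(t)-x^*\|<\epsilon \ \ \forall t \\
\text{asymptotically stable:} \quad
  & \|x(0)-x^*\|<\delta \;\Rightarrow\; \lim_{t\to\infty} x(t)=x^* \\
\text{exponentially stable:} \quad
  & \|x(0)-x^*\|<\delta \;\Rightarrow\; \|x(t)-x^*\|<C e^{-\alpha t}
    \ \text{for some } C,\alpha>0
\end{align*}
```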
So I think as long as your curve is bounded by some line like that, then that would satisfy the time-constant criterion. OK? But we're going to use those different definitions throughout the class, so I want to make sure that they're clear. OK. So we said fixed points. We talked a little bit about local stability. Let's talk about another important concept, which is the basins of attraction. OK? So for some fixed point x star -- some stable fixed point x star -- if I ask what the basin of attraction is, that means the set of initial conditions which will get me to this fixed point. Right? It's the region of initial conditions, the set of initial conditions, for which x of t, as t goes to infinity, equals x star. So what's the basin of attraction of that fixed point? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Good. Right? OK? So this entire region here -- not including those points, but this entire region here -- is the basin of attraction of that fixed point, and these borders here, these lines which separate the basins of attraction, they're called the separatrix. Right? Does it look like it's working? AUDIENCE: It will. PROFESSOR: OK. OK. So let's just think about this for a second here. So I've got an overdamped pendulum. OK? This is the fixed point at 0. My coordinate system is set so 0 is the bottom. Right? I don't have to use my arm. I've got a pendulum right here, and even if it's off-- can I move it? Yeah. OK. So this is state equals 0. We just said that, if the system's overdamped, then we've got a stable fixed point at the bottom. I think we can all believe that. If it was overdamped, it would just go like this. Right? This is an underdamped system, but the first-order dynamics will take it to this stable fixed point. OK? The separatrix of that stable fixed point are the unstable fixed points, up here. Right? So an overdamped pendulum, if it's right here, will come to rest here. Right? If it's right here, it'll come to rest on the other side. Right? That's the basin of attraction. That's the separatrix. I think it makes total sense. OK. What happens now, if we start adding control torque to this overdamped pendulum? It's just this constant control torque -- what happens? AUDIENCE: [INAUDIBLE] up or down. PROFESSOR: Good. Yeah. So remember, I'm just working off this equation here. That's going to move that whole line up or down. All right? So what's that going to do to the fixed points? What happens if I do u0 equals mgl over 2b? You see where I'm going with that? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. It might be that simple. It could be that. I didn't think it out that far. OK. So with that u0, this curve is going to shift up. Right? So it's going to be some sine wave like this. OK. And the fixed points are going to move together. Right? So I've got fixed points like this, fixed points like this. AUDIENCE: [INAUDIBLE] PROFESSOR: Why do you say that? AUDIENCE: Well, just because it gets divided by b. PROFESSOR: Oh, it does get-- you're right. Good call. Yep. Thank you. Just mgl over 2. Good. Yep. OK. So the fixed points start moving together. Do you believe that? Do you believe that in the physical interpretation of the pendulum? This one's still going to be stable. We could see that quickly. This one's going to be unstable. Right? So if I apply a constant torque -- which I will do as soon as Zack gives me a green light -- if I apply constant, positive torque, it's going to start moving the fixed point like this, as in the sketch below. OK?
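A minimal MATLAB sketch of the flow-on-a-line pictures being drawn here: plot x dot = (u0 - mgl sin x)/b for a few constant torques and read off how the stable and unstable fixed points, the zero crossings, move together as u0 grows. Parameter values are made up.

```matlab
% Sketch: flow on a line for the overdamped pendulum, b*xdot = u0 - m*g*l*sin(x).
m = 1; g = 9.8; l = 1; b = 10;
xs = linspace(-2*pi, 2*pi, 500);
hold on;
for u0 = [0, 0.5*m*g*l, 0.95*m*g*l]        % increasing constant torque
    plot(xs, (u0 - m*g*l*sin(xs))/b);      % zero crossings are fixed points
end
plot(xs, 0*xs, 'k');                       % above the axis: flow to the right
xlabel('x'); ylabel('xdot');               % fixed points merge as u0 -> m*g*l
```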
The unstable fixed point is also going to move. Right? It's going to be coming down like this, and the basins of attraction changed. The separatrix moved. So if this system's here, it's actually going to go around to this, and likewise. OK? It's nice you can see that so easily from these little plots. OK, and what happens if I put in u0 equals 2mgl? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Exactly. Right? I won't do that, because I might hurt Zack. But past the critical point where this whole curve is above the axis, the thing's just going to move this way forever. OK. So just thinking about these flows on a line, you could start seeing what first-order, one-dimensional systems can do. So you know they can go to a fixed point. We just saw they can go to infinity, if u0 is 2mgl. Can they ever oscillate? I don't see how they could. Right? I said, it's either going this way, or it's going this way. There are no oscillations in a first-order system, and the mechanical engineers know that, but this is a graphical way to see what that means. So actually, it turns out the only thing that a first-order, one-dimensional system can do is end up at a fixed point or blow up. Right? There can be a lot of fixed points. Right? It could be a flat line. It could be stable anywhere. That's fine, but it'll always either end up at a fixed point, or it'll blow up. OK? All right. So it's a general tool. It's certainly good for things other than pendula. Let me just give one other nonlinear system example that's one-dimensional and first-order, so we can think about a few more concepts. OK. So this one, just another example, is actually called a nonlinear autapse. Anybody have any guess what the heck that means? Even a crazy guess? It's actually a model of a neuron. OK? I did my PhD with the neuroscientists, so I often think about things that are like neurons. OK? If you've ever seen neural networks -- dynamic neural networks -- a pretty common representation of a neural network is with one of these sigmoid functions that are weighted by some parameter w, let's say, a weight parameter, but that's inconsequential. All you have to care about here is that I've got a first-order, nonlinear system with a parameter w. OK? And again graphically, we can tell you everything you need to know about the system pretty quickly. OK? So who knows what tanh looks like? I just said it's a sigmoid. Right? So a tanh, it goes from negative 1 to 1. This is x. This is tanh of wx. For w equals 1, it turns out you have a slope of 1 here, and you go up, and you asymptote like this. OK? That's w equals 1. For w a lot greater than 1, you're even steeper, but you get to the same place. So let's say that's w equals 3. And then if you're less than 1, it's going to be even shallower. This is why I bring sidewalk chalk to class. OK? So let's say that's w equals 0.5. OK. So now, what does the system x dot equals negative x plus tanh of wx look like? If I want to actually draw my flow on the line, what's that going to look like? If I want to plot x versus x dot here, OK, well, I could plot both of them independently. So I know how x dot equals negative x looks. I know how tanh looks. That function is just going to be this sigmoid added onto the line x dot equals negative x. So what that means is for w equals 1, I have a system that comes in like this and goes like that.
For w equals 3, I have a system that goes like this, and for w equals 1/2, I have a system that goes like this. OK? Does that make sense? So the reason I chose this system is I want to tell you quickly about bifurcations and how to make bifurcation diagrams. OK? So where are the fixed points of this system? AUDIENCE: [INAUDIBLE] PROFESSOR: Good. So I definitely have a fixed point here. Is it stable or unstable? AUDIENCE: Unstable? PROFESSOR: It depends on w, though. Right? In one case, it's unstable, and in one case, it's stable. OK? And then in some cases -- in the blue case -- I have fixed points here, and in the red case, I don't. Right? So this is a system in which, as I change my parameter w, I change the number of fixed points, and I change the stability of those fixed points. OK? It's one of the simpler systems where you see that. So a change in the number of fixed points, as you vary a parameter, is called a bifurcation. OK? And you can make bifurcation diagrams, where for a system like this, the x-axis is the parameter you're changing, and the y-axis is the fixed point. OK? So if w is less than 1, what did we say? We've got a fixed point at the origin, and is it stable or unstable? AUDIENCE: Stable? PROFESSOR: Stable. OK? So there's a critical point here, where w equals 1. We know that now, because that's where the slope of tanh is 1. And it turns out for all w less than 1, I have a stable fixed point at the origin. So I use a solid line to denote a stable fixed point and a dashed line for an unstable fixed point. OK? And then for w greater than 1, what do I have? I've got three fixed points. Which ones are stable? Which one's unstable? I just used my plurals in a way that could only imply one solution. AUDIENCE: The middle one is not as stable. PROFESSOR: The middle one is not stable -- it's unstable -- and the outside ones are stable. OK? And it turns out, if you vary w smoothly, then you get this. OK? Where this branch asymptotes to something like 1 -- it's not quite 1 for finite w; it's whatever solves x equals tanh of wx. OK? It asymptotes like this, and this fixed point in the middle remains, but it becomes unstable. OK? So bifurcations are a critical concept in nonlinear dynamics -- this gives us a crash course. This is actually called a pitchfork bifurcation, for obvious reasons. Right? And that's actually a pretty common one. You'll run into many others. There are saddle-node bifurcations, and there are also, I think, just strangely named ones. I think there's a blue sky bifurcation. Pretty much any name you look for, you can find a bifurcation named after it. OK. So good. I think we know a lot of what there is to know about first-order, nonlinear, one-dimensional systems. OK? I think in a lot of classes, we're trained to think linear systems, linear systems, linear systems -- I can do everything in linear. It turns out, you can do everything in a nonlinear system too, if it's first-order and one-dimensional. But that's an important axis that we don't see too much, I think, and it helps to know what all these concepts are. I think Zack says we can now do-- can we do the overdamped? ZACK: Sure. How overdamped do you want it? PROFESSOR: We wanted gravity to be at 0.8. ZACK: OK. PROFESSOR: And we wanted the damping to be, I think-- I'm sorry. Damping is negative 8, and gravity was positive 0.85. ZACK: OK. PROFESSOR: OK. So what do we have here? We've got a big motor, a little pendulum. Yeah? ZACK: Could you move it into [INAUDIBLE]? PROFESSOR: Can I move it? Yeah?
ZACK: As long as we don't-- PROFESSOR: Don't pull the power. Right? Good. That's fine. OK. Big motor, DC motor. There's a gearbox here, but we're going to be commanding current, which is just like applying that torque there, modulo some errors in the gearbox, and just an otherwise passive pendulum. OK? So Zack has written the basic system identification, so we know what the mass is, what the damping is. It's not quite the simple damping I showed you, but it's not too much worse. And now, he can do things like cancel out. He can change the damping. He can remove the damping. He can add more damping. He can change gravity. It's just the feedback linearization game we talked about yesterday. All right. So we just said we're going to make it an overdamped system here. So there you go. Now, overdamped is actually the hardest one I'm going to show today from the control point of view, because you get chatter like crazy when you-- [BUZZING] See? Because there's an encoder that's discrete, and we're sampling it. OK. So that's the overdamped case. Now, can you give me a little bit of a constant torque? ZACK: Sure. PROFESSOR: Like 0.1. ZACK: Yeah. PROFESSOR: I changed it to the gain. ZACK: I know. I'm trying to figure out where that-- PROFESSOR: It's-- ZACK: There. PROFESSOR: I think it's probably that torque. Yeah. ZACK: Yeah. OK. How much do you want? PROFESSOR: 0.1. ZACK: OK. PROFESSOR: Yeah. OK. So I applied 0.1 of torque. Actually, we've got a sign error compared to my things, but that's OK. Now, I've got a fixed point here. Right? So the same overdamped pendulum, it's stable here. There's a little bit of stiction in here, so it's not going exactly. OK. The other place we feel it is right up there. That's the other fixed point. Right? If I put it over here, then that constant torque moves me right over there, just like I said. OK? It's all good. Let's just-- we can play with it a little bit now. So give me maybe twice the gravity or something like that. ZACK: OK. PROFESSOR: Right? What's going to happen if I double gravity? AUDIENCE: Nothing. PROFESSOR: What's that? AUDIENCE: Nothing. PROFESSOR: Nothing. ZACK: Let me turn this damping gain back down. PROFESSOR: Yeah. Good idea. OK? ZACK: OK, and now we want twice gravity? OK. PROFESSOR: OK. Changes the natural frequency, right? We're going to see that in a second. I didn't actually do all this second-order stuff yet. Still got high damping in there though. ZACK: Yeah. Oh. Yeah. PROFESSOR: OK. ZACK: Take that out. PROFESSOR: Cool. So we're going to play with that again, when I do the second-order version here. But at least I hope you believe, wherever it went, that the constant-torque overdamped case tells me everything I need to know about the simple pendulum, so that's kind of cool. OK. Let's get rid of this overdamped constraint, which is the only reason it was first order, and let's get to the second-order case. OK? But before we do the whole dynamics, we'll make another quick assumption. Let's do a different special case. I could have left that, I guess. Let's do an undamped pendulum, and we'll start with 0 torque. OK. So b equals 0, and u0 equals 0. OK? All right. So what do those equations look like? Now, I've just got I theta double dot is negative mgl sine theta. Right? OK. So how am I going to graphically investigate this second-order system? Well now, there's two things I care about evolving over time. Right? I need to know what theta does over time, but I also need to know what theta dot does over time. OK?
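In code, the state-space form that's about to be drawn is just this pair of coupled first-order equations. A minimal Python sketch (the inertia and pendulum parameters are arbitrary stand-ins, not the values of the classroom rig):

import numpy as np
from scipy.integrate import solve_ivp

I_, m, g, l = 1.0, 1.0, 9.8, 1.0   # illustrative parameters

def undamped_pendulum(t, x):
    # State x = [theta, theta_dot]; dynamics I*theta_ddot = -m*g*l*sin(theta).
    theta, theta_dot = x
    return [theta_dot, -(m * g * l / I_) * np.sin(theta)]

# One trajectory from zero angle and some initial velocity; plotting
# theta against theta_dot traces out one curve of the phase plot.
sol = solve_ivp(undamped_pendulum, (0.0, 10.0), [0.0, 2.0], max_step=0.01)
theta, theta_dot = sol.y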
So I'm going to need a two-dimensional plot, and this is the phase plot. OK? So let's make a phase plot. OK. So a phase plot, what I'm going to plot is theta versus theta dot, and what I'm going to plot is not-- my separatrix doesn't want to go away. What I'm going to plot on this is a vector plot. OK? I'm going to plot-- I have two equations here. Right? This second-order system is equivalent to two equations. One is theta dot, looks a little silly to write this, but you can think of a second-order system as coupled first-order systems of two variables here, and this one is negative mgl sine theta. OK? So what I'm going to plot here is a vector which is theta dot versus theta double dot as a function of theta and theta dot. I see angry looks. Right? I can write this as, if I want to think of this as a first-order system, I'm going to say that x is theta, theta dot. Right? And now, I can write x dot is some function of x, a first-order equation which describes this second-order system. OK? It's a vector equation, and what I want to plot is, for all x, I want to plot x dot. x is 2 by 1, and it happens to be theta dot, theta double dot. And it's a function of a 2 by 1, so I'm going to make a vector plot on this two-dimensional system. OK? Maybe as I start drawing things, it'll become crystal clear. OK. So given this equation here for the undamped pendulum, let's plot some of the vectors. OK? So it turns out that it's simple to think about it along the line of theta dot equals 0. Let's think about that. I have a vector whose y component is going to be 0, and its x component-- sorry. This component is going to be 0. I should call it the x component, and its y component is going to be negative mgl sine theta. OK? So at 0, I've got nothing. Here, I've got a little vector going down. Its y component is this. It gets back to another 0. So if I plot this vector field along that line, I get this. Right? OK? If I plot it up at some positive-- or let's even plot it now along the other line. So if theta is 0, then I get a thing that's only got an x component. This term is 0, and it's actually a linear thing, so it just looks like this. You with me on that? OK? And if I plot something in between some positive x, some positive theta, positive theta dot, then I'm going to get a combination of these two things. I'm going to get a vector like this. Right? If you plot that through, or if you hand it to Matlab to plot it through, for instance, then you can again graphically quickly interrogate the nonlinear dynamics of the system. OK? So in this phase plot, if I start with some positive velocity and 0 angle, then I'm going to get some angle. Right? I'm going to go around until I get to the theta dot equals 0. I go around. That could have been more circular, but you get the idea. OK. It turns out in here, things really do look like circles around the origin, and they should be concentric. Out here, the nonlinearity shows up a little bit more. You get these eyeball looking things. OK? All right. So what does that say? So if I start my pendulum-- is it-- ZACK: It's ready. PROFESSOR: OK. If I start my pendulum with 0 position and some velocity, is it 0 damping? [INAUDIBLE] Good man. OK. Then, what's it going to do? It's going to start oscillating forever. Right? As close as we can to canceling out damping by measuring it and subtracting it. OK? So it's just going around. It's got some positive theta, negative theta dot, positive theta dot, negative theta dot, and it was pretty close.
I'll give you a better chance by going like that. OK? And I can really test our model by starting it up here. That's pretty good. Right? So if I start way up here, it's going to take these orbits. OK? And it turns out, if I were to wrap the pendulum around once-- again, now I'm testing the encoder counts. ZACK: [INAUDIBLE] PROFESSOR: OK. Well then, I would do the same thing but over by that other fixed point. Right? So this whole pattern repeats over here. Right? ZACK: [INAUDIBLE] PROFESSOR: Yep. ZACK: [INAUDIBLE] PROFESSOR: Oh, great. We should plug that in, which is going to be mechanically impossible. OK. ZACK: There we go. PROFESSOR: You can sort of see there's an eyeball there. Right? This is the real data from the encoders. OK? So I moved it around there, and then I jerked it over here. I got an orbit there, moved it over here, and got that nonlinear orbit. OK? That's the exact same phase plot that we just did. OK. So now, where are the fixed points of the system? On a phase plot, two variables, where can the fixed points even be? Can I have a fixed point up here? If I have a velocity, I don't have a fixed point. So right away, you know you're only going to be looking for fixed points on the x-axis here. Right? And again, on the x-axis, it just reduces to the sine. So I've got a fixed point here. I've got a fixed point at pi. I've got a fixed point at 2 pi. Right? OK. Are they stable? Is this one stable? You should ask, what do I mean? In what sense? Good. Is it asymptotically stable? AUDIENCE: No. PROFESSOR: No. Right? If I start it here, it's just going to go around and around and around forever. It's not going to get to the point. Is it stable in the sense of Lyapunov? Yes. Good. And this one is not stable at all. Right? OK. Cool. All right. Let's add a little damping in. What happens if I add my damping back in but leave my torque off? What do I get then? I want to ask you one more question. So this thing is a trajectory. I didn't tell you carefully what-- this is a closed orbit. Right? Is that closed orbit stable? AUDIENCE: What do you mean? PROFESSOR: That's a fair question. What do I mean? So we have to define orbital stability, and we will. But just intuitively, what do you feel about-- do you think that trajectories that are near that are going to get to that? No, right? This is the same sort of marginal stability case that you see here. So if there was a sense-of-Lyapunov sort of definition for the orbit, then we might say that. It's not the limit cycle stability we're going to talk about later-- it's not going to converge to that trajectory. OK? If I'm on this orbit, and I give it a little push, then it'll start moving in this different orbit. OK? OK. Now, what happens if I do the damping case? So my equations are-- I think it's just unsatisfying to see that. Let me just write x dot is still going to be theta dot out here, but now it's going to be negative b theta dot minus mgl sine theta. Right? It turns out, if I make that same plot, at the origin all is well. It's the same thing. Right? I still get my sine wave dynamics at the origin. OK, but what happens up here? Now, this is still theta versus theta dot phase plot, but now when I have 0 theta, I have the same x component, theta dot, but now I have some negative y component. Yeah? So I'm going to get a vector that looks like this, and these things go down like that. OK? And in general, it looks something like this, and what are my trajectories going to look like? AUDIENCE: Spirals. PROFESSOR: Spirals, good. Right?
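Continuing the little Python sketch from above, adding the damping term is a one-line change, and integrating from a large initial angle traces out exactly the spiral just predicted (b is an arbitrary illustrative damping value):

import numpy as np
from scipy.integrate import solve_ivp

I_, m, g, l, b = 1.0, 1.0, 9.8, 1.0, 0.5   # illustrative parameters

def damped_pendulum(t, x):
    # x = [theta, theta_dot]; I*theta_ddot = -b*theta_dot - m*g*l*sin(theta).
    theta, theta_dot = x
    return [theta_dot, (-b * theta_dot - m * g * l * np.sin(theta)) / I_]

sol = solve_ivp(damped_pendulum, (0.0, 30.0), [3.0, 0.0], max_step=0.01)
# The (theta, theta_dot) curve spirals in to the stable fixed point at the origin.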
You might want to restart it. I might have moved it too quick. It's got to grab the encoder 0. ZACK: OK. PROFESSOR: OK. That's no damping still, right? ZACK: Yep. PROFESSOR: Let's put some damping in there. ZACK: No, wait. That's with normal damping. PROFESSOR: Normal damping. ZACK: Yeah, not doing anything to it. PROFESSOR: OK, so this is normal damping, just the damping from the motor, the friction and the gearbox mostly probably. OK. Let's see a phase plot. ZACK: OK. Let me get the data off it. PROFESSOR: Yep. ZACK: There we go. PROFESSOR: OK. So you'll see a few blips in the plot. That's when the encoders are slipping away, but you can see a pretty spirally trajectory. I think if we triple the damping or something, it'll look more compelling. ZACK: Yeah. PROFESSOR: Let's try that. Yeah? ZACK: OK. PROFESSOR: OK. So while he's setting that up, what happens to the fixed point over here? It's going to have some sort of dynamics over here. OK? Right? It's going to have spiral dynamics down here, but what happens if I start with a really large velocity or a really large negative velocity, let's say, and 0. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Right? So this is my unstable equilibrium. If I'm coming down, and I don't make it to that unstable equilibrium, then it will actually tip back up, and I'll go in and spiral to this fixed point. Right? So let's see how-- ZACK: You going to give that a try? PROFESSOR: Let's do just the normal-- ZACK: OK. It's ready to go. PROFESSOR: OK. OK. That was the benign case. We'll try the high energy case in just a second here. ZACK: Let me get that data [INAUDIBLE].. PROFESSOR: Yeah. ZACK: There we go. PROFESSOR: OK. Minus the little blips, you can see that's the spiral going in. Now, let's see if I can-- ZACK: Let me restart the-- PROFESSOR: Sorry. Let me leave it. ZACK: OK. PROFESSOR: OK? Yep. So it looks like it hit the brakes here and came back down. You might think it's like a discontinuity in the trajectory or something, but hopefully, it'll look exactly like what I pictured up on the board. Yep, again, minus the encoders slipping, and this is me lifting it. It goes around, finds it, and stabilizes into that fixed point. OK? All right. So finally now, for those of you that are quite familiar with dynamics, thank you for listening. Let's think about controlling this system. OK? What does it mean to control this system? Let's say, I put in u again. Forget about constant u. We're past that now. Let's say, in general, I can make u be a function of the theta and theta dot. Right? And my equations now are going to have this plus u, which could in general be a function of theta and theta dot. OK? What is it going to do to my phase plot? Doesn't change this component, right? Which means things are basically still going to go around. Things always go around this way. That's how it works. OK? But what I can do is I can move this guy up or down. Right? That's all I can do. Now, in a lot of cases, that's everything I'd want to do, so maybe we should. You want to do the feedback linearization of gravity example? ZACK: Yeah. PROFESSOR: OK. ZACK: Give me a couple seconds. PROFESSOR: Sure. Right? So it turns out, that's enough, if you think about what this plot looks like, even if I have a damped thing. So for instance, if I do my feedback linearization, and I make this function-- let me be a little bit more careful, I'll call this pi. I'll say u is pi of theta, theta dot. That's the notation we use most of the time here. Let's say, I just made it b theta dot.
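As a sketch of that controller family in Python (the parameter values stand in for whatever the system identification produced; these numbers are placeholders):

import numpy as np

b, m, g, l = 0.5, 1.0, 9.8, 1.0   # identified plant parameters (illustrative)

def pi_cancel_damping(theta, theta_dot):
    # u = b*theta_dot exactly cancels the -b*theta_dot term in
    # I*theta_ddot = -b*theta_dot - m*g*l*sin(theta) + u.
    return b * theta_dot

def pi_flip_gravity(theta, theta_dot):
    # Damping cancellation plus 2*m*g*l*sin(theta): the closed-loop
    # system behaves like an undamped, upside-down pendulum.
    return b * theta_dot + 2.0 * m * g * l * np.sin(theta)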
That b theta dot term cancels out this component of the thing, flattens it out, and gets me back to that plot. That's actually exactly how we did the 0 damping case there. Right? We just canceled out the damping to make that plot, because the real thing has damping. OK? And if we do the feedback linearization, we can actually do plus 2mgl sine theta. Right? If I make the controller look like that then, lo and behold, the system's now an upside-down pendulum. Right? Same thing I showed you last time. This time it's on a piece of metal, so that's more impressive. Right? OK? OK. So here's the name of the game, and I want you to think about this between now and next week, let's say. Let's say, I want to put a fixed point. You can only put fixed points along here. Let's say, I want to stabilize, turn the system into a system that's stable at this fixed point, unstable here. I want all trajectories to end here, and I want to do it by making minimal changes to that vector, by adding minimal torque. OK? So you get this geometric phase plot view of the world now. What would you do to those vectors to try to get all trajectories to get there with minimal torque? Or another nice version of the problem is let's say I have a bounded torque. Let's say, I don't care about being minimal, but let's say I just have a motor that can only put out so many newton meters. Right? Let's say, that means there's a limit to how much I can move those vectors. OK? How do I shape those vectors in order to guide all system trajectories where I want to go? OK? It's when those constraints start coming into play that you can't just change these vectors to be whatever you want. You have to think about pushing them and pulling them. Right? And that's the under-actuated robotics case, where you're thinking about moving your dynamics around instead of squashing them. OK? And for the computer scientists out there, I'd actually love to see what you come up with. Think about-- write a program that could try, in some minimal way, to stabilize that fixed point on the vector field. I think there's a lot of ways to do it. It'd be interesting to see what you come up with. OK? That's the name of the game. It turns out there's a right way to formulate those problems, I think, and we're going to talk about optimal control next week. The most straightforward answer I have for the computer science world is that, if I can describe some function on my vector field that I want to minimize, some cost function, I'll call it g of x-- possibly it depends on u too, if I want to penalize actions-- and let's say I care about the long-term cost over some trajectory. We're going to call this thing J, the integral over the trajectory of g of x of t, u of t. If you can define a cost function that describes the thing I just said in this form, then that's going to turn everything from this fuzzy dynamics problem into a strict computational problem, and we can use all our favorite optimization tools to solve it. OK? We're going to hammer that out big time next week. Good. I think you know most of what there is to know about the pendulum. Anybody have any questions? You know fixed points. You know stability in the sense of Lyapunov, asymptotic stability, exponential stability, basins of attraction, separatrix. What else? Closed orbits, you've got it all. OK? That's mostly it. Let me just say a couple of the administrative details that I didn't get to last time. Your first problem set is posted tonight. It's going to be due-- problem sets will be due basically every other week, on Tuesdays.
It happens that the first Tuesday that it's going to be due is one of those weird Tuesdays, but just to keep the clock on schedule, we're going to ask for an online submission on that Tuesday. They'll be every two weeks. There are six of them throughout the term. We do have a midterm in the class. It's just before spring break, I think, and so you can enjoy spring break. We don't have a final exam in the class. We're going to do final projects instead. So I'd like you to start thinking soon about final projects, and feel free to ask me. Homeworks, you're fine to work on as a group. I don't care, whatever it takes to learn the material. Everybody should turn in their own problem set. The midterm, you'll work by yourself. The final project, you can team up with somebody, as long as the contributions are clear from each person, and that's totally fine with me. Throughout the class, as you might guess, we're going to have Matlab simulations and physical robots trying to show the basic phenomena. Many of those will be available for you for your final projects, if you so choose. Maybe I'll ask you to show me that something's stable on a simulation, before we put it on the robot or something like that, but it should be fun. I think it's such a young field that we can definitely do publication quality final projects. So those of you that are in robotics, think about writing an [INAUDIBLE] paper for the final project. I think that's it. I think we've got-- yeah, the PDFs from lecture one were online sometime yesterday. Today's will be online immediately, and let me know what you think about this [INAUDIBLE] thing. I saw one comment last night, anonymous, but that's good, and I'll see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_23_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, so your project presentations-- we drew the order randomly last time. Phillip, unfortunately, got the short straw. He's presenting today. Don't feel too bad for him. It's because he's going to [INAUDIBLE] next week, to ICRA, right? So the plan is eight minute talks plus two minutes of questions and transitions between speakers. You're free to use your own laptop. Hopefully it'll just plug in and work. You're also free to send me slides so that I can-- I'll put them on my laptop, whichever is better for you. If you have movies or something, send them to me a few minutes before so I can make sure they play, but whatever is better for you. I'd like to ask you to try to sort of-- in eight minutes-- I mean, it's actually really hard to give an eight minute talk, right? It's much easier to give an hour and a half lecture. You can just ramble on. But you should try to sort of efficiently cover the things that I-- you know, the most important things to say are roughly, of course, describing the problem you chose, and it's always good to say for a minute why it's important or interesting. I'd like you to describe the technical approach you took-- you chose, and why you chose it. And then quick results, I mean, as much as you can get in, and if you have any sort of interesting implementation details that came up, that reared their heads, that's always fun to say. You're not going to be able to fit much into eight minutes. But tell us what you've done so far. And it's typically pretty fun. You guys are not going to believe how cool the projects you've all picked are. It's going to be a lot of fun. We did-- so we're in this room Tuesday and Thursday. I've asked them in the-- because there's a real chance that we might run a little bit over class, I asked them if we could try to keep the room for the next hour. They actually haven't told me that I can yet. But hopefully we'll be able to stay here till 4:15 or something if we need to, to finish. And the report-- I said it before, but the report is-- is it working? AUDIENCE: No, the same thing has-- RUSS TEDRAKE: Why don't you power it down. That's when it worked last time. Yeah. It's due May 21st, I said, right? That's a Thursday. So if you choose the double column-- so I like to think of them as ICRA format; that, for instance, would be perfect. That would be the robotics conference. But I'm not asking for a novel, just a quick presentation of what you did. And I think you should have the same points as in the presentation, but you'll have a little bit more room to tell about the implementation. AUDIENCE: Is there a monetary penalty for extra pages? RUSS TEDRAKE: No, not at all. That's right. No publishing costs, and you can use color figures as much as you'd like. Any other questions about the project? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: I'm just giving you a ballpark six to eight pages. If you come in at five, I'm not going to give you a huge deduction if you say everything you needed to say. And you're free to-- it at least means there's no publication cost for extra pages, I guess.
And six to eight pages double column in ICRA format is more than six to eight pages the other way. So it's a rough guideline. I'm just telling you, you know, roughly-- AUDIENCE: So we can just reformat it [INAUDIBLE] six to eight pages? RUSS TEDRAKE: That's what I'm trying to avoid. [LAUGHTER] Thank you. I'm just trying to give you a sense of the amount of content that should be in there. And you can format it as you would like. OK, excellent, so last official lecture-- so I wanted to, as promised, end my part of the class, at least, on some success stories of how all this stuff works and has worked in the research world over the last 15 years. And so the ones I picked out to talk about were Emilio Frazzoli's helicopter motion planning, which-- so Emilio is upstairs in LIDS. Some of the really primal-- some of the first good work on learning in robots was by Chris Atkeson. And Stefan Schaal was his student at the time, who is now quite established, obviously, at USC. And then I'll tell you a little bit about our perching project depending on how much time we have left, OK? So let me start off maybe with the lights on telling you some of the big ideas from Emilio's planning for the helicopters. This was actually Emilio's thesis work. And they've continued to use-- nobody here's from Emilio's group, are they? No. Anybody know Emilio? Yeah? He's on your committee, OK perfect. That's good. So you know this work already. OK, correct me if I say anything wrong. OK, so the big challenge that Emilio wanted to address in his thesis was-- well, he does everything very, very rigorously and formally. So he's got many theoretical contributions in his thesis. But the experiment that he wanted to get working was we've got a helicopter, let's say an RC class helicopter. And I want to drive it from point A to point B through potentially a very cluttered environment where the obstacles are even moving in real time. So he really wanted a real time lightweight planning algorithm that was as close to optimal as possible in a minimum time optimality sense but that could run on a-- you know, with 1 megabyte of memory or something on a computer that's on board the helicopter. And what he ended up doing was a beautiful combination of feedback control and randomized motion planning. And I want to tell you some of those details. OK, so the first big idea in the-- if you talk to Emilio about doing motion planning and you say I've got some nonlinear system f of x, u, and I want to find a path from start to the finish, and I've got obstacles, whatever, the standard motion planning problem-- but these obstacles could be moving-- the first thing he would immediately say and he has said to me is never ever search over u's. That just seems like a bad idea in general. The RRT that we talked about, the naive implementation, is pick a point at random and then try to find the closest point, and then find the u which takes you as close as possible to that. He says, if you're doing that, you're not thinking about feedback stabilization, for instance, and you're working potentially on a harder dynamical system than you need to be working with. So he advocates first designing a class of feedback policies. Let's say it's-- they could be time varying or time invariant, right? And they have maybe some-- they're parameterized, for instance, by some desired point. And these feedback policies-- ideally you generate them carefully and have some provable stability guarantees about them.
And instead, then, of picking some u, which could potentially mean working on a system that's very unstable, open loop unstable-- it might require tight integration, all these things-- instead, you should sample over the parameters of your feedback policy. So, for instance-- I'll make it more specific in a second, but for instance, if I take a point here, I find my closest point on the map, and I want to grow towards it, instead of trying a bunch of u's, I say, what if I ran pi 1 with x desired of this? How close-- x desired equals, let's say, this point, how close would it get me? Or let's say I run-- or maybe pi 10 would have been the closest one. I'll pick the closest feedback controller that's parameterized that's going to try to get me close to that point. OK, there's a lot of benefits to doing that. In a very real way, it's talking-- we talked about feedback motion planning as having advantages. You know, if I-- I could come up with a trajectory here in the open loop sense that, if I went to stabilize it, was not stabilizable. This avoids that shortcoming by actually searching in the space of feedback laws. And it also has advantages, for instance, that the dynamical system you're simulating is probably stable. So you could take larger steps, integration steps. You don't have to integrate very carefully if you've got a stable system. So Emilio has advocated this sort of trim and maneuver view of the world in motion planning, especially on the helicopters. So each of these feedback laws that he considers at the motion planning time is some carefully designed controller. They're of two types. One of them is a trim controller. A trim controller for a helicopter, for instance, would be the hover, or something that flies forward with a steady speed, something that banks with a steady speed. All of these things are trims where, in the relative frame of the helicopter, it's a fixed point. All right, so-- and Emilio was very careful in his work to point out that you can make heavy use of symmetries and invariants in those dynamics. So, for instance, the horizontal position of the helicopter doesn't really matter. If I know how to go forward from x equals 0 in a trim condition, then I also know how to go forward from an x equals 1 position, right? If I know how to bank at a certain speed, I can do that from any x location. The location of the helicopter doesn't affect that feedback policy, OK? All right, so if you have a few-- what that allows you to do, essentially, is come up with a handful of trim controllers which do a lot of things. In fact, what I think he does in the end is he comes up with around-- I think 25 was the number. I remember-- I brought his paper just in case. Yeah, so he comes up with 25 trim trajectories, which in his case were level flight, no sideslip, with forward velocities tiled over, just to give you a sense here, 0, 1.25, 2.5, 5, and 10 meters per second, and then angular heading rates that are sort of similar, from negative 1, negative 0.5, 0, 0.5, and 1 radians per second. OK, so if he knows how to do those 25 things, every combination of those, then he's got a pretty darn good library of trim behaviors that he can use as possible controllers that he could evaluate on his system. Now those are the steady state behaviors. And in order to transition between trims, he does these maneuvers, which are finite time stabilized transitions between trim conditions. And those maneuvers are actually pretty cool. I'll show you.
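Before the examples, here's the trim library written down as a plain data structure in Python; the speed and turn-rate values are the ones just quoted from the paper, and the dictionary layout is purely illustrative.

from itertools import product

SPEEDS = [0.0, 1.25, 2.5, 5.0, 10.0]        # forward velocities, m/s
TURN_RATES = [-1.0, -0.5, 0.0, 0.5, 1.0]    # heading rates, rad/s

# 25 trim conditions: level flight, no sideslip; each is a fixed point
# in the helicopter's body-relative frame, so one feedback controller
# per trim covers every position in the workspace.
TRIM_LIBRARY = [{"v_forward": v, "psi_dot": r}
                for v, r in product(SPEEDS, TURN_RATES)]
assert len(TRIM_LIBRARY) == 25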
So some of them can be-- so, for instance, he learned a maneuver that was the transition from going forward this way to being in the opposite orientation going forward this way. And that maneuver-- I shouldn't say learned-- he solved using a model of the helicopter, right, with DIRCOL or something like that-- a trajectory that was actually a snap roll, I think. It kind of went up, and did this thing, and then came back down and went the other way. And I'll show you a video of that. I had lunch with Emilio and got his videos. OK, so how does he design the feedback controllers for the trims? He uses value iteration, right? So we're current and relevant, see? So because he's got actually a relatively low dimensional space by the time he exploits these feedback invariants and just worries about the coordinates that matter in these things, he's actually able to tile the space and run value iteration. And the advantage of that, of getting these trims, is that he not only gets the nominal controller, but he gets the cost to go from value iteration. For the maneuvers, he uses some sort of backstepping optimization, and he feedback stabilizes. And he uses as an estimate of the cost to go on the maneuvers just the duration of the time. Everything's a minimum time problem. So he uses the duration of the trajectory as the cost to go of these maneuvers. And now when you go back into the planning problem, the search goes like this. I pick a random point. I start now asking every-- each of the existing points in my graph-- let me give it an initial graph here, just to make it a little bit more interesting. I ask all of the points on my graph not if they're close in the Euclidean sense, but what's the cost to go, what's the value function of trying to go from here to here. Each of them will vote on the cost to go. And he takes-- he considers-- he tries to go in order of the minimum cost to the maximum cost. Once he decides to try-- this one has the lowest cost to go. He actually tries to expand that controller in the direction of here and checks whether it's actually feasible-- running this pi 1 with this x desired, whether it can feasibly get here, for instance, without-- if there was an obstacle right here, let's say, then it might be that the motion plan is not feasible to get from there to there. So he throws it out and takes the next lowest cost. And every time he expands a node, he'll pick the next lowest. He'll search in these, try all the policies until he finds one that's feasible and tries to grow to there. The result, actually, when he puts it all together-- you should read the papers, actually. This is a-- it's hard to do justice to such a complicated piece of work. He's really-- he thinks of everything, it feels like, in his thesis. The five minute presentation here doesn't do it justice. But he's got an algorithm that works fast enough that it works in real time even if the obstacles are moving. Because he's computed-- the big idea there is that you used value iteration to compute the cost to go for the non-obstacle case, right? You know how to control the helicopter when there's no obstacles. And you use that as your heuristic, your distance metric, for the case where you're planning with obstacles. And it works fast enough that he can be flying his helicopter around-- he's got simulations of like windows closing, and it goes a different way. Or the window's open, and he goes through this minimum time way.
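Putting those pieces together, here is a heavily simplified Python sketch of one expansion step of that planner. Names like cost_to_go, simulate_policy, and collision_free are placeholders standing in for Emilio's value-iteration tables, closed-loop simulator, and obstacle checker; none of this is a real API.

def expand(tree, sample_state, policies, cost_to_go, simulate_policy, collision_free):
    # One expansion step: sample a state, let every (node, policy) pair
    # bid with its optimal cost to go (not Euclidean distance), then try
    # candidates from cheapest to most expensive.
    x_rand = sample_state()
    candidates = sorted(
        ((cost_to_go(pi, node.x, x_rand), node, pi)
         for node in tree.nodes for pi in policies),
        key=lambda c: c[0])
    for cost, node, pi in candidates:
        # Simulate the stabilized closed-loop system toward the sample;
        # keep the branch only if the whole path avoids the obstacles.
        path = simulate_policy(pi, node.x, x_rand)
        if collision_free(path):
            tree.add(parent=node, policy=pi, x=path[-1])
            return True
    return False   # no feasible expansion this round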
And a lot of times they still accomplish finding a minimum time trajectory from start to the goal, even with dynamic obstacles. It's pretty good, pretty good stuff. Yeah? AUDIENCE: So when you decide to expand [INAUDIBLE] tree based on the value iteration result, and then you simulate it in [INAUDIBLE] came across an obstacle. So the values there, is that actually the Q function, or is it the V function? RUSS TEDRAKE: It's a value function, the V function-- J. AUDIENCE: OK, so if that actually happens to cross an obstacle, there might be another action getting [INAUDIBLE] from that space. And it wouldn't still have the lowest cost. But because the value iteration, you'd just say the main-- [INAUDIBLE] another state which has the lowest value. RUSS TEDRAKE: It's true. So he's careful to say that when you go to this trims and maneuver-- whenever you're using feedback controllers in here, one of the things you have to sacrifice, actually-- not only optimality, but completeness-- is that you can only now be complete if the solution lies in this class of feedback policies. So you're sort of-- he calls it pi complete or something like this. If there's a path from the start to the goal that is executed by these predesigned feedback-- parameterized feedback policies, then you'll find it. But you give up completeness. And potentially give up optimality. Now, I would guess that if it was-- there are cases where he can try multiple things and actually get around the immediate problem you said. But I'm sure-- I would guess also that there's cases where he does say I've just missed that solution. So I'm sure there's some things where you could try-- for instance, maybe pi 1 would have said it's feasible, but there's an obstacle. But then maybe if I ran pi 10 it would actually get me there. Let me see what he gave me. He gave-- so this is views from his small helicopter. AUDIENCE: Does-- this state space has a velocity inside, or [INAUDIBLE]?? RUSS TEDRAKE: The state space absolutely has velocities. It's very much about learning a dynamic controller. Some of these are invariants. The obvious invariants are in position and translation. So the velocities maybe are not one of the invariants-- maybe they're separately reasoned about. But yes, it's very much a [INAUDIBLE] dynamic planner. So here's the on board. This is sort of the success story. He's-- an on board video, and you'll see a few of the-- I think this is the longer version that we already had, because you'll see some flips and stuff in the middle. So this is, I think, the-- no, he called it a hammerhead. I don't know all my helicopter stunts. This is the maneuver which changed directions pretty quickly, going back and around. So they're doing pretty aggressive things here with, given the model, very strong convergence guarantees. OK, so especially in these vehicles-- they're doing it now with the forklift to do inserting into a pallet, and lifting up pallets, and sort of moving around boxes and everything. These real-time motion planning strategies based on clever implementations of the RRTs, they're real. They work. And value iteration is sort of real, and it works. I asked Emilio what he thought about the fact that value iteration doesn't solve the double integrator right. And he was actually very interested. But it's sort of-- it's good enough. For the purposes of this, it wasn't-- it needn't be a perfectly optimal controller, just a coarse discretization, solve it pretty fast.
That was good enough to give you a working controller with some stability guarantees to use in the rest of the planning algorithms. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: It's just dynamically replanning all the time. AUDIENCE: Did he have like all the information [INAUDIBLE]?? RUSS TEDRAKE: So I think that the strongest results with the moving obstacles, I think, were in a simulation environment where it was fully known. I don't know if-- I don't honestly know if they had moving obstacles in their physical demos. That'd be cool. OK, good, so evidence one that these things really work, yeah? I can't sort of talk about-- I can't do a whole class talking about learning control without mentioning Chris and Stefan who are really the guys that got a lot of people excited about these things. So let me show you some of their work. Chris apparently also had some helicopter stunts that he did while he was at MIT. And I haven't seen those videos. But I hear they were pretty good too. So helicopters seem to be a popular thing. So this is a case of learning sort of a cart-pole problem. This is a pole balancing task with a big arm, a Sarcos arm. The arm has actually been sort of mapped out with inverse kinematics and inverse dynamics control. So really, he's moving the sled. You can think of it almost equivalently as a 2D cart-- and a pole that's not attached. It's not a pin joint. There's vision coming off the side with-- that's why there's a bright pink ball and a bright yellow ball. So there's a vision system watching, doing the state estimation. And they did a lot of work. These were, really, some of the first people that were making real robots use these tools. They were using Q-learning, value learning, and they call it model-based reinforcement learning. Because they were trying to distinguish between two versions of the algorithm. One was do system identification on every trial. Between trials, solve your offline-- on that model, solve your reinforcement learning problem. And then put the best controller back on the robot versus doing the trials exactly on the robot. So this was their model-based case. And they argued heavily that in these robots it made a lot more sense to use a model, because even if the model was identified from very few trajectories, they could learn a good enough model of the robot that the task would be completed in just a few trials. OK, so eight real trials-- and that's the only data they had from the real robot. They had some initial guess at the parameters. But then the system identification really happened each time. They took the tennis ball off. AUDIENCE: Is the range of motion of the arm limited, or is it just losing control when the pole falls? RUSS TEDRAKE: I think it's losing control. I think they're trying to be safely in the middle of the workspace. OK, now the modeling, the system identification was only on the pole. The arm was well known, well characterized. So they just learned a handful of parameters for the pole. But they did it in a few trials. And then they showed that the policy could adjust that quickly. And they became strong advocates for model-based, doing it offline. Now, I would say that in some of the problems we've talked about in class, for instance, the fluid stuff that Jon told you about, that's a clear case-- I think, we think-- that learning the model is not the shortest path to getting success. But on a robot arm, maybe it is. They did that in a lot of different cool tasks. So they did, for instance, devil sticking.
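In pseudocode-style Python, the model-based loop they were advocating looks roughly like this; every function here is a placeholder for their system identification and offline policy optimization, not a real interface.

def model_based_rl(robot, n_trials, fit_model, optimize_policy, initial_policy):
    # "Model-based reinforcement learning" in the Atkeson/Schaal sense:
    # all the expensive trial-and-error happens on the fitted model,
    # and the robot only ever executes the current best policy.
    policy, data = initial_policy, []
    for trial in range(n_trials):
        data += robot.run_trial(policy)       # one real rollout
        model = fit_model(data)               # system ID between trials
        policy = optimize_policy(model)       # offline RL / optimal control
    return policy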
They had a bunch of different tasks that worked with sort of similar-- AUDIENCE: Is it using vision? RUSS TEDRAKE: The other ones were. I would guess, since there's no bright orange balls and there is a stick, that there's probably some sensor on the other side. But I don't know. I don't remember, actually. AUDIENCE: It's got strings [INAUDIBLE].. RUSS TEDRAKE: Yeah, I don't know how they're doing the sensor. AUDIENCE: I thought that was just to keep it from [INAUDIBLE]. RUSS TEDRAKE: From wiping out, yeah, from hitting the experimenter. OK, so this was really a cool time in robotics, right? So the people, they're coming in, and they're doing all-- making these robots do things they hadn't done. I'm sure cracking a whip or something was on the near horizon. But interestingly-- so I have to tell you that after a bunch of experiments here and pushing it and pushing it, I think it's fair to say that Chris and Stefan decided that reinforcement learning by itself was not the route to super advanced robots. And they switched to working more heavily on imitation learning, which was still done sort of in a reinforcement learning context. But they tried to speed up learning by getting trials from humans. So there's a couple of ways they did that. First of all, they'd try to learn the reward function from the human. Sometimes they'd try to learn the dynamic model from the human. Sometimes they would try to prime a value function from a human, from trials from a human. They'd play all these sort of-- at any place in the reinforcement learning framework where you could expect to try to use information from humans, these are the kind of games that people were playing and continue to play. The new helicopter results by Pieter Abbeel and Andrew Ng are really a success story for imitation learning, because they used human pilots to drive those initial trajectories. So this is the pole balancing with a human. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: They're just watching the dynamics of the pole, but with a controller that works. And they can use that to sort of-- for instance, you could just find the policy that-- [LAUGHTER] That's Chris. This is Auk, who was just working with Stefan. They were having DB play tennis, and do drums, all these things. [LAUGHTER] Now, this particular example-- the other one obviously was not-- but this particular example was more about trajectory learning. And they could execute the trajectory on a fully actuated robot. But these are definitely the success stories in imitation learning. AUDIENCE: He's no Terminator, for instance. [LAUGHTER] RUSS TEDRAKE: Oh, you should see the new one. [LAUGHTER] Yeah, it doesn't have teeth. But it could crush through walls. You know, it's this huge hydraulic-- you've seen it, Rick. It's scary looking, right? And the videos they have, they're like-- AUDIENCE: They have one where it's learning to kick box. [LAUGHTER] The researcher was a kick boxer [INAUDIBLE].. RUSS TEDRAKE: So that's the one you've got to watch out for. They're made by Sarcos, all these big hydraulic-- I mean they're powerful tethered-- serious power output robots. Yeah, so the same-- back to the non-imitation, but the learning. Jun Morimoto working with Kenji Doya and with Chris Atkeson did things like reinforcement learning to make this little robot stand up. It's kind of cute. I have to show it. [LAUGHTER] And then after a few trials it would successfully stand up, right? And these are really-- these are the Q-learning kind of algorithms I told you about.
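For reference, the tabular flavor of Q-learning being alluded to fits in a few lines; the environment interface and all the constants here are generic stand-ins, not anything from those papers.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Classic tabular Q-learning with an epsilon-greedy policy.
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)   # explore
            else:
                a = int(np.argmax(Q[s]))           # exploit
            s_next, reward, done = env.step(a)
            # One-step temporal-difference update toward the Bellman target.
            Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q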
So there's a lot of good work out there in that line of thinking. Interestingly, I wanted to show-- to find the videos. I couldn't find Jan's videos. But I told you Stefan and Chris had started going towards imitation learning and away from pure RL in some ways. But Stefan, I think, is back. I don't know about Chris. Chris is still on the fence maybe. But Stefan and Jan Peters are the ones that did this natural actor critic algorithm which I just wrote the reference on the board on Tuesday. And they-- now there's a new wave of videos of it batting-- from a few years ago, you know, it's playing tee ball or something. It's swinging a bat to try to hit a ball off. And they've got a-- Jan's got a WAM arm doing a ball and cup task like this, and maybe paddle ball or these kinds of tasks. And it's the next generation of these. But it's without the imitation learning. It's the natural actor critic. And they believe that these algorithms have improved enough that it's interesting again. OK, so that-- I mean, that's actually-- I told you there's [INAUDIBLE] that run back and forth between pegs to optimize their gaits. There's a handful of stories from robotics of these learning algorithms working well. There's a lot of success stories of the motion planning algorithms working well. But they're sort of still not quite mainstream robotics, I'd say, the learning stuff. If you really wanted to design a controller, we'd still-- there's a handful of people out there that are still working on it. But I obviously believe that's one of the key ingredients going forward. So let me take a minute and show you some of the stuff that we've been doing in a little bit more detail. What you guys saw yesterday still has my yesterday data, yeah? OK, so in our lab, we're-- I told you this in snippets throughout. But just to show you really how we're going at this today in the research, I really believe that fluid dynamics is a great domain where there's tons of good control problems to be solved. And most of the problems that are unsolved are unsolved because we don't have good models that are useful for designing controllers. So this is, I think, a particular domain where I think the machine learning algorithms and the model-free section of the course is really going to help out. You can do system identification, but you can also do reinforcement learning to take your best model-based controller and improve it in a fluid, for instance, setting where-- with just approximate models and model-free control synthesis. There's actually other places. So Elena in our group is now-- she says she wants to do plasmas or something. So there are plenty of other domains where the dynamics are still just ridiculously poorly understood. And you can imagine if you wanted to do control of some high energy something or something where the models aren't good, there's really not a lot of good control work in those domains. You could do some of the first stuff. So we've got two major fluid projects in the lab right now. One is the perching. I showed you the video before. I'll tell you the story now. And I'll tell you a little bit more about our robotic birds. So perching is a hard fluids problem. Because if you go to a high angle of attack with your airfoil, then the flow becomes very complicated. It becomes nonlinear and unsteady. And linear control and classical control work well in these sort of low angle-of-attack regimes on the left. And that's what fighter jets use today.
And when you go to the high angle of attack, the fighter jets aren't doing bad. We actually-- Rick was just showing me today he found the instruction manual for doing a Pugachev's Cobra, which is this ridiculous task they do in airshows. And step one is like turn off your flight control systems, turn off all the warnings or something like that, right? [LAUGHTER] It's like, wow, OK. But then it's just a pretty rote maneuver. You go through a handful of trajectory-- controls. And it does this sort of nose up, nose back down. So it's hard to say that that's a high performance control system just yet. It's just sort of an open loop thing the pilots do in air shows. Birds all the time are doing beautifully complex dynamic maneuvers. My favorite now is when they land on a perch, because they go up to incredibly high angle of attack always in order to actually generate that complicated flow behind the wing to generate more drag. So when they get a-- when you get separation on the back of your wing, you get a pocket of low pressure. And you get pressure drag. So they sort of go up, put their wings out like this. It's like hitting the air brakes, and they stop dramatically faster. So there's a handful of projects out there doing this. This is the Cornell morphing plane project. It's-- the idea in there is that you actually morph your body up to try to get a lot of drag. But you keep your wings morphed back down so that they have attached flow, and you can do your linear control on that. And then they move the tail out of the way so that the airflow on the tail is again attached. And they can do standard linear control. And Jon How does these prop hang perching trajectories, where he uses a lot of thrust to stabilize his plane to go over and sort of do a helicopter kind of landing on the perch. But these are sort of not the approaches that we would advocate in the class. This is-- my take on that is that a lot of people are trying to perch by making planes that work with old school control. And I'd like to advocate using our newer tools to do nonlinear control on more exciting vehicles, let's say. There's a couple of military vehicles that do these kind of things too. So Woody has been-- and the group have been thinking about how to make a fair comparison between the performance of a bird and a plane. And they point out that the important variables here are the mass, the wing area, the density of the fluid. If you're willing to sort of characterize all these terms, then you can come up with a dimensionless quantity which scores how effectively a plane would land on a-- how quickly a plane, for instance, would decelerate or a bird would decelerate relative to what you'd expect if you were a flat plate at peak drag. That's a fair way to sort of back out the stopping abilities of a small bird versus a big plane or something like this. So it gives you a dimensionless number. Woody's not quite happy with us calling it the perching number. But I still call it the perching number. It means you're impressive if-- if you stop well, it's impressive to be heavy, operate in a low density fluid, have a small wing area, or stop in a short distance. And now these numbers, you have to take with a grain of salt, because it's hard to get these numbers out of the literature.
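One hedged way to make that dimensionless score concrete (this is a reconstruction assuming pure quadratic drag on a flat plate; the lecture only sketches the definition verbally, so treat it as illustrative rather than Woody's exact formula): for a vehicle of mass m decelerating under drag alone,

\[ m v \frac{dv}{dx} = -\tfrac{1}{2}\,\rho\, S\, C_D\, v^2 \quad\Longrightarrow\quad \ln\frac{v_0}{v_f} = \frac{\rho\, S\, C_D}{2m}\,\Delta x, \]

so the drag coefficient you actually achieved while stopping over a distance Δx is

\[ C_D^{\mathrm{achieved}} = \frac{2m}{\rho\, S\, \Delta x}\,\ln\frac{v_0}{v_f}. \]

Normalizing that achieved drag coefficient by the peak flat-plate value gives a dimensionless score with exactly the dependence being described: an m over rho S x factor, times a dimensionless function of the initial and final speeds, which rewards being heavy, flying in a thin fluid, having small wings, and stopping in a short distance.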
But our first crack at trying to take these numbers from papers and things like that-- a Boeing 747 not trying to stop quickly, this is trying to get you safely to your destination, is-- if you look at it as it's decelerating, just before it touches down, it gets a number of 0.08. So it's roughly using-- it's a very small fraction of its available drag. Because it's taking a conservative approach. Now the X31 was trying to stop fast. It's getting a perching number, again, very roughly estimated, of about 0.15 with sort of a 24 degree angle of attack landing. The Cornell perching plane's somewhere around 0.125. That number we sort of believe is accurate, because we know the guys. But they pointed out that they were more worried about, in their perching trajectory, not hitting the ground on the way to the building. So they optimized for something other than perching number, of course, too. AUDIENCE: [INAUDIBLE] from biology? RUSS TEDRAKE: What's that? Good. So the birds are more like this, yeah? Now, that's using a lot of thrust, which we can penalize if we know how to use metabolic-- measure their metabolic cost. And we're trying to get numbers of it without thrust. It's actually hard to get a bird to land without flapping its wings, right? [LAUGHTER] But we're actually working with biologists at the field station hopefully to get some real numbers like that. So what we'd love to have is sort of a plot of a length scale or something over perching number and show that it's invariant. You know, the small, small things that fly like planes get a bad perching number. And big things that fly like birds get a small-- or a high perching number and things like that. And we're working on that. AUDIENCE: [INAUDIBLE] birds that naturally land without flapping. RUSS TEDRAKE: Are there any birds? So the case, I think, where they land without flapping is when they wind hover. So it's not so much about the bird as about if there's enough wind over the perch, then they'll sometimes hover. And then we could try to find that kind of data. And there are people that do-- so Rick had a picture of a parrot with-- a parakeet with an oxygen mask on trying to do energy metabolics on a bird. And, you know, everything's possible, but-- AUDIENCE: There is an S in the denominator. That basically means that bigger planes [INAUDIBLE]?? RUSS TEDRAKE: Yes, so it's impressive if you stop quickly with a small wing. Right? AUDIENCE: But there is a v0 and a vf as well. RUSS TEDRAKE: It's all factored out in order to be sort of-- so the initial version, actually, was m over rho S x, but that doesn't handle the case where you don't stop, go to zero speed. So you actually need this extra factor, which is also dimensionless in itself, to handle the case where you go between some relative speeds. I think it's-- yep? AUDIENCE: Distance, to me, [INAUDIBLE] usually things that we make are really bigger than birds [INAUDIBLE]. RUSS TEDRAKE: That's why we need to have a plot. And we found that we have a-- in our very sparse data collection, we have some inclination that there are some small planes that have a worse perching number than some big birds and that it's actually-- we have successfully taken out the dimensionality. But this is work that we're still trying to finish. But just for fun, if you wanted your Boeing 747 to get a perching number of 11.7, it means you'd have to go from 450 mile an hour cruise speed to zero in 20 meters. [LAUGHTER] Right, so that's pretty good, right? OK, so why is it a good control problem?
So this is all the reasons that we've used in class. But in these domains, the fluid dynamics I think are just prohibitively complicated. They're time varying and nonlinear, and CFD simulations are often very slow. We're in a very-- we're around 10 to the 5 Reynolds number, where CFD craps out. And there's just not a lot of good design accessible models. Design accessible means something I could run any of my control synthesis tools on. AUDIENCE: Can the wings of most planes handle the kind of drag that would be-- RUSS TEDRAKE: So UAVs, I think, can. But I think, yeah, I mean, no question that if we took-- even a fighter jet, probably, would rip its wings off. But a 747 could not physically stop in 20 meters without ripping its wings off, right? So I'm not advocating that we ride in planes with perching numbers like this. But UAVs, I think, could do this-- small UAVs. OK, so even if we did have a perfect model, it's still a hard problem for all the reasons we've talked about in class. We have limited control authority. We have partial observability. We didn't spend a long time on that. But if the dynamics of-- the state space, the state of the fluid matters when the things are unsteady and nonlinear. And we don't have sensors for that. The control forces take time to develop. So it's under-actuated in that sense. There's intermittent control authority from the stalling on the wings. The control surfaces saturate. There's a lot of good reasons why it's still a hard control problem. You have to use the kind of tools we work with. So we did this first by doing all these model-based approaches. We actually-- when we started the project, I didn't expect to have a good model of the glider. We thought this was immediately a case where model-free was going to be the solution. But Rick's initial data collection started coming out with beautifully clean data. And we tried to learn an aerodynamic model and started doing model-based control. So in this case, it's actually maybe what Chris and Stefan would have called model-based reinforcement learning, where you learn a model and then run the best policy. It's in a motion capture environment, these planes. We-- Rick's built a lot of foam planes. Woody now has two. So we've got a small foam plane. It's got markers on here so that the cameras offboard can sense the position, orientation of the plane relative to the perch at about 119 Hertz. The only thing onboard is that servo motor you can just barely see in the middle, which is actuating the elevator in the tail, and a battery and a receiver. That's it. Everything else is offboard control. You see that the wings are bent up with some dihedral. That gives it passive roll stability, just the center of pressure is above the center of mass. And it's got a big tail. So it's got sort of yaw stability. It's basically a dart. It would never do anything interesting out of the plane, which is the way they designed it. But it's got perfectly interesting and rich longitudinal dynamics. And that allows us to simplify the model to be sort of a planar model, a rigid body model like we've been using in class. In fact, in my notes when I get back to posting the most recent chapters, you'll see our simple rigid body model of the plane. So Rick shot the plane like 240 times into the motion capture environment with-- it basically flew into the middle of the room, then we kicked its elevator up, and it fell down, or we kicked its elevator to a random place, it fell down, and we collected a lot of data.
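For reference, the textbook flat-plate, quasi-steady model that this kind of data gets compared to is compact enough to sketch in Python; this is the standard textbook form with alpha the angle of attack, and the density and wing-area numbers are placeholders rather than the lab's identified values.

import numpy as np

def flat_plate_coefficients(alpha):
    # Quasi-steady flat-plate model, valid over the full range of
    # angle of attack (which is what the motion-capture data covers):
    # the normal-force coefficient 2*sin(alpha) resolved into lift and drag.
    C_L = 2.0 * np.sin(alpha) * np.cos(alpha)   # lift coefficient
    C_D = 2.0 * np.sin(alpha) ** 2              # drag coefficient
    return C_L, C_D

def aero_forces(alpha, speed, rho=1.2, S=0.1):
    # rho (air density, kg/m^3) and S (wing area, m^2) are illustrative.
    q = 0.5 * rho * speed ** 2 * S              # dynamic pressure times area
    C_L, C_D = flat_plate_coefficients(alpha)
    return q * C_L, q * C_D                     # lift and drag, in newtons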
We built a careful rigid body vehicle model, a careful model of the actuator. We're making it better and better. Woody's got the next wave of system identification now. And you plot that data over angle of attack, the lift and drag coefficients, which is the standard way to do system identification in aircraft. And the first thing you'll notice here is that it's data over a very large range of angles of attack, which is kind of cool, because you don't get that in real flight data from real planes. Because pilots don't do that. It's the luxury of working with foam planes and motion capture. And the other cool thing is, at least on our initial plane designs, we've actually broken it-- we built a plane we thought was going to be more ideal, and we now have worse models. But at least in these initial planes, we seem to have gotten a very, very good match to some of the textbook flat plate models of quasi-steady flow. Now, the drag coefficient looks a little bit messier. There's a-- it looks like noise there. But actually, we think that's not noise. We think if you watch a time trajectory of this, it's actually a periodic oscillation, which looks like the vortex shedding of the-- it has the same characteristic frequency as the vortices we see in the pictures I'll show you in a minute. OK, so we have a model where the aerodynamics are fit from data. The state space is 7, roughly, sometimes 8, depending on how we model the actuator. But that's just sort of-- I told you when we talked about value iteration, I told you, ah, you could do it in 6 dimensions. If you're crazy, you could do it in 7. So Rick's crazy, right? And he did value iteration on it first, which I think is the right thing to start with. We defined a cost function that said get to the perch as close as you possibly can, and designed a nonlinear feedback policy over the state space, which required discretizing the entire big state space very coarsely and then doing some optimized dynamic programming on the cluster and all these things. And he came back with a policy which, when we played it out, could nicely do these post-stall maneuvers, and we'd say one out of five times it would catch the perch. Now those specifications at the top there-- it's entering motion capture at 6 meters per second. The perch is 3 1/2 meters away. Those are designed to try to be a trajectory where you could only acquire the perch if you're willing to go post stall. You could only slow down fast enough if you're post stall. So these are the pictures. We actually now have a wind tunnel downstairs. Rick, can I show everybody your wind tunnel, Rick? This is our wind tunnel downstairs. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Box fans-- so you have to-- so my life changed when I met Jun Zhang, who works at NYU. He's one of the best fluid dynamicists you'll ever meet, an experimental fluid dynamicist. And you go to his lab. And he has box fans, and like he takes pictures with his digital camera. And he's got, like-- it's the most humble lab I've ever seen. And he ends up with fluid pictures that are on the cover of Science and you name it. There's a real art form to doing things, I don't know, minimally, on the cheap, something like that. But we were very much trying to do these sort of-- we're trying to channel Jun a little bit with box fans and you name it. But we put the-- in that sort of chamber, we emit smoke. This is with still air, before we put the box fans on. You emit smoke from the leading edge of the plane.
And you can watch the vortices pull off during a perching maneuver. And we count those vortices. And they're in pretty close alignment with what we saw in the [INAUDIBLE] data. This is what-- this is a simpler picture with a better fluid. This is actually moving the plane by hand. But it had the vortices with a lot less smoke. And we were trying to-- tried to capture-- we're trying to get quality like this in pictures like this now. AUDIENCE: Could you just like burn something at the front of the wing? RUSS TEDRAKE: It's titanium tetrachloride. It reacts with the vapor in the air and just-- yeah, and you don't want to breathe it too much, right? Rick says-- and you don't want to leave it open next to your wrench, right? It corroded the wrench like in an hour or something. It was pretty scary. [LAUGHTER] OK, so how are we doing so far in this dimensionless analysis? We've got these planes doing this much. And our glider is getting about a 0.55 in our current best estimates. Again, the pigeon's still whomping us. So we're working on it. So nowadays we're actually trying to do LQR trees on it. So the model-based approach we're using is this, is exactly what I presented on the LQR trees. In fact, we started thinking about LQR trees because we were trying to figure out how to do a better-- the value iteration felt like it was wasting all of its resolution in parts of the state space that were completely irrelevant. And the LQR trees were a way to design trajectories exactly in the relevant parts of state space and nowhere else. So that's where that idea came from. It's not working yet. We've got a-- our model-based LQR stabilization, Woody's best thing, is hitting the perch about half the time now. But still, there's sort of this-- the model-based-- when the model's a little bit off the real system, we're still trying to figure out how to handle the differences between the model and the real system. AUDIENCE: [INAUDIBLE] perching a real perch [INAUDIBLE]? RUSS TEDRAKE: In the simulator, we can hit it from some small initial conditions. It depends on the model. In Rick's flat plate model, it hits it from a beautiful number of initial conditions. In sort of the more careful system identification, the problem is harder, because the second derivatives are larger, it seems. So we're hitting it-- I mean, it still hits it from the nominal trajectory and hits it from a smaller range of initial conditions. But what we can't do, for instance, is take a simulated trajectory and put it into our model, which is not quite a feasible trajectory of the model. It's close, but we can't stabilize that. We've got a big dynamical system with lots of interesting dynamics. We've got one actuator. And it's just-- it's hard to make it do things. So we're still trying to figure out if LQR trees are the best way for it. We'll see. And Phillip's trying to push LQR trees on the compass [INAUDIBLE]. And Michael's trying to do it on the dog. OK, so now we're trying to do it with flapping wings too. So Rick's thesis is going to be perching with flapping wings. This is his simulation, which I think looks like a dragon. [LAUGHTER] So were those actually the initial parameters of our bird, or did you pick them arbitrarily? AUDIENCE: They're actually closer to what the glider would do if it was flapping. RUSS TEDRAKE: Really? Yeah. So-- AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: The time-- oh, because it's a whole second. AUDIENCE: A 1-second trajectory played over [INAUDIBLE]. AUDIENCE: Oh, so it's flapping [INAUDIBLE].
RUSS TEDRAKE: Yeah. AUDIENCE: It has the same flapping frequency as the bird. RUSS TEDRAKE: As the ornithopter, yeah. So this previous plot that I went past quickly is-- basically says that if you look at the-- if you do an LQR stabilization of a perching trajectory, given our original model, and you look at the cost-to-go matrix, and you look at the single-- I'm going to represent that cost-to-go matrix as the single costliest direction of the s matrix going backwards, yeah? That's a quantifier of how much error you should expect to get if you were to get a gust of wind, let's say, in the worst possible direction during your perch. And it's a function of time, because the s evolves backwards. So if you get a gust in the worst possible direction at your initial condition and the model's stabilized, it can reject that nicely. But in the last tenths of a second, a gust of wind is going to be very hard to stabilize, because you have limited control authority, particularly because your airspeed is very small. And therefore your aerodynamic control authority is very small. And even if you put in thrust or thrust vectoring, it's still a big problem. If your airspeed goes to 0, your control authority goes to 0. And that's one of the reasons we think the flapping is such a good idea. That's one of the reasons we think that the birds always flap when they're landing, because they're trying to maintain air speed on the control surfaces. It's actually also a very good way to decelerate. So this is the big bird upstairs in the lab. You're welcome to come see it any time. Zach, who you remember coming in with some robots at the beginning, he's the one who built it. He's an amazing designer. He's also a welder. This is a titanium welded gearbox in the front. It's a 300 watt outrunner motor. He's designed in all the failure points of the bird. So when it inevitably crashes, it's got a breakaway beak. The wing spars have breakaway parts. So you bring a bucket of parts out. And this thing is-- it flew for the first time last summer. That's your TA throwing it for you. And it worked. And it worked. And then, oh my God, it's going to hit the building! So we turned it off, right? But that was our first successful flight with an autonomous ornithopter that was stabilized just in pitch. All right, so I'll tell you quickly just why we care about birds here. And that'll be a fun note to end on. So basically, we care-- I care about birds not so much about flapping, but just about doing all the great things that birds can do, because I think birds do fantastic things. But you have to be a little careful saying that. Because if you look at speed or efficiency sort of in still air steady level flight, then a propeller is very, very efficient. An airplane wing is very highly optimized. So you have to be very careful trying to say that birds are going to be more efficient or something than a plane, unless you go to the more interesting flight regimes. So birds are actually incredibly efficient-- this is in the notes. An albatross can fly for hours or days without flapping its wings just by-- even upwind-- just by exploiting gradients in the shear layer, right? So when the wind is blowing, then there are a lot more beautiful things you can do that a plane isn't doing. So the cost of transport of this guy, the dimensionless quantity, it's 0.1, which is about the same as a 747 flying across the Atlantic, which is cool.
But what's cooler is the fact that the cost of transport for this thing seems to be the same when it's sitting on the beach versus flying across the ocean. So it's basically flying across the ocean with almost no control energy. It just locks its wings in, uses the wind power, and off it goes. Butterflies go thousands of kilometers carried by the wind. Falcons can dive really fast. The speed isn't the impressive thing, but the fact that they can do super agile maneuvers during these incredible dives, like catching a sparrow out of the air while it's diving, is really impressive. Bats can catch prey on their wings. They've been studying down at Brown, Kenny Breuer's lab-- they talk about how these bats can fly through thick rainforest with 1,000 other bats on their back, through caves with stalactites and stalagmites. And they're doing all these crazy things. They've got a video of this bat that can go-- he's going full speed this way. And in five flaps, or about-- actually, he said that he does most of the turn in two flaps. And they say in five it's completely done. In basically two flaps, he's able to go full speed again the other way. And the whole maneuver takes just under half a wingspan-- full speed this way, full speed the other way in half a wingspan, wow. And the story goes, and our mission here in my robot locomotion group upstairs, is to try to make machines that can do this kind of stuff-- birds that are far surpassing the performance of our best engineered systems in a lot of the performance and efficiency, acceleration, maneuverability, but not in steady level flight in still air, but in the interesting fluid cases. And the trick is that they're exploiting unsteady aerodynamics. They're doing control in places where we're not doing good control yet. And it's a lot like, for the robotics community, I like to try to tell them it's a manipulation problem. If you're manipulating a piece of chalk or something, that's relatively easy. You can see the chalk. You know what your fingers are supposed to do. This is manipulating vortices that you can't see. You don't know what's coming. It's a much nicer manipulation, maybe the ultimate manipulation problem. And I like to say that once you start thinking of it as manipulating vortices and doing these sort of clever things, then suddenly you can see why fixed wings aren't as exciting anymore. It's almost like trying to pick up a coffee cup with flippers on or something like this, right? So I don't actually care about flapping per se. And sometimes it matters. But I care about having a really delicate interaction with the fluid. And we're trying to figure out how to do that. So here's the-- did I show this in the beginning? Yeah, good, so you've seen the dead fish. So that's another example of the efficient swimming. And it's the story of our pursuit of these controllers. Awesome, OK, so I'm done. Now it's your turn. And we'll be-- let me know if you have any more questions about the projects. We're going to pass out evaluations next week too through-- we have an online system in EECS for evaluations. I hope you give us all the feedback. And I hope you enjoyed everything. And I'll see you next week. |
MIT_6832_Underactuated_Robotics_Spring_2009 | Lecture_19_MIT_6832_Underactuated_Robotics_Spring_2009.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: OK, so every once in a while, I stop and try to do a little bit of reflection, since we've-- have so many methods flying through this semester that I want to just-- once again, let's say where we've been, where we're going, what we have, what we can do, what we can't do, and why we're going to do something different today-- so just a little reflection here. We've been talking, obviously, about optimal control. There's two major approaches to optimal control that we've focused on-- well, three, I guess. In some cases, we've done analytical optimal control. And I think, by now, you appreciate that, although it often only works in special cases-- linear quadratic regulators and things like that-- the lessons we learned from that help us design better algorithms. And things like LQR can fit right into more complicated nonlinear algorithms to make them click, so I think, absolutely, it's essential to understand the things we can do analytically in optimal control-- even though they crap out pretty early in the scale of complexity that we care about. So mostly, that's good for linear systems, and even restricted there, linear systems with quadratic costs and things like that. And then we talked about major direction number two, the dynamic programming and value iteration approach. And the big idea there was that, because we've written our cost functions over time to be additive, the big idea really was that we're going to learn-- we're going to figure out the cost-to-go function-- the value function, via value iteration-- and from there, we can just extract that. That captures all of the long-term reasoning we have to do about the system. From that, we can extract the optimal control decisions. And it's actually very efficient. I hope, by now, you agree with me that it's very efficient, because if you think about it, it's solving for an entire-- solving for the optimal policy for every possible initial condition in times that are comparable to what we're doing for single initial conditions in the open-loop cases. But it only works in low dimensions, and it has some discretization issues. And then the third major approach-- I called it policy search. And we focused mostly, in the policy search, on open-loop trajectory optimization. But I tried to make the point early-- and I'm going to make the point again in a lecture or two-- that it's really not restricted to thinking about open-loop trajectories. So when I first said policy search, I said we could be looking for parameters of a feedback controller of a-- the linear gain matrix of a linear feedback. We could do a lot of things, but we quickly started being specific in our algorithms in trying to optimize some open-loop tape with direct collocation, with shooting. But the ideas really are more general than that, and I'm going to-- we're going to have a lecture soon about how to do these kinds of things with function approximation, and do more general feedback controllers. These worked in higher dimensional systems, had local minima-- all the problems you know by now. OK, good-- so for our model systems, we got pretty far with that.
In the cases where we knew the model, we assumed that the model was deterministic and sensing was clean-- everything like that. We could make our simulations do pretty much what we wanted with that bag of tricks. Then I threw in the stochastic optimal control case. We said, what happens if the models aren't deterministic? Analytical optimal control-- I didn't really talk about it, but there are still some cases where you can do analytical optimal control. The linear quadratic Gaussian systems are the clear example of that. We said that value iteration for this-- although I was quickly challenged on it, I said, basically, it was no harder to do value iteration for stochastic optimization, where now our goal is to minimize some expected value of a long-term cost. Value iteration, we basically said, is almost no harder to do in the case with transition probabilities flying around. And in fact, the barycentric grids that we used in the value iteration way back there, I told you, actually have a cleaner interpretation as taking a continuous-- you can think of it as being a continuous deterministic system, and converting it into a discrete state stochastic system. Remember, the interpolation that you do in the barycentric actually takes exactly the same form as some transition probabilities when you're going from-- you've got some grid, and you want to know where you're going to go from simulating forward from this with some action for some dt; you can approximate that as being some fraction here, some fraction here, some fraction here. And it turns out to be exactly equivalent to saying that there's some probability I get here, some probability I get here, some probability I get here, and the like. So value iteration really, in that sense, can solve stochastic problems nicely. The other major approach, the policy search, can still work for stochastic problems. In some cases, you can compute the gradient of the expected value with respect to your parameters analytically, with a [INAUDIBLE] update. In other cases, you can do sampling based, Monte Carlo based estimates, and I'm going to get more into that. We're going to talk more about that, but the takeaway message is, when things get stochastic, both of these methods still work. They work in slightly different ways, but you can make both of those work. And then, last week, John threw a major wrench into things. Sorry. That wasn't supposed to be a statement about John. John made your life better by telling you that some of these statements-- some of these algorithms work even if you don't know the model. And he talked about doing policy search without a model. And the big idea there was that actually fairly simple looking algorithms, which just perturb the-- which try different parameters-- run a trial, try different parameters-- simple sampling algorithms can estimate the same thing we would do with our policy gradient, the gradient of the expected reward with respect to the parameters. Even the simplest thing is, let's change my parameters a little bit, see what happened. That gives me a sample from this gradient. And if I do it enough times, I pull enough samples, and I can-- I get a sample of the expected returns.
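(A minimal sketch, added for the reader, of the kind of parameter-perturbation gradient estimator being described here. The function `rollout_cost` standing in for one trial on the robot is a hypothetical placeholder, and this particular estimator form is one standard choice, not necessarily the exact one from John's lecture:)

```python
import numpy as np

def perturbation_gradient(rollout_cost, theta, sigma=0.01, num_trials=100):
    """Estimate the policy gradient by twisting the knobs a little and
    correlating the change in cost with the perturbation.
    rollout_cost(theta) runs one (possibly noisy) trial, returns a scalar cost."""
    baseline = rollout_cost(theta)          # cost at the nominal parameters
    grad = np.zeros_like(theta)
    for _ in range(num_trials):
        dtheta = sigma * np.random.randn(*theta.shape)   # small random knob twist
        dcost = rollout_cost(theta + dtheta) - baseline  # did it do better or worse?
        grad += dcost * dtheta / sigma**2                # correlate change with twist
    return grad / num_trials

# then descend: theta -= learning_rate * perturbation_gradient(rollout_cost, theta)
```

The expected value of `dcost * dtheta / sigma**2` approaches the true gradient as `sigma` shrinks, which is why enough noisy trials let you walk down the gradient without ever writing down a model.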
If you think back, that's why we tried to stick in stochastic optimal control before we got to that, because John also told you the nice interpretation of these algorithms in the stochastic-- that, even if the plant that you're measuring from is stochastic, or if the sensors are-- there's noise, then actually, still, these same sampling algorithms can estimate these gradients for you nicely. I think John also made the point-- and I want to make it again-- you would never use these algorithms if you had a model. They're beautiful, but probably, if you have a model-- maybe, if you have a model and you're a very patient person, but very lazy, then you might try to use this, because you can type it in in a few minutes, but it's going to take a lot longer to run. And the reason for that is it's going to require many [INAUDIBLE] requires many more simulations. In fact, it just requires many simulations to estimate a single gradient-- policy gradient. Now, the next thing I'm going to say is a little more controversial, but most people would say that the limiting case, the best you could possibly do with these reinforce-type algorithms-- are you raising your hand or just-- AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: No-- sorry. The best thing you can do with these reinforce algorithms, if you really-- the best performance you could expect is a shooting algorithm. And it's really, I should say, a first-order shooting method. It's really just doing gradient descent by doing trials. And when we talked about shooting methods, I actually said never do first-order shooting methods. I made a big point. I said never do this, never do this-- because if you go to the second-order methods, things converge faster. You don't have to pick learning rates. You can handle constraints. So there are people that do a bit more second-order policy gradient algorithms, but that's not the standard yet. So you should really think of those as cool systems that, if you-- cool algorithms that, if you don't have a model, you can almost do a shooting method. Why do I say that's a controversial statement? Could you imagine somebody standing up and saying, this is actually better than doing gradient descent? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah. So the one advantage is that it's doing stochastic gradient descent. And there are people out there that really believe stochastic gradient descent can outperform even higher order methods in certain cases, just because of their ability to, by virtue of being random-- this is not some magical property we've endowed. This is [? because ?] the algorithm is a little crazy. It bounces out of local minima. So for that reason, it does have all the strong optimization claims that a stochastic gradient descent algorithm has. There's another point to make, though, and I think John made this too. The performance of this-- and John's written a nice paper on this-- the performance you'd expect, meaning the number of trials it would take to learn to optimize your cost function-- the performance of these reinforce-type algorithms-- it degrades with the number of parameters you're trying to tune. So remember, the fundamental idea was-- and the way I like to think of it is, imagine you're at a mixer station in a sound recording studio, and you're looking through the glass, and you've got a robot over there. You've got all your knobs set in some place, and your robot does its behavior, and then you give it a score. You turn your knobs a little bit, you see how the robot acts.
You turn it off a little bit more. And your job is to just twist these knobs in a way that finds the way down the gradient, and gets your robot doing what you want to do. That maybe is a demystifying way to think about this whole thing, which is mathematically beautiful, but really, it's just turning knobs. If you have a model and you can compute the gradient, then you don't have to guess the way you turn knobs. You should always use that model to turn the knobs in the right direction. And also, if you think about that analogy, the number of-- the length of time it's going to take you to optimize your function is going to depend on how many knobs you have to turn. If I have 100 knobs in front of me and I change them all a little bit-- I see how my robot acted-- then it's going to be hard for me to figure out exactly which knob to assign credit to. The fewer knobs I have to change, the faster I can estimate which knobs were important, and climb down a gradient. I still say, when you have a model, you should always use it, because you can estimate the gradients. You can turn the knobs in the right way. But in the case where you don't have a model, it's actually-- they're very nice classes of algorithms. This knob tuning thing sounds ridiculous. Maybe, if I have even an [INAUDIBLE], if you have a good model of the [INAUDIBLE], then maybe you shouldn't-- you should definitely be using it. But if you have a very complicated system, and the performance only depends on the number of parameters, then it-- I just want to make the point that it's-- they're actually pretty powerful for some control problems. And the ones that we're working on in my group are fluid dynamics control problems, but specifically if you have problems where you can get away with a small number of parameters, but you have very complicated, unknown dynamics. And actually, those algorithms really-- make a lot of sense to me. So the performance of these randomized policy search algorithms-- it goes with the number of parameters you're trying to tune. I could be sitting in this mixing station, and I could be twiddling four parameters and having a simple pendulum do its thing, or I could be sitting in there turning these four knobs and having a Navier-Stokes simulation, with some very complicated fluid doing something, and the amount of time it takes me to twiddle those parameters is the same. One of the strongest properties of these algorithms is that, by virtue of ignoring the model, they're actually insensitive to the model complexity. So in my group, we're really trying to push-- in some problems where the dynamics are unknown and very complicated, and a lot of the community is trying to build better models of this, we're trying to say, well, maybe before you have perfect models, we can do some of these model-free search algorithms to build good controllers without perfect models. Are people OK with that array of techniques? Yeah? You have a good arsenal of tools? Can you see the obvious place where I'm trying to go next, now that I've set it up like this? We did value methods and policy search methods for the simple case, then we did value methods and policy search methods for the stochastic case, then we did policy methods for the model-free case. So how about we do model-free value methods today? But I know it's a complicated web of algorithms, so I want to make sure that I stop and say that kind of stuff every once in a while. So what's the difference between a policy method and a value method?
So value iteration-- like I said, it's very, very efficient. The way we represented value iteration with a grid, and having to solve every possible state at every possible time, is the extreme form of value-- of the value methods. In general, we can try to build approximate value methods-- estimates of our value function that don't require the big discretization. So actually, last week, at the meeting-- one of the meetings I was at, I met Gerry Tesauro. And Gerry Tesauro is the guy who did TD-Gammon. Anybody heard of TD-Gammon? Yeah? [INAUDIBLE] knows TD-Gammon. I don't know what year it was. It was 20 years ago now. One of the big success stories for reinforcement learning was that they built a game player based on reinforcement learning that could play backgammon with the experts and beat the experts at backgammon. Now, backgammon's actually not a trivial game. It's got a huge state space-- huge state space. I don't play backgammon, but I know there's a lot of bits going around there. It's stochastic, because you roll a die every once in a while. So it's actually not some complicated-- not some simple game. In some ways, it's surprising that it was solved before checkers and these others. Maybe it's just because not enough people play backgammon, so you can beat the experts easier. I don't know. But we were playing competition style-- beat the best humans at backgammon-- well before checkers and chess, because of a value-based-- model-free, value-based method for backgammon. So Gerry Tesauro actually used neural networks, and he learned, from watching the game, a value function for the game. What does that mean? So what do you do when you play backgammon-- or whatever game you play? I'm not trying to dump on that game. I just haven't played it myself. So if you look at a go board or a chess board, you don't think about every single state that's possibly in there, but you're able to quickly look at the board and get a sense of if you're winning or losing. If you were to make this move, my life should get better. And there are serious people that think that the natural representation for very complicated physical control processes or very complicated game playing scenarios is to not learn the policy directly, but to just learn a sense of what's good and what's bad directly-- learn a value function directly. And then we [INAUDIBLE] from value iteration. That captures all that's hard about-- that captures the entire long-term look ahead in the optimal control problem. Once I have a value function, if I have a value function I believe, if I want to make an action, all I have to do is think about, well, if I made this action, my value would get better by this much. If I made this action, my value would get better by this much. And I just pick the action that maximizes my expected value. Now, the good thing about value-based methods is that they tend to be very efficient. You can simultaneously think about lots of different states at a time. Just like value iteration, it's very efficient to learn value methods. And historically, in the reinforcement learning world, nobody ever really did policy search methods until the early '90s. There were at least 15 years where people were doing cool things with robots, and game playing, and things like that, where almost everybody, every paper, was talking about, how do you learn a value function? How do you learn a value function if you have to put in a function approximator? Or how do you do a value function if this, if this?
So really, even though I did it second, this was actually the core of reinforcement learning for a long time. How do you learn a value function? How do you estimate the cost-to-go-- ideally, the optimal cost-to-go-- given trial and error experience with the robot? So that's today's problem. Good-- so we can make it easier by thinking about a sub-problem first. And that's really policy evaluation, which is the problem of, given I have-- I have my dynamics, of course, and some policy pi, I want to estimate or compute J of pi, the long-term, potentially expected, reward of executing that feedback policy on that robot, potentially from all states at all times. So this is maybe equivalent to what I just said about chess. So my value function for chess might look different from that of somebody who knows how to play chess. I look at the board, and most of the time, I'm losing, and my actions are going to be chosen differently, because I wouldn't even know what to do if my rook ended up over there. And the optimal value function, given I was acting optimally, might look very different. But for me, the first problem is just estimate what's my cost-- the cost of executing my current game playing strategy, my current control-- feedback controller on this robot, or this game? Now, there is something culturally different about the reinforcement learning value-based communities, and I'm going to go ahead and make that switch now. Most of the time, these things are infinite horizon discounted problems. I'll say it's discrete time just to keep it clean, because then it's easy to write [INAUDIBLE] equals t to infinity here, gamma to the-- let me just do it like this. Let's assume that it's completely feedback. That'll just keep me writing fewer symbols for the rest of the lecture here. 0 to infinity, gamma to the n, xn, and then pi of xn, where my action is always pulled directly from pi-- I mentioned it once before, but why do people do discounted things? Lots of reasons why people do discounted things-- first of all, if you have infinite horizon rewards, there's just a practical issue. If you're not careful, infinite horizon rewards will blow up on you. So if you put some sort of decaying factor gamma-- typically, it's constrained to be less than 1 just so you don't have to worry about things blowing up in the long term. But you can make it 1, and then you just have to be more careful that you get to a fixed point [INAUDIBLE] cost, or whatever it is. Let's just put some decaying cost on future experiences. Philosophically, some people really like this. So a lot of the problems we've talked about are very episodic in nature. We talked about designing trajectories from time 0 to time final. What's the optimal thing? What's the optimal thing? If you just want to live your life-- presumably, you don't know exactly when you're going to die. You're going to maximize some long-term reward. You'd like it to be infinite, but realistically, the things that are going to happen to me tomorrow are more important to me than the things that are happening in the very far, distant future. So some people, philosophically, just like having this as a cost function for a robot that's alive executing an online policy, worrying about short-term things a little bit more, but still thinking about into the future. And that knob is controlled by gamma.
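(The cost being written on the board, reconstructed in symbols from the spoken description-- the exact notation here is a guess, but this is the standard discounted infinite-horizon form:)

$$J^\pi(x_0) \;=\; \sum_{n=0}^{\infty} \gamma^{\,n}\, g\big(x_n, \pi(x_n)\big), \qquad 0 < \gamma \le 1,$$

with an expected value wrapped around the sum when the dynamics are stochastic.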
Almost all of the RL tools can be made compatible with the episodic non-discounted cases, but culturally, like I said, they're almost always written in this form, so I thought it'd make sense to switch to that form for a little bit. So how do we estimate J pi of x, given that kind of a setup? Let's do the model-based case, just as a first case. Let's say I have a good model. I made it look deterministic here, but we can, in general, do this for stochastic things. Let me do the model-based Markov chain version first. So you remember, in general, we said that the optimal control problem for discrete states, discrete actions, stochastic transitions looked like a Markov decision process, where we have some discrete state space, we have a probability transition matrix, where Tij is the probability of transitioning from i to j. And we have some cost. And in the graph sense, I tend to write-- we tend to write the cost as-- instead of being [INAUDIBLE] action, we can just write it as the probability of-- the cost of transitioning from state i to state j. Good-- now, in the Markov decision processes that we talked about before, the transition matrix was a function of the action you chose. Your goal was to choose the action which made your transition matrices have the optimal-- choose the best transition matrices for your problem. In policy evaluation, where we're saying, we're trying to figure out the cost-to-go of running this policy, then the actions are-- the parameterization by action disappears again. It's not a Markov decision process. It falls back right into being a Markov chain. So it's a simple picture now. We have a graph; there are some probabilities of transitioning from each state to each state, because my actions are predetermined. If I'm in some state, I'm going to take this action based on pi. And each transition incurs some cost, and my goal is to move around the graph in a way that incurs minimal long-term cost in expected value. So that's a good way to start figuring out how to do policy evaluation. So now, in this discrete state transition matrix [INAUDIBLE] this form, I'm going to rewrite J pi as being a function of i, where i is from-- i is drawn from S, some-- it's one discrete state. And it's the expected value of g of i n to i [INAUDIBLE] plus 1. I should say another funny example I just remembered. So I gave an analogy of playing a game. You might look at the board and figure out what's the value of being in certain states. People think it's relevant in your brains too. So there's actually a lot of work in neuroscience these days which probes activity of certain neurons in your brain, and finds neurons that basically respond with the expected value of your cost-to-go function. They have monkeys doing these tasks, where they pull levers or blink at the right time, and get certain rewards. And there's neurons that fire correlated with their expected reward in ways that are-- they design an experiment so it doesn't look like something that's correlated with the action they're going to choose, but it does look like it's correlated with expected reward. And interestingly, when they learn-- as the monkeys learn during the task, you can actually see that they start making predictions accurately when they're close to the reward. They're about to get juice, and then, a few minutes later, they can predict when they're a minute away from getting juice.
And then, if you look at it a couple of days in, they're able to predict when they're a half hour from getting juice or something like this. I think the structure of trying to learn the value function is very real, especially if you're a juice-deprived monkey. So let's continue on here. How do you compute J pi, given this equation? J is a vector now. Well, first of all, the dynamic programming recursion lets us write it like this. J of ik is the expected value of taking one step-- the one step cost plus the discount factor times the future. The reason people choose this form for the discount factor is that the Bellman recursion just looks like that. You just put a gamma in front of everything. We can take the expected value of this with our Markov chain notation and say it's the sum over ik plus 1's of T ik, ik plus 1 times g ik, ik plus 1 plus gamma J pi of ik plus 1. Keep putting pi everywhere so we remember that. The expected value just is a sum over probabilities of getting each of the outcomes. So you can use that transition matrix. And in vector form, since I have a finite number of discrete states, I can just write that as J is g plus gamma TJ, where the i-th element of g is the expected one-step cost from state i. Keep my pi's everywhere. Everybody agree with those steps? OK. So what's J-- J pi? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Mm-hmm. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: I have to go to a vector form for J, so I just put it over here. I'm saying that the i-th element of the vector g-- this is my vector g now-- and the i-th element of my vector g has that T in there-- absolutely. Yep. So it's the expected value of g there. OK, so what's J pi? So lo and behold, policy evaluation on a Markov chain with known probabilities is trivial. It's this. It's almost free to compute. I could tell you exactly what my long-term cost is going to be just by knowing my transition matrix. That's something I think we forget, because we're going to get into models that look more complicated than that, but remember, if you know the transition matrix, it's trivial to compute the long-term cost for a Markov chain. So let me just show you why that's relevant, for instance. All right, so I told you about this the day the clock stopped. I kept telling you about it [INAUDIBLE] And for the record, do you know what happened that day? The clock physically stopped. Michael debugged it. There was a little piece of paint that blocked it at exactly 3:05 the day I was giving that lecture. That was a hard one to catch, to be fair. So one of my favorite models of stochastic processes in discrete time, for instance, is taking our rimless wheels, our passive walking models, and putting them on rough terrain. So this is the rimless wheel, where now, every time it takes a step, the ramp angle is drawn from some distribution. Now, in real life, maybe you don't roll rimless wheels on that kind of slope, but the contention in that paper was that actually, every floor is rough terrain, and you actually have to worry about the stochastic dynamics all the time. And if you want to take-- you can take your compass-gait model and put it on rough terrain, and you could take the [? kneed ?] model and put it on rough terrain. These are the passive things, so they can't walk on very much rough terrain before they fall down. But they can. They can walk on rough terrain. And then you want to ask complicated questions about this, maybe. You want to say, given my terrain was drawn from some distribution, how far should I expect my robot to walk before it falls down?
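(A minimal sketch of that "trivial" computation, added for the reader. Solving J = g-bar + gamma T J gives J = (I - gamma T)^(-1) g-bar, a single linear solve; the function and variable names here are placeholders:)

```python
import numpy as np

def evaluate_policy(T, g, gamma=0.99):
    """Exact policy evaluation on a Markov chain.

    T[i, j] = probability of transitioning i -> j under the fixed policy.
    g[i, j] = cost incurred on that transition.
    Solves J = g_bar + gamma * T @ J, i.e. (I - gamma*T) J = g_bar."""
    g_bar = np.sum(T * g, axis=1)                 # expected one-step cost per state
    n = T.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * T, g_bar)
```

For the rough-terrain question that follows, a hedged variant of the same idea: give every non-fallen transition cost 1 (one step taken), make the fallen state absorbing with zero cost, set gamma to 1, and restrict the solve to the non-fallen (transient) block of T; the same linear solve then returns the expected number of steps before falling from each state.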
That sounds like a hard question to answer. It's trivial to answer, actually. So this equation is exactly what drove that work. We built the transition matrix on the [INAUDIBLE] map, saying, given it's passive-- there's no actions to choose from-- given it's passive, what's the probability of being in this new state, given the terrain's drawn from some distribution and given it's at a current state. The cost function was 1 if it keeps taking a step, 0 if it fell over. And you compute this, and it-- what does it tell you? It tells you the expected number of steps until you fall down-- period. One shot. Simple calculation. It's so simple. The bad part is you have to discretize your state space to do it. But if you're willing to discretize your state space, then you can make very long-term predictions about your model with-- just like that, to the point where we are trying to say to people who talk about stability-- people are coming up with metrics for stability in walking systems. We say, why not just do this? Why not actually compute, given some model of the terrain, how many steps you'd expect to take until it falls down? That's what you'd like to compute, and it's not hard to compute, so you should do that. So that's a clear place where policy evaluation by itself-- there's lots of cases where you have a robot that's doing something, it's got a control system, and you just want to verify how well it works. If you're trying to verify it in expected value, it's easy. Just do the Monte Carlo-- or sorry-- the Markov chain thing. But what happens if I don't have a model? That's what we're supposed to be talking about today. Can we do the same thing if we don't have a model? I had to know T. I had to know the-- all the transition probabilities in order to make that calculation. What happens if we don't have a model-- we just have a robot we can run a bunch of times? How do you do it? What would you do, if I asked you-- I say, I like your robot. I want to know how long it tends to run before it fails. How would you do it? How would you do it? There's an easy answer. You could run it a bunch of times and take an average. We know that these value functions are state-dependent, so it's a little more painful than that. Technically, you're going to have to run it a bunch of times from every single initial condition, but you could do that. And actually, that's not totally crazy. So I want to know how much cost I'm going to incur-- in the case of the walking robot, how many steps it's going to take on average before it falls down. First thing to try-- don't have to know the transition matrices-- just run it a bunch of times. So if I say Jn i, the n-th time I run my robot, I incur-- I just keep track of the cost. I keep track of how many steps it took. I keep track of how much gold it found-- whatever your cost function is. The thing I'm trying to estimate is the expected value of that long-term cost. But any one trial-- I get this thing out as a random variable. I could take the expected value of the random variable. I can make a nice estimate of J pi i by just running it a bunch of times and taking-- doesn't sound very elegant, but it works. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: What? Sum over k. Good. Thank you, thank you. Good. Sum over k. [INAUDIBLE] you've corrected both simultaneously. OK, so a couple of nuances here-- so first of all, I have an infinite horizon cost function. So this is only going to be an approximation, because I'm not going to run this forever 10 times.
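(The Monte Carlo estimate being described, written out-- a reconstruction of the board, with the truncation made explicit since each trial is necessarily finite:)

$$\hat{J}^\pi(i) \;=\; \frac{1}{N}\sum_{n=1}^{N}\;\sum_{k=0}^{K} \gamma^{k}\, g\big(i^{(n)}_k,\, i^{(n)}_{k+1}\big), \qquad i^{(n)}_0 = i,$$

where truncating at a large K introduces an error on the order of gamma to the K, which is the point of the next remark.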
I'm going to run it for some finite duration 10 times. So in practice, I'm actually going to run it out to something that's a big number. But that's OK, because this discount factor means that a finite trial approximation should be a pretty good estimate of the long term. And if I run it from initial condition i long enough, then I should be able to take an average and get the expected [INAUDIBLE]. There's lots of ways you can do that kind of thing if you don't want to do all the bookkeeping of remembering where you've been. You don't have to remember all of these things. You can do an online version, incremental. You can say that my J hat is just-- my J hat pi is just J hat pi-- I can guess an initial J hat, and then, every time I get a new trial, I'll just move my estimate towards that trial. And this is actually an online version that approximates that in batch. This is just a standard [INAUDIBLE]; you can do it more carefully. I could choose these to be a perfect weighting, but in general, this is actually a pretty good approximation, as the number of trials goes up, to this sum, without keeping track of every J. Every time, I'm just going to do a moving average towards the new point, and by changing a small amount, it will converge. This is a low-pass filter. That's another way to say it. It's a low-pass filter that tries to get me to-- the mean of the J samples I'm getting in. So that gets rid of a little bit of bookkeeping. There's other things you can do. Now, here's a really cool one. Think about this, and tell me if you think it's possible. I'm going to tell you in a minute how that-- if I have two policies-- say, pi 1 and pi 2-- I can learn about one by executing the other. Do you believe that? It's going to take a little bit more machinery, but just to see where we go. Say I have two control systems. I have the one that is risky, and I ran it once, and the thing fell down. So I don't actually want to run that 100 times. I might break my robot. Let's say I've got a different policy that I like a little better. It's a little safer to do evaluations on. Can you imagine running the safe policy, let's say, to learn about the risky policy? That's a pretty cool idea, right? What is wrong with this? Typically done with a q function. I'll show you how to say that in a second. So there's lots of ways you can do that. You can run trials, you can keep averages, you can try to learn about one policy by running the other. The fundamental idea here is that it requires stochasticity. You need that pi 1 and pi 2 take the same actions with some non-zero probability. Pi 2 might be my risky policy, and every once in a while, with some small probability, it takes a safe action, let's say. And pi 1 is my safe policy, but every once in a while, it takes a risky action. As long as these things have some non-zero overlap in probability space, then I can actually learn about what it would have been to do the more risky thing by taking the more conservative thing. So policy evaluation's a really nice tool. But this feels slow. The Monte Carlo thing feels slow-- feels like I've got to run a lot of trials from a lot of different initial conditions. And now you tell me what the cost-to-go is from this initial condition, and let's say I try this initial condition. What do I do? Do I just have to start over and run trials from the get-go again? Well, that doesn't seem very satisfying. Approach number two is bootstrapping. I call it bootstrapping.
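(The incremental, low-pass-filter version of the estimator described above, in symbols-- again a reconstruction of the board: after the n-th trial from state i returns a total cost J_n(i),

$$\hat{J}^\pi(i) \;\leftarrow\; \hat{J}^\pi(i) + \alpha\,\big[J_n(i) - \hat{J}^\pi(i)\big],$$

with a small step size alpha. Bootstrapping, coming next, keeps exactly this form but replaces the full-rollout target J_n(i) with a one-step target.)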
If I learned about the cost of being in this state, and I spent a long time learning about the cost-to-go of being in this state, and then I go back and ask what's the cost of being in this state, if this one transitions into this one, then I should be able to reuse what I learned about the state to make it faster to learn about that state. I didn't really plan to do it with the steps on the floor, but I hope that makes sense. Maybe I could do it on a graph. That's better, yeah? Let's say I figured out what J pi of this state is-- because I went from here, and I went around, and I did my stuff, and I learned pretty much what there is to learn about here [INAUDIBLE] policy. And now I want to know about this state. Well, I should be able to reuse the fact that I've learned about that to help me learn this more quickly-- reasonable idea. Using your estimate to inform your future estimates is an idea about bootstrapping-- reusing, building on your current guess to build a better future guess. And here's how it could look in the optimal control policy evaluation sense. What if I said my online rule used to be this, where I've got some estimate J pi hat? I'm going to run from 0 to some very large number to estimate this, and then make the update. What if, instead, I just took a single step and I did this update? Does that make sense to you? Let's say I ask you to guess the long-term cost here. Instead of running all the way to the end, what if I just run a single step and then use as my cost my estimate for this, the cost of going here plus gamma times the cost of doing all that? It's just using this one-step cost as an estimate for when I was going J-N of ik plus 1-- or sorry, J-N of ik. Does that make sense? If I find myself in a lot of different initial conditions, I could take one step and then use my guess for the cost-to-go from that step to the rest of the time. Now, this starts feeling a lot more appealing, actually, because now I don't have to think-- this actually got rid of that whole episodic problem. I don't have to go in and run some fixed length trial to approximate the long-term thing. I just take a single step, use this as my estimate, and I can just keep moving through my Markov chain. I don't have to ever reset. And potentially, if I visit states often enough-- I won't get into all the details-- roughly, it involves that Markov chain being-- having ergodicity. You have to be able to visit all the states with some non-zero probability as you go along. But if you visit the states-- each state infinitely often is roughly the thing-- then this actually will converge to J pi of ik. So the ergodicity is actually bad news for my walking robot, because if my walking robot falls down, I'm going to have to pick it back up if I want to get ergodicity back. There are robots that don't visit every state arbitrarily often. But in the Markov chain sense, that doesn't seem like such a bad assumption. And if I'm willing to take my robot when it falls down and pick it back up-- which, by the way, is about how I spent the last year of my PhD-- then actually, I can get ergodicity back. OK, cool-- so that makes sense, right? I'm going to use my existing estimate of the cost-to-go to bootstrap my algorithm for estimating the cost-to-go. Yeah? AUDIENCE: Does the transition [INAUDIBLE] come into play at all? RUSS TEDRAKE: It does, because I'm getting this from sampled data. So this is actually drawn. The expected value of this update does the right thing.
So this update doesn't have it, because this is from real trials. But you should think about this as a sample from the real distribution. Now, that's actually a really good way for me to lead into the next step. These algorithms tend to be a lot faster in practice than those algorithms-- not only are they a little bit more elegant, because you don't have to reset and run finite-length trials-- they tend to be a lot faster. And the reason for that is this here is really the-- it has the expected value of future costs built into it. Let me say that in pictures. There's two ways I could estimate this. I could get here and then I could take a single path. Well, this one is not rich enough for me to make my point here, but OK-- so I could take a single path through here and get a single sample estimating the long-term cost. But if I instead use J pi, J pi is the expected value of going around and living in this. So by using this update to bootstrap, if I just take one step from here, I get for free the expected value of living over here for a long time. Does that make sense? So J is building up a map of the expected value, because it's visiting things often and-- I drew this online algorithm with this low-pass filter. It's basically doing an expected value calculation. By using my low-pass filtered [INAUDIBLE] in here, it's also-- it's getting the reward of-- maybe you could just say it's filtering faster. That's actually not a bad way to think about it. I've already got this thing filtered, so this one filters faster. That's a pretty reasonable way to think about it, actually. OK. So this quantity here in the brackets, this whole guy right here, it's a very important quantity; it comes up a lot. It's called the temporal difference error. It's the difference that I get from executing my policy for one step and then using the long-term estimate, compared to what I have as my long-term estimate-- temporal difference error. Now, if the system was deterministic and I had already converged, then that temporal difference error should be 0, because this thing should exactly-- predict the long-term thing. If the system's stochastic, then this temporal difference error should be 0, on average. It's comparing my cost-to-go from ik, given my 1 step plus the cost-to-go from ik plus 1. So those things-- you want those to match, right? You want that my 1 step plus long-term prediction should match my long-term prediction, if things are right. They should match in expected value. So that thing's called the temporal difference error, and it's an important quantity in reinforcement learning. It makes sense to write that down. A reasonable estimate for Jn would be to take one step and then use the other one, but there's-- that seems a little bit arbitrary. Why don't I just do one step and then use my lookup? This is the way Rich says it. Why not do two steps and then use my value function to look up? Or three steps-- why not take, similarly, three steps, and then use that to look it up-- or 14 steps or something like that. Why should I arbitrarily pick this one piece of real data and then look ahead, instead of two real pieces of data and my look ahead? Well, there's no reason that you actually have to pick like that. Can I just write the inside part here? I could have estimated Jn ik as g ik, ik plus 1, plus gamma g ik plus 1, ik plus 2, plus gamma squared J pi at ik plus 2. That's a perfectly good approximation too. I could have done three-step. I could have done four-step.
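(Collecting those board quantities in symbols-- a reconstruction from the spoken description. The temporal difference error is

$$\delta_k \;=\; g_{i_k,\, i_{k+1}} + \gamma\,\hat{J}^\pi(i_{k+1}) - \hat{J}^\pi(i_k),$$

and the general p-step look-ahead estimate is

$$J^{(p)}(i_k) \;=\; \sum_{m=0}^{p-1} \gamma^{m}\, g_{i_{k+m},\, i_{k+m+1}} \;+\; \gamma^{p}\,\hat{J}^\pi(i_{k+p}),$$

so the one-step update above is p = 1, and Monte Carlo is the limit as p goes to infinity.)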
AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Say it again. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Yeah-- I haven't been writing it that way because-- yeah. I would have been fine writing it that way. At some point, I decided to not write pi there, and I'll just stay consistent by not writing that. OK, so Rich [INAUDIBLE] came up with a clever algorithm that's-- basically allows you to seamlessly pick between the one step, two step, three step, n-step look ahead with a single knob. And it works. It's called the TD lambda algorithm. And the basic idea is that you want to combine a lot of these different updates into a single update. It sounds really bizarre [INAUDIBLE] so let me just say it. Let's say I call my estimate Jn, with M-step look ahead, of ik: the sum from m equals 0 to M of gamma to the m, g ik plus m, ik plus m plus 1. Big M. This is a big M. Everything else is little m's. That was the one step. This is the two step. In general, this is the M-step look ahead. So it turns out there's actually an efficient way to compute this. Let's call it something else-- p, p, p. This one takes a little time to digest. But it turns out it's pretty efficient to calculate a weighted sum of the one step, two step, three step-- on to forever-- sum of look-aheads, parameterized by another parameter, lambda. So when lambda's 1, this thing turns out to be basically doing Monte Carlo. And when lambda's 0, this thing basically is doing just the one-step look ahead. And when lambda is somewhere in between, it's doing some look ahead using a few steps. Does that make sense at all? It's a lot of terms flying around here. Even if you don't completely love that, just think of my estimate, J lambda, being a weighted sum-- basically something where, if lambda is 1, it's going to be the very long-term look ahead. If lambda is 0, it's going to be the very short-term look ahead. And there's a continuum in between, a continuous knob I can turn to say how far I'm going to look ahead into the future as my estimate that I'm going to use in that TD error. And there's a whole gamut in between. AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Can I say it again? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: Or can I read it? 1 minus lambda is just the normalization factor. p equals 1 to infinity, lambda to the p minus 1, J p-- where this is the p-step look ahead. So this is a very famous algorithm-- the TD lambda algorithm-- which allows you to do policy evaluation without knowing the transition matrix, doing bootstrapping or Monte Carlo in a simple single framework with just a parameter lambda to evaluate. So it's a tweak. And it turns out it uses an eligibility trace, just like in reinforce. Did you get the eligibility traces, John? AUDIENCE: [INAUDIBLE] RUSS TEDRAKE: OK. Well, that's fine. So it turns out to have a really simple form. I'll write it, because it's so simple, but it'll also be in the notes, if you want to spend more time with it here. OK, two observations-- first of all, this looks no harder [INAUDIBLE] than the original version I had, pretty much. It just requires one extra variable, which is this eligibility trace. What does the eligibility trace look like? OK. It starts off at 0. There's an element for every node in the graph. Every time I visit that node in the graph, it goes up by 1, and then it starts forgetting, based on gamma and lambda as its discount factor. And then, the next time I visit it, it goes up by 1. If I visit it a lot, it can build up like this. It's just a trace of memory of when I visited this cell.
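(A minimal tabular sketch of the TD(lambda) update just described, added for the reader-- assuming discrete states and a hypothetical `step` function standing in for running the fixed policy on the real system for one transition; the names and default values are placeholders:)

```python
import numpy as np

def td_lambda(step, i0, num_states, gamma=0.95, lam=0.7,
              alpha=0.1, num_steps=100000):
    """Tabular TD(lambda) policy evaluation from sampled transitions.

    step(i) executes the fixed policy for one step from state i and
    returns (next_state, cost) -- no transition model required."""
    J = np.zeros(num_states)   # initial guess for the cost-to-go (can be anything)
    e = np.zeros(num_states)   # eligibility trace, one element per state
    i = i0
    for _ in range(num_steps):
        i_next, g = step(i)
        delta = g + gamma * J[i_next] - J[i]   # temporal difference error
        e *= gamma * lam                       # every trace forgets exponentially...
        e[i] += 1.0                            # ...and the visited state bumps up by 1
        J += alpha * delta * e                 # update all states, scaled by eligibility
        i = i_next
    return J
```

With `lam = 0` this collapses to the pure one-step bootstrapped update, and as `lam` approaches 1 it behaves like the Monte Carlo averaging-- the single knob described above.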
Does that make sense, these dynamics here? Every time I visit the cell, it goes up by 1, and always, it's going down exponentially. It turns out, if you just remember that-- the way that you've visited cells in the past, decayed by this lambda-- as well as gamma-- but this lambda-- which is the new term-- then it's enough to [INAUDIBLE] this trivial update here, scaled by how often I visited that cell recently. It is enough to accomplish this seemingly bizarre combination of short and long-term look-aheads. So it's a really simple, really beautiful algorithm. Just remember how-- when I visited these cells, and then make this TD error update scaled by that, and I've got the TD lambda algorithm. And people can prove that TD lambda converges-- that, with the TD lambda update, J hat will go to J pi from any initial conditions. So you can just guess J randomly to begin with. And if I run it, as I visit all these states arbitrarily often-- it still makes that ergodicity assumption-- then I'll get my policy evaluation out. That's really cool-- simple algorithm. Now, what people also realize is that, when you start out, and J is randomly initialized, then it makes a lot of sense to set lambda close to 1, because bootstrapping has less value when I just start out. My estimate is bad everywhere, so why should I use my bad estimate as my predictor? So you start off-- you keep lambda close to 1. It does very long-term. It does more Monte Carlo style updates. And as this estimate starts converging to the good estimate, you start turning lambda down. And with a cleverly tuned timing of lambda, you can get very fast convergence compared to the Monte Carlo algorithms. You more and more bootstrap. Excellent. Well, time's up. The clock is still moving today, so I have to stop. So the really cool thing-- we only talked about policy evaluation today. The next step is, how do you do these value methods to improve your policy? And it turns out, in many cases, if you make a current estimate of your value function and then, on every step, you try to do the greedy policy-- the epsilon greedy policy, where you mostly exploit your current estimate of the value function-- then you can still prove that these things-- at least on the grid, the Markov chain case-- can get to the optimal value function and the optimal policy. So we'll finish that up next time and get into the more interesting-- get rid of these Markov chains to try to get back to the real world. Good. OK, see you Tuesday. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_34_Laura_Schulz_Childrens_Sensitivity_to_Cost_and_Value_of_Information.txt | LAURA SCHULZ: So what you've heard by now is that the hard problem of cognitive science turns out to be the problem of commonsense reasoning. Our computers can drive. They can perform fabulous calculations. They can beat us at Jeopardy! But when it comes to all of the really hard problems of cognitive science, there's only one organism that solves them, and that's a human child. And those are problems of face recognition, scene recognition, motor planning, natural language acquisition, causal reasoning, theory of mind, moral reasoning. All of those are solved in largely unsupervised learning by children by the age of five. And those are the things our computers don't do well. And that is largely because the problem of human intelligence, common sense intelligence, is a problem of drawing really rich inferences that are massively underdetermined by the data. So to make that point really clear, I'm going to give you all a pop intelligence quiz, OK? So here's the pop intelligence quiz. Can I see a show of hands, please-- how many of you think that I have a spleen? Can I see a show of hands? Excellent. How many of you would care-- keep your hands down if you're an M.D. But other than that, how many of you would care to come up here and diagram, for the class, a spleen, and explain what it is, where it is, and what its exact function is in the human body? Anyone? How many of you have met me before? A few of you. But most of you, without knowing anything about spleens, and not knowing anything about me, are nonetheless extremely confident that I have one. So you have a lot of abstract knowledge in the absence of really much in the way of specific, concrete facts. OK, let me give you another problem. Of course, it's a kind of classic one from-- by the way, for those of you who are curious, that's a spleen. All right, classic-- these problems crop up in every aspect of human cognition, all right? What's behind the rectangle? AUDIENCE: [INAUDIBLE] LAURA SCHULZ: Right. You all know. You can't articulate it, but you know. And the fact that there are infinitely many other hypotheses consistent with the data doesn't trouble you at all, doesn't stop you from converging, collectively, on a single answer, which means there has to be a lot of constraints on how you're interpreting this kind of data. Here's another one. Complete the sentence. AUDIENCE: Very long neck. LAURA SCHULZ: Very long neck-- could be, or temper, or flight to Kenya. There are many things that it could be. In this case, it looks like a frequency issue. Oh, well, "neck" is a very common word. Others aren't. But if I said, "giraffes are really common on the African--" you would say "savannah," not "television," OK? So you're using a lot of rich information to take a tiny bit of data and draw rich inferences. And that is the problem, and the hard problem, of common sense intelligence, and I think, a real dissociation between what our computers do and what our children do.
Human intelligence uses abstract structured representations to constrain the hypotheses and make these kinds of outrageous inferences that we shouldn't be able to make. That's all well and good, but where do these abstract structured representations come from? And there's only two possibilities-- we're born with them, or we learn them. Liz Spelke has already told you a lot about the reasons to think that we are born with many of them, and that's true for anything that might be stable over evolutionary time, that might emerge early in ontogeny and broadly in phylogeny. And that's true for many aspects of folk physics, folk psychology, causal reasoning, navigation, number. But there's a lot of other things you know. Shoes, ships, sealing wax-- basically everything else-- they are not plausibly innate. And so how do you learn them? So Piaget, the founder of our field of developmental psychology, says you build these up from experience. You build them up starting with sensory motor representations very, very gradually. You progress through a lot of concrete information until finally, somewhere around the age of 12, you get to abstract representations of the world. But it turns out that just like you and your spleens, children have a lot of abstract knowledge before they have much of this concrete information. They know, actually, almost nothing-- even less than you know, it turns out-- about anatomy and biology. But they know all of these kinds of things. Animals have insides. Similar kinds of animals have similar insides. Plants and objects don't. Removing those insides is a bad idea usually. They can go on and on without really understanding anything about anatomy or biology. And the same is true for you. If I push you hard on all kinds of things-- how do scissors really work-- you know, most of you would be like, ahh, and [INAUDIBLE] pretty quick, OK? And we know this. So we have these intuitive theories that seem to constrain our hypothesis space. But it's a really hard chicken and egg problem, because we've said we need these rich abstract theories to constrain the interpretation of data. But how do we learn them if we don't have concrete information? And nonetheless, some of them are learned. We're going to return to that at the end of the talk. And really, I'm going to do that to set up Josh and Tomer, who are going to talk about that. But first, what I'm going to talk about is-- you know, OK, why am I talking about this as an intuitive theory? Where is this argument coming from? I'm interested in learning. And it's a research program that emerged against the backdrop of two revolutions in our understanding of cognitive development. One was the infancy revolution. You've heard a lot about it, so I'm going to go very quickly. Babies, it turns out, know a lot more than we thought about objects, and about number, and about agents, and their goals, and their intentions. It's not just infants though. It turns out that very young children, preschoolers, also represent knowledge that is not plausibly innate in ways that are abstract, that are coherent, that are causal, that support prediction, and intervention, and explanation, and counterfactual reasoning in ways that seem to justify referring to them as intuitive theories. And together, these two revolutions, the infancy revolution and the revolution in our understanding of early childhood, dismantled Piagetian stage theory. There was never a time-- there is never a time-- in development when babies are only sensory motor learners.
There is never a time when there is not some level of abstract representation going on. But fundamentally, neither of these revolutions was about learning per se. And I think I can make that point most clear by pointing to a popular book that came out at the time. The subtitle here is Minds, Brains, and How Children Learn. But in fact, if you look in the book, this was a publisher's title. There's nothing about brains. And there's very little about learning. The chapter titles are what children know about objects, what children know about agents. And I can say this with great reverence and some authority, because the first author's my thesis advisor. And the book came out in 1999, which is the year I started graduate school. So literally and metaphorically, this is where I began. I began with this metaphor of the child as scientist. And it's a really problematic metaphor, because science is a historically and culturally specific practice that is practiced by a tiny minority of the human species and is difficult even for us, right? So it seems a really odd place to look for a universal metaphor for cognitive development. But science has this peculiar property, which is that it gets the world right. And if you really want to understand how new knowledge is possible, how you could get the world right, you might want to understand how scientists do it, what kinds of epistemic practices might support learning and discovery. And the answer to that is both that we do and do not know, which is to say we can say a lot of things about what scientists do. Here are some of them. They'll all be familiar to you. They would be familiar to you if you were a physicist, if you were in aero-astro, if you were in paleontology. These are the kinds of scientific practices that cut across content domains and arguably define what science is, which is to say, if you did all of these things, you couldn't necessarily do science. Science requires bringing these inferential processes to bear on really rich, specific, conceptual representations of individual content domains. But arguably, if you had all of that rich, specific content knowledge and you didn't do these things, you couldn't learn anything at all. These are the kinds of epistemic practices that seem fundamental to inquiry and discovery. And the argument that I've made in my research program is they are fundamental not just in science, they are fundamental in cognitive development. These are the processes that support learning and discovery. And there's good evidence that each and every one of them emerges in some form in the first few years of life. And that is because these are the only rational processes we know of that can solve the hard problem of learning, which is exactly the problem of how to draw rich abstract inferences rapidly and accurately from sparse, noisy data. I said we could characterize these practices both formally and informally. And indeed, as you've heard, I think, from some of our computational modeling colleagues, for each and every one of these practices, we can begin to characterize something about what it means to distinguish genuine causes from spurious associations or to optimize information gain. But with all due respect to my computational modeling colleagues, and much as we want really simple models-- Hebb's rule, Rescorla-Wagner, Bayes' law-- that would capture it, none of these do justice to what children can do. Because children can do all of these things.
And we don't yet have a full formal theory of hypothesis generation, inquiry, and discovery. That remains a hard problem of cognitive science. But it's a problem to which I think our theories should aspire. Because there is good empirical data that this is the kind of learning that humans, including even very young children, engage in. So normally, at this point, what I would do is say, now I'm going to show you a few examples of this from my research program. But the talk I'm giving here is a sort of funny throwback talk in some ways. What I was asked to talk about today was the child as scientist. And that was a research program from a few years ago. And you know, at that point, I was a junior professor. And you know, when you're a junior professor, like when you're a graduate student or post-doc, it's all idealistic science for the sake of knowledge, pure science. And then you get tenure. And it's all grants, and money, and administration, and allocation of resources. And in the years since, I have started to think not just about the pure science of learning, but about the cost associated with gaining information and the trade-offs between those costs and rewards. So I'm going to take these same practices now, and situate them in a world, a real world, that has certain kinds of trade-offs in how you think about information and talk, along with the child as scientist, about what those costs do and how they are also, in themselves, a source of information about the world. So I'm going to talk about this as sort of inferential economics. Children know information is valuable. I'm going to show you a couple of examples of how they reason about it. And children selectively explore in ways that support information gain. So I'm going to show you some old work, but I'm also going to throw in a few new studies, because I can't resist. But information is also costly. And the costs themselves are informative in a variety of ways. I'm not necessarily going to get through all of these studies, although I might try to. But I want to give you a kind of feel for the kinds of things children can do. All right, so let's start by talking about a really basic problem. It's a problem that's basic to science, but it's also a problem basic to human learning, which is the problem of generalization. How do you generalize from sparse data? In science, we do this all the time. We have a small sample. We want to make a claim about the population as a whole. And of course, we can use feature similarity and category membership to say things that look the same or belong to the same kind are likely to share properties. So if some of these Martian rocks have high concentrations of silica, maybe they all do. If some of these needles on Pacific silver fir trees grow flat on the branch, maybe they all do. But in science, we can also do something a little more fussy and suspicious. We can say, well, you know, it kind of depends on how you got that sample of evidence, right? If you randomly sampled from the population, yeah, sure, the properties, you can generalize. But if you cherry-picked that data in some way, maybe the sample isn't going to generalize quite as broadly. So do all Martian rocks have high concentrations of silica or just the dusty ones on the surface? Do all Pacific silver fir needles lie flat or just those low on the canopy, right? These are the ones that are easy to sample from. How do I know how generalizable the property is?
And if I think that you cherry-picked your sample, I might constrain my inferences only to things near the ground. So how far you're going to extend a generalization in science depends on whether you think that the sampling process was random or selective. And we wanted to know whether this was true for babies as well. So this is how we asked. We showed babies a population, in this case, of blue and yellow dog toys. They're in a box. The box is transparent, has a false front so it stays a less stable representation of what looks like a lot of balls. And we're going to reach into that box. And we're going to pull out-- there are many more blue balls in this box than yellow balls. We're going to pull out three blue balls one at a time and squeak them-- and squeeze them-- and they're going to squeak. And then we're going to hand the baby a yellow ball from the same box. And the question is, does the baby squeeze the ball and expect it to squeak? Well, there's nothing very suspicious about pulling three blue balls from a box of mostly blue balls. And this has a lot of feature properties in common. So it looks like the others. We predict that children should generalize. They should try squeezing this ball and should squeeze often. And the question is, what happens if you do exactly the same sample in exactly the same way from a different population? Now, it is very unlikely that you sampled three blue balls from a population of mostly yellow balls. In this case, it's much more likely that you were sampling selectively. So maybe only the blue balls had the property. Yeah? AUDIENCE: And they can see the population? LAURA SCHULZ: They can see the population-- transparent box, transparent front. So they can see the population. And if children understand that it's not just about the property similarity but something about how that evidence was generated, then, in this case, children should say, well look, you just looked like you were cherry-picking your sample. Maybe it doesn't generalize predictions that fewer children should try squeaking and children should squeeze less often. So I'm going to show you what this looks like. By the way, the yellow one has that funny thing at the end so that children could do something else with the ball, right? So they can bang it, or throw it around, or other things like that. So let me show you what it looks like. Kids are always going to see three squeaky blue balls. They're always going to get a yellow one. But you'll see that they do different things depending on whether they think-- [VIDEO PLAYBACK] LAURA SCHULZ: --the evidence was randomly sampled and possibly generalizable or not. So the child at the top is squeezing, and squeezing, and squeezing, and squeezing. [END PLAYBACK] So these were-- this was true both of the mean number of squeezes and the number of individual children who squeezed at all. I'm just going to show you the mean number of squeezes. But what you'll see is that children are much more likely to squeeze, and squeeze persistently, in the condition where the evidence looks like it was randomly sampled than selectively sampled. But what you could worry about here is children are sensitive to something about the relationship of the sample and the population. But maybe they will just generalize from a majority object to a minority, but not the reverse. Maybe they won't generalize from a minority object to the majority. 
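Before the control conditions, it helps to see the arithmetic behind "improbable sample." A minimal sketch, using 3:1 proportions as an illustrative stand-in for the actual boxes, and sampling with replacement for simplicity:

def likelihood_three_blue(p_blue):
    # Probability of drawing 3 blue balls in a row under random sampling
    # from a box whose proportion of blue balls is p_blue.
    return p_blue ** 3

print(likelihood_three_blue(0.75))  # mostly blue box: ~0.42, unremarkable
print(likelihood_three_blue(0.25))  # mostly yellow box: ~0.016, suspicious
print(likelihood_three_blue(0.75) / likelihood_three_blue(0.25))  # 27.0

Under random sampling, the mostly-blue box makes the three observed draws about 27 times more likely; from the mostly-yellow box, a rational learner should shift belief toward selective sampling and restrict the generalization to blue balls-- which is the pattern the squeezing data show.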
So the worry is that they don't really care about whether the evidence was randomly sampled or not, they just care about that aspect of the sample and population. So we ran a replication of the yellow ball condition. Again, we're going to pull an unlikely sample from that box, three blue balls in a row. And we're going to compare it with a sample that's not that improbable. You could easily randomly sample just one blue ball from the box. So in this case, children are going to see much less squeezing, right? They're only going to see one blue ball squeezed. And we squeeze it both once and three times, but it's only one blue ball in two different conditions. And the prediction there is that even though the children themselves are seeing much less squeezing, they should say, well, that's not an improbable sample. And they, themselves, should squeeze more. And that's exactly what we found in both of those conditions. It's graded, by the way. If you do two balls, they're intermediate. And if it's a model-- well, not going to talk about-- but yeah, what happens if you just pour them upside down and drop them? Now, this is a really improbable sample, as I said-- three blue balls from a mostly yellow box. But you've just given positive evidence that you shook the box, and they just happened to fall out. And they don't know we're MIT, and we can do sneaky technological things like have a trap door. So in this case, it's an improbable sample, but it was randomly generated. And the prediction is, in this case, the babies themselves should squeeze more. Because, as I say, if whatever balls you pour out squeak, probably everything squeaks. And indeed, they do. Indeed, they do. All right? So 15-month-old babies' generalizations take into account more than category membership and the perceptual similarity of objects. They make graded inferences that are sensitive both to the amount of evidence they observe and to the process by which that evidence is sampled. Is that clear? All right, let me show you another example of sort of child as scientist. It's going to start with a hard problem of confounding that we all have, which is that we are part of the world. So in one-offs, when things go wrong, we may not know if we were responsible or the world was responsible, right? This is a chronic problem in relationships, right-- you or me? So it's a hard problem of confounding, and you might need some data to disambiguate it. So here we're going to give babies a case where they cannot do something. And the question is, can we give them a little bit of data to unconfound that problem and convince them either that the problem is probably with the toy or the problem is with themselves? And the argument is, if they think that it's themselves that's the problem-- it's the agent state and not the environment state-- they should hold the toy constant and change the agent. But if they think it's the toy, then they should just go ahead and reach for another toy. So in both cases, they're going to have access to another person, their mom. And they're going to have access to another toy. And so the question is, what do they do if we give them minimal statistical data to disambiguate these, OK? So this is the setup. We're going to show the babies two agents. In one condition, I am going to succeed one time at making that toy go and fail one time. And Hyowon Gweon, my collaborator on this project, is also going to fail and succeed once. So this looks like this toy has maybe faulty wiring or something. It works some of the time, not all the time.
It's just not a great toy. The babies can have another toy. The parents are going to be there. If they think it's the toy, they should change the object. In the other condition, Hyowon is always going to succeed, which is generally true in my experience. And as is always true of my experience in technology, I am always going to fail. And in this case, the children should conclude that there's something wrong with the person. And if that's the case, they should hold the object constant and change the agent. This is what it looks like. [VIDEO PLAYBACK] - [INAUDIBLE] LAURA SCHULZ: We've lost sound. Well, there's audio here, but [INAUDIBLE] we are going to want that later. - Cool. What happened to my toys? LAURA SCHULZ: In any case, we're just showing them the data at this point. And the babies are going to get the toy in each condition. One toy is on the mat. By the way, there are lots of individual differences in any two sets of clips between the exact positioning of the parent, the child, the toy. All of these were coded blind to condition for all of these other variables, to make sure those were evenly matched across conditions, and they were. So what you're going to see now though is that in the condition where the babies think it is probably the toy, they engage in a very different behavior than when they think it is probably the agent. - [CRYING] - [INAUDIBLE] [END PLAYBACK] And that's what we found overall. The distribution, overall, of children's tendency to perform one action versus another differed across conditions depending on the pattern of data that they observed. So 16-month-olds track the statistical dependence between agents, objects, and outcomes. They can use minimal data to make attributions. And those attributions help them choose between seeking help from others or exploring on their own. Clear? OK. I've just shown you kids' sensitivity to the data that they see. But of course, data isn't handed out all the time. At least disambiguating data isn't handed out. One of the really important, hard things that you have to do if you want to learn is sometimes actually figure out what data would be informative and go get it. And that is a really characteristic thing about science. And the question is, is it, in some sense, a characteristic thing about common sense? So that's what we're going to ask here. We're going to jump to much older children here. These are four and five-year-olds. And we're going to give them a problem where instead of showing them the disambiguating data, we're going to ask if the kids themselves will find it. So what we showed children-- when you were little, you possibly played with some beads that snapped together and pulled apart. These are like toddler toys. So we gave them these snap together beads. They're each uniquely colored. We place each bead, one at a time, on a toy. And in one condition, the toy plays music to each bead you put on. In the other condition, only half the beads did. So the only difference between these two conditions is basically the base rate of the candidate causes. One works for every bead, the other only works for some of the beads. And then we took all these training toys away, and we showed the children either a pair that we had epoxied together-- it was stuck. We tried to pull it apart, we showed we couldn't. The children tried to pull it apart, they couldn't. It's a stuck pair of beads, or it's an ordinary, separable pair of beads.
And then, as a pair, we placed each pair, one at a time, on the toy, and the toy played music. In principle, this evidence is always confounded. One bead in each pair might be the responsible party activating the toy. But as a practical matter, if you've just learned that the base rate is that every single bead activates this toy, there's not a lot of information to be gained here. You should just assume that all of these beads work. And in that condition, we expected that kids would play indiscriminately with the two toys. But in the condition where only some of the beads work, there's genuine uncertainty. Right? Maybe only one of these beads worked. Maybe they both did. And if that's true, and if kids are sensitive to the possibility of information gain, only one of these pairs affords the possibility of finding out. On only one can you isolate the variables. And that's with the separable pair. So we thought, in this condition, the kids should selectively play with the separable pair. And in particular, they should place each bead, one at a time, on the toy. So that's, in fact, what we find. In the all-beads condition, the kids basically never separated the pair. And in the some beads condition, about half the kids did it and performed the exhaustive intervention. That was cool, but my graduate student at the time said, they're doing something really interesting even with the stuck pair of beads. We should look at the stuck pair. And I said, what can they do with the stuck pair? There's nothing to be done with the stuck pair. It's stuck. And she said, well, let's just try it again with the stuck pair. So we did the same thing. They got introduced to the fact that either every bead worked, or only some of the beads worked. And this time we introduced just the stuck pair, and we placed it on the toy, and the toy made music. And let me show you what the children did. [VIDEO PLAYBACK] - All right, I'm going to do this one, and then it'll be your turn a little later. But now, can you just watch and see what happens? All right. This one makes the machine go. How about this one? This one doesn't make the machine go. What about this one? This one doesn't make the machine go. Let's try this one. This one makes the machine go. LAURA SCHULZ: She goes over that a second time, and then she hands the child the toy. - Just a minute. - Look at that. [END PLAYBACK] LAURA SCHULZ: The child plays around, does just what we did. And then she does something we'd never done. She rotates the position of the bead so that only one makes contact with the toy at a time. And if you have a folk theory of contact causality, that is a pretty good way to isolate your variables. And not one that had occurred to, say, the PI on this investigation. But in fact, it occurred to about half the kids again, in that condition. In the some beads condition, where there was uncertainty, the kids were more likely to design their own intervention to try to isolate the variables than in the condition where all the beads worked. So preschoolers are using information about the base rate of candidate causes to distinguish the ambiguity of the evidence, and they're selecting and designing potentially informative interventions to isolate these causal variables. All right. I'm going to show you some new work now, kind of on the same theme. One way that investigations can be uninformative is because evidence is confounded. We're all familiar with that, right?
We think we did the perfect experiment, then we're like, oh, well, it really could have been because of this really silly, boring reason. And that's disappointing. And we have to run it again. But another reason that investigations can be uninformative is because they generate outcomes that are super hard to distinguish. So if I have a handkerchief in one pocket, and a candy cane in the other, then a child who wants that candy cane is going to have no trouble patting you down and finding the candy cane. But if I have a pen in one pocket and a candy cane in the other, that's going to be a harder problem. And this might be more salient to you if I say, you're going to go in and have a lab test for a fatal disease, or potentially a benign disease. And you know what the results are going to look like? One is going to be reddish maroon, and the other is going to be maroonish red. OK, that's not the kind of test you want. You want yellow, blue, right? So this is important. We care about how much uncertainty there is over interpreting the outcome as well. So if children are sensitive to how useful actions are for information gain, then they should prefer interventions that generate distinctive patterns of evidence. So Max Siegel in my lab has been running some experiments like this. He started with a very simple one, basically the equivalent of the handkerchief and the candy cane. He said, OK, there's either a bean bag in this box that I'm going to put in here, or a pencil in this box. It is a shiny, cool, hologram, sparkly pencil. You'll want it. That's going to go in this box over here. And in this box, either the really cool, shiny hologram pencil, or the really boring yellow pencil is going to go in this box. And guess what I'm going to do? I'm going to take each box, and I'm going to shake it. So he does that. And you know what you hear both times? Ka-thunk, ka-thunk, ka-thunk. Ka-thunk, ka-thunk, ka-thunk. Indistinguishable sounds. And now the question is, which box do you want to open? Which box you want to open? Right? And if you're sensitive to the ambiguity, you should say, well, listen, if it were a beanbag, I would really know, so that must be the sparkly pencil, right? But I'm never going to know in this box, because both pencils are going to sound alike, so I really better choose this box. And then he's going to do a harder problem. He's going to say, there are eight shiny, colorful marbles-- you really want them-- in this box, or two really boring white ones. Or there are eight colorful, shiny marbles in this box, or six really boring white ones. In each case, they each get hidden, you're going to hear the box. In each case, it's going to make exactly the same sound, which is actually the sound of eight marbles in a box. And the question is, which box do you want to open? So let me show you how that works. [VIDEO PLAYBACK] - [INAUDIBLE] and I also have some marbles. Well, you see these marbles right here, the white ones? These are Bunny's. Oh, six of my marbles, yay. Oh, two of my marbles, yay. Guess what, Taylor? These marbles with lots of different colors, those are yours for right now. That's pretty cool, right? Those are your marbles. That's awesome. And in this game, I'm going to hide either your marbles or Bunny's marbles inside of this box. And then I'm going to hide either your or Bunny's marbles inside of this box. Does that sound like fun? And then we're going to look for your marbles, OK? If you find them, you get another sticker. 
All right, so I'm going to put Bunny's right here, and we're going to do the hide game. So first, I'm going to choose either your marbles or Bunny's marbles and put them in here. I'm going to pour them in. Look, somebody's marbles are in here. Now I'm going to do the same thing with this box. Either your marbles or Bunny's marbles are going to go in this box. All right. Are you ready to begin shaking and listening? So remember, over here, there's either your marbles or Bunny's marbles. OK, let's listen. [CLUNKING NOISES] All right. And over here, there's either your marbles or Bunny's marbles. Let's listen. [CLUNKING NOISES] Cool. Let's do it one more time. [END PLAYBACK] LAURA SCHULZ: We'll skip the one more time, but-- oops. Sorry. You can see the general principle. The children are overwhelmingly good at this kind of task, it turns out, in both of these cases and in many, many other iterations they ran. They're very confident about which box they should pick, which means they're representing to themselves something about the ambiguity of the evidence, and their own ability to perceive these kinds of distinctions. So with Max, we've been talking about this as a kind of intuitive psychophysics, where they can represent their own discrimination threshold to make these kinds of distinctions. And they prefer interventions that generate distinctive patterns of evidence and maximize the possibility of information gain. Is that clear? OK. So there's a lot of ways in which children seem to be using intuitive theories, some kind of abstract, higher-order knowledge, to make inferences from data. But information is also costly to generate. And the costs themselves are informative. So I'm going to talk a little bit about that piece now. It's costly both for the learner, who cannot learn everything. And in a cultural context, where you're not just learning and exploring on your own, but you're actually getting information from other people, it's costly also for the teacher or for the informant. And these kinds of costs, and how we negotiate these kinds of costs, I think, are really fundamental to a lot of hard problems in communication and language. Lots of the field of pragmatics deals with problems of underdetermination. We say these sentences, we understand each other. We understand each other in all kinds of ambiguous contexts. We use a lot of social cues and other information to disambiguate. But part of what we do is, we make inferences about how much this person is going to communicate in this context, given how much I need to understand. And we use this to resolve these kinds of ambiguities. I'm going to give you a few examples of that here. Now, again, I'm going to start with the study we did a while ago and then show you a little bit more recent work. I'm going to skip over some of this just to be able to cover all of this and say: because there's a cost of information for both teachers and learners, it predicts some trade-offs in the kinds of inferences you should make. So for instance, suppose a knowledgeable informant shows you a toy and demonstrates a single function. Then if you, the learner, think that that teacher is trying to generate a true sample from the hypothesis-- one that is actually going to get the right idea into your head-- you should assume that there is only one function and not two, or three, or four, or five. Because if there were more, they should have shown them to you, right?
Because if they know the true hypothesis, and they could just demonstrate that, they can rule out all of that for you. But if you just stumble upon a single function of a toy, or a not knowledgeable teacher generates it accidentally, or if the teacher, as Liz Spelke pointed out, is interrupted in the middle of that demonstration, then you shouldn't assume that that evidence is exhaustive. It suspends that inference, right? Now, OK, well, you showed me one, but maybe there are two, three, or four. So it's only in a condition where I think you are a fully informed, freely acting teacher that I should assume, well, look, if you are a helpful, knowledgeable teacher, then the information you give me should not only be true of the hypothesis, it should help me distinguish that hypothesis from available alternatives. And what that means is, there's a specific trade-off between instruction and exploration. Because if I'm instructed that there's one function of the toy, I don't need to explore any further. But if I just happen to find one function of the toy, maybe I do. So let me show you what we did to test this. We had a novel toy. It actually had four interesting properties, a squeaker, a light, a mirror, and music. And we demonstrated a single function of the toy, the squeaker, in three conditions. And we also had a baseline. In the pedagogical condition, we said, watch this, I'm going to show you-- sorry, the alignment's off-- but watch this, I'm going to show you my toy. We pulled the tube and then said, wow, see that. OK, the accidental condition was, look at this neat toy I found here, accidentally pulled this tube in the same way, wow see that. The baseline was, just look at this neat toy I have here, with no demonstration. And the interrupted condition was identical to the pedagogical condition, except the teacher was interrupted immediately after pulling the tube, and then she said, wow, see that. So is that clear? I'm sorry for the slide misalignment. And the prediction is that in the first condition, children should constrain their exploration relative to all the other conditions. So let me show you what that looks like. Or not. Which is too bad, because this is a really super cute slide. But it's not going to work. In this condition, what we found was this. We found a child in the children's museum with a toy with all of these kinds of wow properties. We say, wow, see this. We show them the property of the toy, and the child spends 90 seconds pulling only the squeaker. He then says, I'm very smart for a five-year-old. And when she asks for all of the other functions of the toy, he doesn't know any of the other functions, because he hasn't explored. And what we think is, he is very smart for a five-year-old. Because it's a completely rational inference that, if there were more functions of the toy, then they should have been demonstrated. And so what we find overall is, in fact, that children do fewer actions, and they discover fewer functions of this toy. This isn't just true, it turns out, we now know, because we live in a hyperpedagogical culture. Because Laura Shneidman and Amanda Woodward just replicated this study with Yucatec Mayan toddlers and found the same kind of effect, constraints in the pedagogical condition, even though it's a culture that's pretty limited in their pedagogy.
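That rational inference can be put in Bayesian terms. A minimal sketch in the spirit of rational-pedagogy models-- the two-hypothesis setup, the prior, and the likelihood numbers are all illustrative assumptions, not the numbers from the study:

# Toy version of the inference behind "a knowledgeable teacher who shows one
# function implies there is only one function."
prior = {"one_function": 0.5, "four_functions": 0.5}

# Probability of observing exactly one demonstration under each hypothesis:
# a helpful, knowledgeable teacher demonstrates everything she knows, so a
# single demo is unlikely if the toy really has four functions; an accidental
# discovery carries no such commitment.
likelihood_pedagogical = {"one_function": 0.9, "four_functions": 0.1}
likelihood_accidental = {"one_function": 0.5, "four_functions": 0.5}

def posterior(likelihood, prior):
    unnormalized = {h: likelihood[h] * prior[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: value / total for h, value in unnormalized.items()}

print(posterior(likelihood_pedagogical, prior))  # one_function: 0.9 -> stop exploring
print(posterior(likelihood_accidental, prior))   # one_function: 0.5 -> keep exploring

The same single demonstration drives belief toward "one function" only under the pedagogical sampling assumption, which is why instruction, but not accident, licenses curtailed exploration.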
So information is costly, and pedagogical contexts strengthen the inference that the absence of evidence-- a teacher's failure to go on and teach you more information-- is, in fact, evidence of its absence. Is that clear? And this is a very sensible inductive bias, but it predicts that instruction will, for better or worse, constrain exploration. Because that's what it's supposed to do. It's supposed to constrain the hypotheses you consider. And indeed, it works quite well. And that's good if you're right about the world, right? It leads to efficient learning. But it's bad if you're wrong about the world. Because the unknown unknowns, the things you don't know are true that you failed to teach, are going to potentially mislead a learner. How much is enough information? Well, there are lots of good reasons why teachers ought to provide very limited information. First of all, as I showed you in the first set of studies, evidence often supports generalization. Right? One dog toy squeaks, probably they all squeak, depending on how generalizable that sample is. So I don't need to show you every single toy. I don't need to show a child, this is a cup, and that's a cup, and this is a cup too, and that's a cup, and that's a cup. Once the child has a cup, I can assume that that child herself will be able to make the rational generalization. Or sometimes I know you're not going to be able to make it, but the additional information is just too costly. I'm working on teaching you two plus two, I'm not going to teach you linear algebra right now. It's a waste of our time. So that's another reason why you might provide limited information. So what are the contexts in which omitting information is a reasonable thing to do, and when is it misleading? When is this a real problem? And the answer turns out to be, if I'm the informant, and I know I'm providing information that is going to lead you to the wrong hypothesis, then we consider that a sin of omission. Right? If I'm omitting information, and it's not doing that, then maybe that's not a problem. So one of the questions is, can children distinguish these contexts? Can they tell when the teacher is providing too little information, and it is going to cost the learner something in terms of what they can gain, and when they're not? So to test this-- this is again, Hyowon Gweon's work-- we introduced a toy. And it had one function, this wind-up mechanism. And the kids got to explore, and they found out the toy did one thing. In the other condition, the toy looked the same. But in fact, the toy had lots of functions, and the children knew that. So the children always knew the ground truth. The toy either had one function or four. And then there was a teacher who taught Elmo. The teacher always did the same thing. The teacher always taught just one function. And the first question was, the teacher's always doing the same thing with an identical looking toy. Do the kids penalize the teacher? Do they think he's a bad teacher if he only teaches one function when there are really four, compared to when he teaches one function and there's only one? So the first thing we did was, ask kids to rate that teacher. And indeed, they think that this teacher is a terrible teacher when there's four functions and he only teaches one. They think he's a good teacher when he teaches one of one. But the really interesting question was, what would the children do to compensate if they knew they had a bad teacher?
So we ran exactly the same setup where the teacher shows Elmo the toy in the one function case and the four function case. And for reasons that will become clear, we also ran a control condition where there were four active functions, and the teacher taught all four. In all cases, the teacher then goes on and runs that experiment I just showed you. The teacher shows just the squeaker here-- just this single function. The question is, what should the kids do? So it's a complicated setup, so I'll walk you through it a little bit. When you teach one of one function, you should infer the toy probably does one thing, it's a good teacher. And so when they show you one function of this, you should constrain your exploration, say that was a sensible inference. When they teach one of four, you should say, that's a bad teacher. The toy probably does more than one thing, the new toy probably does too. I'm going to explore more broadly. But we don't know, in that case, if they're doing it because they just saw a toy with one function, and so they think this toy has one function, or because they just saw a toy with four functions, so they think this toy has four functions. So we can disambiguate those with this condition. Now if they're just generalizing from the toy, this toy has four functions, well, then they should think this toy has four functions. But if they're generalizing from the teacher, this is a good teacher. So when the teacher now tells you about the new toy, that it has one function, the kids should constrain their exploration. So does everyone understand the logic of the design here? And in fact, that's exactly what we find. The children compensated with additional exploration when they thought the teacher had provided insufficient information to the learner. Is that clear? OK. So information is costly to teachers and learners. If teachers minimize their own costs and provide too little information, children think they're poor teachers. They suspend the inference that that information is representative. They compensate with additional exploration. I'm going to go ahead and show you just a couple more examples here. That was too little information. But because information is costly, you can also provide too much information. I might be doing that right now. Too much information is costly. It's taking a toll on the learner to absorb all of that. And you have to know, well, are you providing me too much information, or just the right amount? And at the risk of falling into this trap myself, I am going to show you quickly this study. Because how much information is too much information depends on a hard question which is, how much do you already know? Right? You've all been here all summer. You know a lot of things. I'm not a very good estimator of what you know or how much of this information is going in, so it's a little hard for me to titrate what's the right amount of information to give you. And the question is, can children take these kinds of theory of mind problems into account in order to estimate what information they should be getting? To test this, we give kids a 20-button toy. If I push a single button and it makes music, how many of you think that all the other buttons make music? Because you can generalize from data, and that is a really good inductive inference there. They look the same, it's a toy. One makes music, they probably all do. But suppose I go on now to show you-- so that's your prior expectation, they all work.
But that one doesn't work, and that one doesn't work, and that one doesn't work, and that one doesn't work, and that one doesn't work, and that-- oh, that one works. And that one doesn't work, and that one doesn't work, and that one doesn't work, and that one doesn't work. I'm doing this for a reason. I know this is really boring. It's partly to give you a break, but it is also-- oh, that one works. OK, so now what you have learned about this toy is that actually, only three of these buttons work. And suppose I show you this across a couple of toys. You've just changed your expectation. Now if I show you that one button works, you don't think all the rest work. You think probably two others work, right? So if I bring out a brand new toy, and I push this button, you'll probably be relieved if I just go ahead and push these three, right? And I don't go around and show you all of the inert buttons on the toy. Because information is costly. You have to sit there through all those demonstrations. In this experiment, I'll show you Gweon's work. We gave kids a common ground condition where everybody shared prior knowledge, this abstract theory you can use to constrain your interpretation of this data. In this condition, there were two toy makers, who are the informants, and there's two naive learners, Ernie and Bert. And in the common ground condition, Ernie, and Bert, and the toy makers are all there while the child explores and finds out that only three buttons work on these toys. OK. And then one teacher shows, of a brand new toy just like this, every single button, the inert ones and the non-inert ones. And the other teacher shows just the three working buttons. Right? And then we say, hey, kids, guess what? We have a whole closet full of more of those toys. One of these teachers can show them to you. Which one do you want? Which one do you want, OK? The other condition is almost the same, but guess what? The child explores on their own, right? And so there's no common ground about what these toys do. And then the teachers do the same thing. One teacher shows exhaustive information, and the other teacher pushes only the three working buttons. So in this case, that efficient information, that less costly simple demonstration could mislead the learners about what the true hypothesis is. They have a prior that all these buttons ought to work. Which toy maker you would rather learn from depends on whether the learners share that prior background information or not. And that was true not only when the children were judging the informants, but when they themselves were teaching. So again, these are four and five-year-olds. The children had a condition where Elmo got to see that only three buttons worked on the toy, and the condition where Elmo didn't get to see how many of those buttons worked on the toy. And then the children got to teach Elmo the toy. And the children were much more likely to press more buttons and provide exhaustive evidence in the no common ground condition than the common ground condition. So children themselves are adjusting the cost of the information they provide based on their prior expectation about what they think the learner is going to learn from the data. What I'll do, lastly, is tell you a little bit about how the costs of information are informative not just in figuring out what data to learn from and how you should communicate information, but in actually figuring out what people are doing, and actually grounding out ordinary, everyday theory of mind.
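The trade-off in this study can be written as a tiny utility calculation. A hedged sketch-- the press cost, the miscommunication penalty, and every name here are illustrative assumptions, not the study's model:

def teaching_utility(demo, learner_shares_prior, press_cost=0.1, misled_penalty=5.0):
    # Cost to the learner: number of button presses they sit through.
    presses = {"exhaustive": 20, "efficient": 3}[demo]
    # The efficient demo miscommunicates only when the learner still holds
    # the default "all buttons work" prior (i.e., no common ground).
    misled = (demo == "efficient") and not learner_shares_prior
    return -press_cost * presses - (misled_penalty if misled else 0.0)

for shared in (True, False):
    best = max(["exhaustive", "efficient"],
               key=lambda d: teaching_utility(d, shared))
    print("common ground:", shared, "-> best demo:", best)
# common ground True -> efficient; False -> exhaustive,
# which matches what the children did when teaching Elmo.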
I'm going to start with an example you're familiar with. I know you've seen this before. This is an experiment by Gergely and Csibra on rational action. This little ball jumps over the wall to get to the momma ball-- you've all seen this? I think Josh was presenting it maybe? OK. And when you take the wall away, babies expect that ball to take an efficient route. So we think that rational agents should take the most efficient route to the goal, they should maximize their overall utility. But there's a lot of reasons why that ball might have jumped over the wall, and they all have to do with costs and rewards of action. One might be, it was really hard to get over the wall, but really rewarding to do so. Another reason, though, is that it was really easy to get over the wall, so she might as well, but she didn't care that much about the reward. These could have the same net utility, but psychologically, you really care about the difference. If we're talking about the internal structure of an agent's motivations, you want to decompose this simple argument about rational, goal-directed action into what the particular costs and rewards are. So Julian Jara-Ettinger in our lab has developed this account he calls the naive utility calculus, which is our way of reasoning about other people's actions. Which is, we assume other agents are acting to maximize utility, but we care about the internal structure also. There are agent-invariant rewards and costs. Two cookies is always more than one cookie. Higher hills are always higher than lower hills for all of us. But some of us are more motivated to get cookies than others of us, and some of us find hills more costly than others of us. So in addition to these agent-invariant aspects of costs and rewards, there's also these internal subjective things that are harder to judge, right? How competent you are, what your values are, and your preferences. And so understanding how all of these work together lets you take a very, very simple analysis and make surprisingly powerful inferences about what other agents are doing. I'm going to show you a few examples and actually connect it back to how children are scientists in this regard. So here is an example experiment. Here is Grover, and there is a cracker and a cookie down on this low shelf. And Grover goes ahead, and he chooses the cookie. And now there's a cracker and a cookie on this high shelf, and Grover goes ahead and chooses the cracker. And the question is, what does Grover like better? Well, if your readout of preferences is just the actions you take-- your goal-directed action-- he should be at chance: he chose a cracker once and a cookie once. But if even young children understand that, no, it's not that simple-- you're not just acting to maximize reward, you're acting to maximize utility; you have to take the costs into account-- then clearly his preference is what he chose when the costs were matched, right? Not when the costs were mismatched. And the children should say, no, no, no, it's what he chose on the low box. Which treat does he like best? Is that clear? And indeed, that is what children do. You can also introduce a couple of characters. Cookie Monster, who has a strong preference for cookies, and Grover, who is indifferent, who likes them both. So for Cookie Monster, the reward value of cookies is much higher. For Grover, they're equivalent. And now you can set up a situation where there's crackers in a low box and cookies at a high box.
And you say, guys, go on, you can make a choice. And Grover chooses the cracker and Cookie Monster chooses the cracker. And you say to the kids, which puppet can't climb? Now, no puppet has climbed. No puppet has failed to climb. No puppet has even thought about climbing. But the kids can know the answer, right? Because if Cookie Monster could climb, and he had a high reward, then he would have done it. So you don't really know about Grover, but you can make an inference that the costs were not equivalent. And by the way, in case you're worried that kids are just saying, well, Cookie Monster, I've been listening to Michelle Obama. You know, obesity and fitness-- cookies aren't good, I can't climb. We ran the same experiment with Grover and clovers, and Grover really likes clovers, and Cookie Monster likes them both. And you now flip the inference around. OK, so they can consider how the costs affect the inferences they make. All right, so let's bring this all back together. I've thrown a lot of information-- probably too much information-- at you about how kids can reason from sparse data, about how they use their theories to make these inferences, about how they use this in teaching and learning in social context. But if kids are sensitive to these utilities and the trade-offs, then the kinds of things that I showed you them doing with beads and with machines, they should also be able to do in social context. We believe psychology to be something of a science, as well as all of these other sciences, and they should be able to apply some of the same principles-- holding some things constant, manipulating others-- in order to gain information. So we basically ask that question of the children here. Can they distinguish agents' different competencies and rewards by manipulating the contexts that they see and gaining information? So here, we don't know if Cookie Monster can climb. So let's put one treat on each box. Where should we put the treats to find out if Cookie Monster can climb? Now, only one of these interventions is informative. If the cookie is down low, and you know Cookie Monster prefers cookies, then you're not going to get any information. But if the cookie is up high, then you are going to get information, right? And in fact, the kids are overwhelmingly good at this. In case you think, well, treats should just be put up high-- and this is also true, by the way, for clovers, all right? In case you think they just have a heuristic like, oh, well, let's put treats up high, you can ask the question a different way. You can say, both of our friends can climb the short box. But only one of our friends can climb the tall box, and we don't know which one. So let's put the cookie up here and the cracker down here. And if we want to figure out which one of our friends can climb, which friend should we send in? Well, Grover has no particular incentive to climb. He could just take the cracker. But Cookie Monster, he has an incentive to climb. You should probably send in Cookie Monster. And again, these are the kinds of inferences that young children can make. Is that clear? All right. So, end of a long-winded talk. I want to return it to this. So I framed it in terms of the way I've been increasingly thinking, and the projects that we're increasingly moving towards, which is thinking not just about the pure pursuit of knowledge and information, but how do you pursue information in a complex world where it's not just your own individual exploration.
You get it in a social context, you get it in interaction with others. The information is costly both to deliver and to process. But those costs themselves are information, and you can use them to make sense of the world. And so I want to sort of bring these together and come back to a problem that I posed a bit earlier, which is, I think-- I hope-- I've made the case that what kids are doing is not reasoning about huge sets of data. What kids are doing is, they're taking some very abstract structured knowledge and using it to constrain their inferences about tiny amounts of data. With Cookie Monster, a couple of trials of evidence. And then they make good inductive guesses, which are sometimes wrong, but they are good. And I said, these abstract representations, not all of them are innate, right? You've seen beautiful evidence of the many that are, but a lot of them aren't. A lot of the things that govern your common sense knowledge every day-- how do you get those? And how do you get those from tiny amounts of data? This is a problem that bugged me deeply for a very long time, and I think there's been a real leap, and a very exciting account of how it is actually possible to use tiny amounts of data to make really rich abstract inferences, which then constrain your interpretation of subsequent data. I think that is a really important problem. And with that, I'm going to turn it over to Josh and Tomer, who maybe can tell you how that is going to actually work. So thanks to everyone here at Woods Hole. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Nick_Cheney_Capturing_Neural_Plasticity_in_Deep_Networks.txt | NICK CHENEY: I'm Nick Cheney. I'm finishing my PhD in Computational Biology at Cornell University. Gabriel Kreiman and I are interested in seeing how deep networks respond to neuroplasticity. In the brain, we know that things are constantly in flux: neurons are growing and dying, and weights are changing in response to stimuli. But most of the time in machine learning, what we do is pre-train a network on some training set, and then, when we want to use it for real, we freeze it and keep it in some static form. There's been a lot more emphasis lately on online learning, so that you learn as you're working on a data set. And in those kinds of environments, we think that the network will be changing quite a bit. So we're looking at how that network could be robust to those kinds of changes, much like the brain is to the everyday stimuli and actions it sees. To start out, we're doing a very simple test looking at perturbations of the network: just throwing random changes at the weights that make up the network and seeing how that affects its ability to classify images. After that, we're looking at how different parts of the network respond differently to these kinds of changes. And then, ideally, we'd like to have some kind of rule that doesn't affect the performance of the network that much, so that it's able to maintain its ability to classify throughout seeing a number of stimuli. We know that the brain has certain learning rules, like Hebb's rule, in which neurons that fire one after another end up strengthening their connections, or conversely weakening their connections. So we're soon going to see if rules like that end up providing stable perturbations, where the network can easily recover and maintain what it's doing, or unstable ones, where the network goes off track. We know that deep networks act similarly to how the brain works-- Jim DiCarlo gave a great talk about how the features we see in deep networks are similar to the features of the brain-- and we know that the brain is constantly undergoing these kinds of changes. So we're curious scientifically to see how these computer models respond, and to understand how these two systems are the same or different. But also, from an engineering context, online learning-- where the network is changing while it's learning-- is going to be, I think, a much larger part of the use of machine learning going forward. So understanding how stable these things are to constantly changing parameters will, I think, be quite informative for those kinds of studies. Being able to explore new types of material and learn a lot about both computer vision and neuroscience has been a lot of fun. And certainly deep learning is a very hot topic right now, so being able to dive in hands-on a little bit and get some experience working with these models and some of the latest software packages, I think, will be useful going forward too. |
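A minimal sketch of the perturbation test Cheney describes: train a small network, inject Gaussian noise into its weights at increasing scales, and watch classification accuracy degrade. The data, architecture, training loop, and noise scales below are invented for illustration; the actual experiments used deep networks on image datasets.

```python
# Weight-perturbation robustness sketch (toy data, not the real experiment).

import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data, linearly separable by construction.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

# One hidden layer, trained with plain gradient descent on cross-entropy.
W1 = rng.normal(scale=0.1, size=(20, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2))), h

for _ in range(500):
    p, h = forward(X, W1, W2)
    err = p - y[:, None]
    W2 -= 0.1 * h.T @ err / len(X)
    W1 -= 0.1 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)

def accuracy(W1, W2):
    p, _ = forward(X, W1, W2)
    return ((p[:, 0] > 0.5) == y).mean()

print("clean accuracy:", accuracy(W1, W2))
for sigma in [0.01, 0.05, 0.1, 0.5]:          # perturbation strengths (assumed)
    accs = [accuracy(W1 + rng.normal(scale=sigma, size=W1.shape),
                     W2 + rng.normal(scale=sigma, size=W2.shape))
            for _ in range(10)]               # average over random perturbations
    print(f"sigma={sigma}: mean accuracy {np.mean(accs):.3f}")
```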
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_85_Giorgio_Metta_Introduction_to_the_iCub_Robot.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. GIORGIO METTA: So I'll be talking about my work for the past 11 years. This has been, certainly, exciting, but also long in duration, so we had to sort of stick to the goal. And I'll show you also a couple of things-- I mean, most of this work has been possible because we have a team of people that contributed to both the design of the robot and the research we're doing on the robot, so I'll be freely drawing from the work of these other people. I just cited them as the iCub team, because I couldn't list everybody there, but you'll see a picture later that shows how many people were actually involved in developing this robot. So our, let's say, goal, although we didn't start it like this, is to build robots that can interact with people, and maybe one day be commercially available to be deployed in the household. Everything we've done on the design of the robot has to do with a platform capable of interacting with people in a natural way. And this is reflected in the shape of the robot, that it's humanoid. It's reflected in the type of skills we tried to implement in the robot. And overall in the design, the platform excels in terms of strength, in terms of sensors, and so forth. There was a, let's say, hidden reason. We wanted to design a platform for research, also, so when we started, we didn't think of a specific application. Our idea was to have a robot as complicated as possible to give researchers the possibility of doing whatever they liked. So the robot can walk, has cameras, tactile sensors. It can manipulate objects. We put a lot of effort into the design of the hands. And it's complicated. And it breaks often, so it's not necessarily the best platform, but it's, I believe, the only platform that can provide you with mobile manipulation, and at the same time with a sophisticated oculomotor system in the eyes and cameras. It doesn't give you lasers, so you have to make do with vision. The result is this platform that's shown here. This started as a European project, so there was initial funding that allowed for basically hiring people to design the mechanics and electronics of the robot. And unfortunately, the robot is not very cheap. We tried to put the best components everywhere. And this is reflected in the cost, which doesn't help diffusion, to a certain extent. In spite of this, we managed to, let's say, "sell," between quotes, because we don't make any profit out of it, 30 copies of the robot. There are still two of them to be delivered this year, so there are, at the moment, 28 out there. And four of them are in our lab, and are used daily by our researchers. And given the complexity of the platform, we managed, at best, to build four robots per year. And at best means that we're always late in construction. We're always late in fixing the robots. And that's because we have a research lab trying also to have this, let's say, more commercial side or support side to the community of users, which, in fact, doesn't work.
I mean, you cannot ask your PhD students to go and fix a robot somewhere in the world. It was striking a bit that we managed to actually sell the robot in Japan. And that's because, you know, you see Japan as the place of humanoid robots. And having somebody ask for a copy of our robot there was a bit strange. But nonetheless, the project is completely open-source. If you go to our website, you can download all the CAD files for the mechanics, for the electronics, all the schematics, and the entire software, from the lowest possible level up to whatever latest research has been developed by our students. Why do we think the robot is special? As I said, we wanted to have hands. And we put considerable effort into the design of the hands. There are nine motors driving each hand, although there are five fingers and 19 joints, which means some of the joints are coupled-- so the actual dexterity of the hand is still to be demonstrated, but it works to a certain extent. There are some sensors. It's entirely human-like. We don't have, for instance, lasers. We don't have ultrasound or other fancy sensors that, from an engineering standpoint, could also be integrated. We decided to stick to a certain subset of possible sensors. There's one thing that I think is quite unique. We managed along the way to run a project to design tactile sensors. And so I think it's one of the few robots that has almost complete body coverage with tactile sensors. There are about 4,000 sensing points in the latest version. And we hope to be able to use them. I mean, you'll see certain things that we started developing. But for instance, there was discussion about manipulation and the availability of tactile sensors. We just scratched the surface in that direction. As I said, we designed, also, the electronics. And the reason for doing this was that we wanted to be able to program the very low level of the controllers of the robot. This didn't pay off for many years, but at a certain point, we started doing torque control. And we started hacking also the low-level controllers of the brushless motors. And so it paid off eventually, because that wouldn't have been possible without the ability to write low-level software. Not that many people are modifying that part of the software. It's open-source, also, that part, but it's very easy to burn your amplifiers if you don't do the right thing at that level. And the other thing is that, as I said, the platform is reproducible. And at the moment there is a GitHub repository-- well, a number of GitHub repositories which contain, whatever, a few million lines of code, whatever that means. It probably just means that a lot of students are committing to the repositories, not necessarily that the software is super high-quality at this point. There are a few modules that are well-maintained. And that's the low-level interfaces, which is something we do. Everything else can be in different stages of readiness to be used. Well, why humanoids? There were, at least at the beginning, scientific reasons. One, paraphrasing Rod Brooks's paper, Elephants Don't Play Chess, is that developing intelligence in a robot that has a human shape may give an intelligence that is also comparable to humans, but it also provides for natural human-robot interaction.
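A quick illustration of what "coupled joints" means in practice: fewer motors than joints, with a fixed coupling map from motor angles to joint angles. The matrix below is invented for illustration and is not the iCub's actual coupling.

```python
# Hypothetical coupling: 9 motor commands drive 19 hand joints.

import numpy as np

n_motors, n_joints = 9, 19
C = np.zeros((n_joints, n_motors))

# Invented pattern: the first 8 motors map one-to-one to proximal joints,
# while the last motor drives many distal finger joints together (a simple
# underactuated "power close").
for j in range(8):
    C[j, j] = 1.0
for j in range(8, n_joints):
    C[j, 8] = 0.5

motor_angles = np.radians([10, 5, 0, 20, 15, 0, 0, 30, 40])
joint_angles = C @ motor_angles        # 19 joint angles from 9 commands
print(np.degrees(joint_angles).round(1))
```

The price of coupling is exactly what the talk notes: the hand has 19 degrees of freedom kinematically, but only 9 of them are independently controllable.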
The fact that the robot can move the eyes is very important. For instance, it has a very simple face, but it's effective in communicating something to the people the robot is interacting with. And also, building a humanoid of a small size-- the robot is only a meter tall-- was very challenging from the mechatronics point of view. So for us engineers, it was a lot of fun too-- in the initial few years when we were designing, every day was a lot of fun, a lot of satisfaction seeing that the robot was growing and being built, eventually. The fact that the platform is open-source I think is also important; it allows for repeating experiments in different locations. So we can develop a piece of software and run exactly the same module somewhere else across the world. And this may, again, give advantages-- first of all, debugging was a lot easier, with many people complaining when we did something wrong-- and it allowed also for, let's say, shared development, so building partnerships with many people, mostly across Europe, because there was funding available for people to work together. And this may eventually enable better benchmarking and better quality of what we do. As part of the project, we also developed middleware. So maybe you may think that we have been a bit crazy. We went from the mechanical design to the research on the robot, passing through the software development, but actually, this was a middleware that was started before ROS even existed. And in fact, it was a piece of my work at MIT with a couple of the students there in 2001, 2002. So the first version actually ran on Cog, on QNX, a real-time operating system. Later we did a major port to Linux, and Windows, and MacOS-- so we never committed to a single version. And that's because we had this community of developers from the very beginning, and there was no agreement on what development tool to use, and so we said, why don't we cover almost everything. And this part of the software is actually very solid at the moment. This has been, you know, growing, not in size, but in quality, in this case, so the interfaces remain practically the same. And I think the low-level byte coding of the messages passing across the network didn't change since the Cog time. Everything else changed. It is a completely new implementation now. But it has portability, so as I say, this was a sort of requirement from the researchers not to commit to anything, and so we have developers using Visual Studio on Windows or maybe using GCC on Windows, and other developers running whatever IDE is available on Linux or MacOS. And this worked pretty well. And there's also language portability. All this middleware is just a set of libraries, so we can link the libraries against any language. And so we have bindings for, whatever, Java, Perl, MATLAB, and a bunch of other languages. And this helped researchers also to do some rapid prototyping, maybe using Python and so forth. As I said, the project is open-source, so if you go to the website, there's a manual, not particularly well taken care of. It works. At least, it works with our students, so it should work for everybody. But there are also the drawings-- you can go with drawings like those to a mechanical workshop, and you get the parts in return. And then from those, you can also figure out how to assemble the components. Although it's not super-easy. It's not something you do in your basement just because you have the drawings.
I mean, one of the groups in one of our projects tried doing that. And I think they stopped after building part of an arm and maybe part of a leg. It was very challenging for them. And you need, let's say, a proper workshop for building the components, so it takes time, anyway. Continuing on the sensors, I mentioned that we have skin. And I'll show you a bit more about that in a moment. But we also have force-torque sensors, and gyroscopes, and accelerometers. So if you take all these pieces and you put them together, you can actually sense interaction forces with the environment. And if you can sense interaction forces, you can make the robot compliant. And this has been an important development across the past few years that allowed the robot to move from position control to torque control. And this has been needed, again, to go in the direction of human-robot interaction. And so these are standard force-torque sensors, although, as usual, we designed them ourselves. We spent some time and designed the sensors. And this was for reasons of cost. The equivalent six-axial force-torque sensor, commercially, costs, I don't know, $5,000. And we managed to build it for $1,000. So maybe it is not as super rock-solid as the commercial component, but it works well. And about the skin, this was a sensing modality that wasn't available. And again, we managed to get funding for actually running a project for three years to design the skin for the robot. And we thought it was a trivial problem, because at the beginning of the project, we already had the idea of using capacitive sensing. And we actually had a prototype. And we said, oh, it's trivial. Then we spent three years to actually engineer it to make it work properly on the robot. So the idea is trivial: since capacitive sensing is available in cellphones, we thought of moving that into a version that would work for the robot. There were two issues. First of all, the robot is not flat, so we can't just stick cell phones on the robot body to obtain tactile sensing. So we had to make everything flexible, so it can conform to the surface of the robot. The other thing is that the cell phones only sense objects that are electrically conductive. That's because of the way the sensor is designed, so we had to change that, because the robot might be hitting objects that are plastic, for instance. So what we've done was to actually build the capacitors over two layers. There's an outer layer and a set of sensors that are etched on a flexible PCB that is shown there. And what the sensor measures is actually the deflection of the outer layer, which is conductive, towards the sensors. And in between, we have another flexible material. And that's another part of the reason why it took so long. We started with materials like silicone that were very nice, but unfortunately, they degrade very quickly, so we ended up running sensors for a couple of months, and then all of a sudden they started failing or changing their measurement properties. We didn't know why. We started investigating all possible materials until we found one that was actually working well. The other thing we had to, basically, design was the shape of the flexible PCB. We had the challenge of taking 4,000 sensors and bringing all the signals somewhere to the main CPU inside the robot. And, of course, you cannot just connect 4,000 wires.
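Before the wiring solution, a minimal parallel-plate model of the transduction just described: pressing the conductive outer layer shrinks the gap to the etched electrode, which raises the capacitance C = eps0 * eps_r * A / d. All dimensions and material constants below are hypothetical, not the iCub skin's actual parameters.

```python
# Parallel-plate model of one capacitive taxel (illustrative numbers only).

eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_r = 3.0                 # relative permittivity of the soft layer (assumed)
A = (5e-3) ** 2             # 5 mm x 5 mm sensing pad (assumed)

# The gap shrinks as the skin is pressed; capacitance rises inversely with it.
for gap_um in [1000, 800, 600, 400, 200]:
    C = eps0 * eps_r * A / (gap_um * 1e-6)
    print(f"gap {gap_um:4d} um -> C = {C * 1e12:.2f} pF")
```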
So what we've done is, on the back side of the PCB, there's actually routing for all the sensors from one triangle to the next until you get to a digitizing unit. And-- sorry, each triangle digitizes its own signals. And they travel in digital form from one triangle to the next until they reach a microcontroller that takes all these numbers and sends them to the main CPU. And this saves on the connection side, and so it actually enables the installation of the skin on the robot. So this is a, let's say, industrialized version of the skin. And that's the customization we've done for a variant arm. And those are parts of the skin for the iCub-- the components that we just screw onto the outer body to make the iCub sensitive. This is another solution, which is, again, capacitive, for the fingertips, simply because the triangle was too big, too large for the size of the iCub fingertips, but the principle is exactly the same. It was just more difficult to design these flexible materials, because they are more complicated to fabricate at those small sizes. And the result, when you combine the force-torque sensors and the tactile sensors, is something like this, which is a compliant controller on the iCub, where you can just push the robot around. This is in zero-gravity modality. So you can just push the robot around and move it freely. And this has to be compared to the complete stiffness in case you do position control. And another thing that is enabled by force control is teaching by demonstration. This is a trivial experiment. We just recorded a trajectory and repeated exactly the same trajectory, so it's not-- I mean, you can do learning on top of that, but we haven't done it. It's just to show that being able to control the robot in torque mode enables these types of tasks, so teaching a new trajectory that was never seen by the robot. There's another less trivial thing you can do. Since we can sense external forces, you can do something like this: we can build a controller where you keep the robot compliant, you impose certain constraints on the center of mass and the angular momentum, and you keep the robot, basically, stable in a configuration like this one, in spite of external forces being, in this case, generated by a person. This is part of a project that is basically trying to make the iCub walk, more or less, efficiently. And as part of the project, we actually also redesigned the ankles of the robot, because, initially, we didn't think of bipedal walking, and so they weren't strong enough to support the weight of the robot. And this is basically the same stuff that was shown in the previous videos, just the same combination of tactile and force-torque sensing used to estimate contact forces. We actually added two more force-torque sensors in the ankles, so we have six overall in this version of the robot. Now, as part of this, we also played a bit with machine learning. For mapping the tactile information and force-torque sensor information to the joints, since the sensors are not localized on the joints of the robot, and also for separating what we measure with the sensors from the forces generated by the movement of the robot itself, by its internal dynamics, we have to have information about the robot dynamics.
And this is something we can do-- we can build a model using machine learning. Since we have measurements of the joint positions, velocities, and accelerations, and the torques measured from the force-torque sensors, we can compute the robot dynamics. And this can be done either using, let's say, a computed model from the CAD, or by learning the model via machine learning. And so we collected a data set from the iCub. In this case, it was a data set for the arm, for the first four joints. We didn't do anything for the rest. And in this case, we sort of customized a specific method, based on Gaussian processes, to be incremental, and also to be computationally bounded in time, so we wanted to avoid the explosion of the computational time due to the increase in the number of samples. And this was basically an interesting piece of work, because everything we do on the robot, if it's inserted in a control loop, has to have a predictable computation time, and possibly limited enough so that we can run the control loop at reasonable rates. And these are some of the results. And actually, we also compared with other existing methods. This is just to show that the method we developed, which uses an approximate kernel, works pretty much as well as standard Gaussian process regression in this case, and works much better than other methods from the literature. This was just to have a rough idea that this was entirely doable. Also, by shaping the kernel, it's possible to compensate for temperature drifts. Unfortunately, the force-torque sensors tend to change response due to temperature-- not that the lab is changing temperature, but often the electronics itself is heating up around the robot, so it's making the sensor read something different-- but it's possible to show that, again, through learning, you can build a compensation also for the temperature variations just by shaping the kernel to include a term that depends on time. This is one example of how we've done machine learning on the robot, although the problem is fairly simple. A problem that is more complicated is learning about objects. So in this scenario, which is shown here, we have, basically, a person who can speak to the robot and tell the robot that this is a new object. And the robot is acquiring images. And we hope to be able to learn about objects just from these types of images. This is maybe the most difficult situation. We can also lay objects on the table and just tell the robot to look at a specific object, and so forth. Again, the speech interface is nice, because you can basically also attach labels to the objects that the robot is seeing. As for the methods we tried in the recent past, we basically applied sparse coding and then regularized least squares for classification. This was basically how we started a couple of years ago. And more recently, we used an off-the-shelf convolutional neural network. And again, the classifiers are linear classifiers. And this, I mean, has proved to work particularly well. But also, since we're on the robot, we can, let's say, play tricks. One trick that is easy to apply, and very effective, is this: you're seeing an object, but you don't have just a single frame. You can actually take subsequent frames, because the robot may be observing the object for a few moments, for seconds, whatever. And in fact, there's an improvement that is shown in this plot there, the one to the right.
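Returning to the dynamics-learning method above-- incremental, with bounded per-sample compute, using an approximate kernel-- here is a minimal sketch of that style of regressor: random Fourier features approximating an RBF kernel, updated one sample at a time with recursive least squares. The toy "torque" function, dimensions, and hyperparameters are invented; this is not the actual iCub implementation.

```python
# Incremental regression with an approximate kernel (illustrative sketch).

import numpy as np

rng = np.random.default_rng(0)
D = 200                                  # number of random features (fixed cost)
d = 4                                    # input dim (e.g., joint pos/vel/acc)

# Random Fourier features approximating an RBF kernel with lengthscale ell.
ell = 1.0
Omega = rng.normal(scale=1.0 / ell, size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(x @ Omega + b)

# Recursive least squares state: weights w and inverse covariance P.
lam = 1e-2
w = np.zeros(D)
P = np.eye(D) / lam

def rls_update(x, y):
    """One O(D^2) update per sample, independent of how many samples seen --
    the 'predictable computation time' needed inside a control loop."""
    global w, P
    z = phi(x)
    Pz = P @ z
    k = Pz / (1.0 + z @ Pz)              # gain
    w += k * (y - z @ w)
    P -= np.outer(k, Pz)

# Stream toy samples of a nonlinear "torque" function and learn online.
f = lambda x: np.sin(x[0]) + 0.5 * x[1] * x[2] - 0.2 * x[3] ** 2
for _ in range(2000):
    x = rng.normal(size=d)
    rls_update(x, f(x) + 0.05 * rng.normal())

x_test = rng.normal(size=(5, d))
print("pred:", [round(float(phi(x) @ w), 3) for x in x_test])
print("true:", [round(float(f(x)), 3) for x in x_test])
```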
If you increase the number of seconds you're allowed to observe the object, you also improve performance. And the plot is over the number of classes, because we also like to improve on the number of classes that a robot can actually recognize, which was limited until, let's say, a couple of years ago; but now, with all this new deep-learning stuff, it seems to be improving quite a lot, and our experiments are moving in that direction. There's another thing that can be done, which is to see what happens-- since we have, again, the robot interacting with people for entire days-- if we collect images on different days; then we can play with different conditions in the testing. So for instance, the different plots here show what happens if you train and test on the current day, so you train cumulatively on up to four days and you test on the last day only. And you see, of course, performance improves as you increase the training set. Conditions may be slightly different from one day to the next. Light may have changed, just because it was a more sunny day or a cloudy day. And the other conditions are to test also on past days or to test on future days, where conditions may have changed a lot. And in fact, performance is slightly worse in that situation. OK, and this is a video that shows, basically, the robot training and some of the experiments testing how the robot perceives a number of objects. And unfortunately, there's no speech here, but this is basically a person talking to the robot and telling the robot the name for this specific object, then putting another object there, drawing the robot's attention to the object, and then, again, telling the name. This is the Lego. It becomes faster in a moment. OK, and then you can continue training basically like that. And the video also shows testing-- we show a bunch of objects simultaneously to the robot. And here, we simply click on one of the objects to draw the robot's attention. And on the plot there, you see the probability that a given object is being recognized as the correct one. OK, I think I have to cut this short, because I'm running out of time. Another thing I wanted to show you is, basically, that now we have this ability to control the robot. We have the ability to recognize objects. We also have the ability to grasp objects. And this is something that uses stereo vision. And in this case, what we wanted to do is present an object to the robot, with no prior knowledge about the shape of the object. We take a snapshot. From the stereo pair, we reconstruct the object in 3D. And then we apply optimization-- constrained optimization-- to figure out a plausible location for the palm of the hand, one that will maximize the ability to grasp the object by closing the fingers around that particular position. This is our, let's say, definition of power grasp. So put the palm of the hand of the robot in a region of the object that has a surface with a similar shape or a similar size to the palm itself, and where the orientation is compatible with the local orientation of the surface. And this works with mixed results. So it works with certain objects. It doesn't always work. There are objects that are intrinsically more difficult for this procedure, so some of them will only be grasped with 65% probability, which is not super-satisfactory. If you run long experiments-- you want to grasp three, four objects-- you start seeing failures. It becomes boring to actually do the experiments.
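Picking up the multi-frame trick from the recognition experiments above: a minimal simulation of why averaging per-frame class scores over a short observation window helps. The score model here is invented noise-plus-signal, not real CNN outputs.

```python
# Multi-frame aggregation sketch: average per-frame scores before deciding.

import numpy as np

rng = np.random.default_rng(1)
n_classes, n_trials = 10, 2000

for n_frames in [1, 5, 15, 30]:          # roughly "seconds of observation"
    correct = 0
    for _ in range(n_trials):
        true = rng.integers(n_classes)
        # Each frame: weak signal for the true class plus per-frame noise.
        scores = rng.normal(size=(n_frames, n_classes))
        scores[:, true] += 0.5
        # Averaging over frames shrinks the noise, so the true class wins more.
        if scores.mean(axis=0).argmax() == true:
            correct += 1
    print(f"{n_frames:2d} frames: accuracy {correct / n_trials:.2f}")
```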
So it works well for soft objects, for instance, as expected. We moved a bit in the direction of using the tactile sensors, but at this point, we've only been able to try to characterize forces out of the tactile sensor measurements. So basically, taking a fingertip, we have 12 sensors, and this is another case where we apply machine learning, trying to reconstruct the force direction and intensity from the tactile sensor measurements. And this is basically the procedure: we take the fingertip sensor and our six-axial force-torque sensor, we take the data, and we approximate the mapping, again, with a Gaussian process. Just one last video, if I can. OK, so basically, having put together all these skills, we may be able to do something useful with the robot. In this case, the video shows a task where the robot is cleaning a table. And it's actually using the grasp component, and the ability to move the objects-- to see the objects, recognize them, grasp them, and put them at a given location, which was pre-specified in this case, so it's not recognizing that this is a container. It's just putting things there. And there's one last skill that I didn't have time to talk about, which is recognizing certain objects as tools-- one specific object, like this one. An object like the tool here can actually be used for pulling another object closer. And this is, again, something that can be done through learning. So we learn the sizes of the sticks, or the set of sticks, and we also learn how good they are for pulling something closer through experience, by, basically, trial and error over many trials. And the result is that you can actually generate a movement that pulls the object closer so it can later be grasped. And that's basically a couple of ideas on how to exploit object affordances-- not just recognizing objects, but also knowing that certain objects have certain extra functions which may end up being useful. OK, I just wanted to acknowledge the people that are actually working on all this. I promised that I would do that. And this is actually a photo around Genoa showing the group that has been mainly working on the iCub project-- let's say, this is the group last year, so there may be more people that have just left, or some of them moved to MIT. OK, thank you. |
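A minimal sketch of the trial-and-error affordance learning described at the end of the talk: keep a Beta posterior over each tool's probability of successfully pulling an object closer, and update it after every attempt. The tools and their success rates below are invented.

```python
# Affordance learning by trial and error (illustrative Beta-Bernoulli model).

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tools and their true (unknown to the robot) pull-success rates.
true_success = {"short stick": 0.2, "long stick": 0.7, "rake": 0.9}
alpha = {t: 1.0 for t in true_success}   # Beta(1, 1) prior per tool
beta = {t: 1.0 for t in true_success}

tools = list(true_success)
for trial in range(60):                  # try each tool in turn
    tool = tools[trial % len(tools)]
    success = rng.random() < true_success[tool]
    if success:
        alpha[tool] += 1
    else:
        beta[tool] += 1

for t in tools:
    mean = alpha[t] / (alpha[t] + beta[t])
    print(f"{t}: estimated pull-success probability {mean:.2f}")
```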
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | David_Rolnick_Ishita_Dasgupta_Modeling_Dynamic_Memory_with_Hopfield_Networks.txt | [MUSIC PLAYING] ISHITA DASGUPTA: I'm Ishita Dasgupta, I'm going into my third year of my PhD at Harvard in Computational Cognitive Science. DAVID ROLNICK: I'm David Rolnick. I'm just getting into my fourth year of my PhD at MIT. I am in the applied math department. ISHITA DASGUPTA: So we're working with Hopfield networks, which is a kind of model in which the neurons are all connected together, and the way they update each other basically determines the state the network is going to be in. It has been used in the past to model memories. It's basically that there are certain kinds of states that the neurons prefer to be in, given the way that they're all connected together. And you can make them go into these states by initializing at a different point. And so it's been used to store memories before, but these are static memories. Like, once you're in one of those memories, you just stay there. So we were working with this kind of model to make some changes to it and have it be such that you can go from one such memory to another such memory, and decide what probability it is that you're going to go to one memory or to another memory-- so basically add some stochastic dynamics to a Hopfield network. DAVID ROLNICK: Well, the idea is that there are many situations where the living brain is going to be faced with the task of reconstructing or simulating a stochastic sequence of actions. So for instance, if one were simulating an event in which one didn't know quite what the probabilities were that something was going to happen, then you can imagine playing it out in your mind and imagining one way of realizing it, and each state in your mental sequence would be determined by the previous state. So if something's falling, then its state when it's falling is determined by the state when it was upright. And if we can understand how we could use memory to generate these sequences of patterns that are determined by stochastic rules, then we would be able to get a better sense of what kind of imagination-memory connections are possible even in a very simple model of the brain. And we're working with sort of the simplest model of memory, but it still turns out to be extremely powerful in being able to create these patterns of stochastic sequences-- Markov chains. ISHITA DASGUPTA: So far, we've just been modeling it computationally-- modeling what we think should happen. For us, there's a bit of theory work to figure out what kind of connections we should put in there so that it should work, and then we actually set up those connections and see if it does work. And we're hoping at some point to be able to tie it back to actual real-world situations in which this kind of stochastic sequence of events actually happens in the brain, but that is currently in the future. Right now, we're just making sure that we can model this kind of behavior in a computer. DAVID ROLNICK: In some sense, it's an engineering task-- or a theoretical task followed by an engineering task: understanding what can be done in a system like this and then simply building it. We built it and now we have to-- ISHITA DASGUPTA: Yes. DAVID ROLNICK: --see how it works in practice.
ISHITA DASGUPTA: It becomes kind of like an experimental science-- we're just changing parameters and seeing how things change, because these are not entirely clear and predictable. You can't just say that because you built it, you should know how it works. There are too many degrees of freedom, so there are a lot of things to be tested to see how well it performs under different environments. [MUSIC PLAYING] |
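A minimal sketch of the kind of model described in this interview: Hebbian storage makes the patterns attractors, a noise "temperature" in the updates makes the dynamics stochastic, and a weak asymmetric term nudges each stored pattern toward the next, so the state can hop between memories. All parameters here are illustrative and need tuning; this is not the authors' actual construction.

```python
# Hopfield network with stochastic transitions between stored memories.

import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Symmetric Hebbian storage: each pattern becomes an attractor.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Weak asymmetric coupling: pattern k "suggests" pattern (k+1) % P,
# biasing which memory the noisy dynamics hop to next.
eps = 0.4
for k in range(P):
    W += eps * np.outer(patterns[(k + 1) % P], patterns[k]) / N

def overlap(s):
    return patterns @ s / N                       # similarity to each memory

s = patterns[0].copy()
T = 0.3                                           # noise level ("temperature")
for step in range(30):
    for i in rng.permutation(N):                  # asynchronous Glauber updates
        h = W[i] @ s
        s[i] = 1 if rng.random() < 1 / (1 + np.exp(-2 * h / T)) else -1
    if step % 5 == 0:
        print(step, np.round(overlap(s), 2))      # watch which memory dominates
```

Tracking the overlaps over time gives exactly the "changing parameters and seeing how things change" workflow Dasgupta describes: raise T or eps and transitions become more frequent; lower them and the state sits in one attractor.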
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Unit_7_Panel_Vision_and_Audition.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.MIT.edu. ALEX KELL: So, let's talk about the historical arcs of vision and auditory sciences. In the mid 20th century, auditory psychophysics was, like, a pretty robust and diverse field. But currently there are very few auditory faculty in psychology departments, whereas, like, vision faculty are the cornerstone of basically every psychology department in the country. In a similar vein, automated speech recognition is this big fruitful field, but there's not much kind of broader automated sound recognition. In contrast, computer vision is this huge field. So, I'm just kind of wondering historically and sociologically, how did we get here? JOSH TENENBAUM: You probably talk about this more. JOSH MCDERMOTT: Sure, yeah. I'm happy to tell you my take on this. Yes, so it's-- I think there are really interesting case studies in the history of science. If you go back to the '50s, psychoacoustics was sort of a centerpiece of psychology. And if you looked around at, like, you know, the top universities, they all had really good people studying hearing. And even some of the names that you know, like Green and Swets, you know, the guys that invented signal detection theory, pretty much. They were psychoacousticians. People often forget that. But they literally wrote the book on signal detection theory. JOSH TENENBAUM: When people talked about separating signal from the noise, they meant actual noise. JOSH MCDERMOTT: Yeah, and there was this famous psychoacoustics lab at Harvard that everybody kind of passed through. So back then, what constituted psychology was something pretty different. And it was really kind of closely related to things like signal detection theory. And it was pretty low level by the standards of today. And what happened over time is that hearing kind of gradually drifted out of psychology departments and vision became more and more prominent. I think the reason for this is that there's one really important-- there are several forces here, but one really important factor is that hearing impairment is something that really involves abnormal functioning at the level of the cochlea. So the actual signal processing that's being done at the cochlea really changes when people start to lose their hearing. And so there's always been pretty strong impetus coming, in part, from the NIH to try to understand hearing impairment and to know how to treat that. And knowing what happens at the front end of the auditory system really has been critical to making that work. In contrast, most vision impairments are optical in nature, and you fix them with glasses. Right? So it's not like studying vision is really going to help you understand visual impairment. And so there was never really that same thing. And so, when psychology sort of gradually got more and more cognitive, vision science went along with it. That really didn't happen with hearing. And I think part of that was the clinical impetus to try to continue to understand the periphery. The auditory periphery was also just harder to work out, because it's a mechanical device, the cochlea. And so you can't just stick an electrode into it and characterize it. 
It's actually really technically challenging to work out what's happening. And so that just kept people busy for a very long time. But as psychology was sort of advancing, what people in hearing science were studying kind of ceased to really be what psychologists found interesting. And so the field kind of dropped out of psychology departments and moved into speech and hearing science departments, which were typically at bigger state schools. And they never really got on the cognitive bandwagon in the same way that everybody in vision did. And then what ends up happening in science is there's this interesting phenomenon where people get trained in fields where there are already lots of scientists. So if you're a grad student, you need an advisor, and so you often end up working on something that your advisor does. And so if there's some field that is under-represented, it typically gets more under-represented as time goes on. And so that's sort of been the case. You know, if you want to study, you know, olfaction, it's a great idea, right? But how are you going to do it? You've got to find somebody to work with. There's not many places you can go to get trained. And the same has been true for hearing for a long time. So that's my take on that part of it. Hynek, do you have anything to say on the computational? HYNEK HERMANSKY: No, I don't know. I'm thinking, if it is something that also evolved with tools available. Because in the old days it was easier to generate the sounds than to generate images. On the computer now, it's much easier, right? So vision and visual research became sexier. So I teach auditory perception to engineers. Also I teach a little bit of visual perception. And I notice that they are much more interested in visual perception, especially the various effects. And, you know, because somehow you can see it better. And the other thing is, of course, funding. I mean, you know, the hearing research's main applications are, as you said, hearing prostheses. These people don't have much money, right? The speech recognition, we didn't get much of the benefits yet from hearing research, unfortunately. So I wonder if this is not also a little bit-- JOSH TENENBAUM: Can I add one? So, maybe-- this is sort of things you guys also gesture towards, but I think in both-- to go back towards similarities and not just differences. Maybe that's what will be my theme. In both vision and hearing or audition, there's, I think, a strong bias towards the aspects of the problem that fit with the rest of cognition. And often that's mediated by language, right? So there's been a lot of interest in vision on object recognition, parts of vision that ultimately lead into something like attaching a word to a part of an image or a scene. And there's a lot of other parts of vision, like certain kinds of scene understanding, that have been way understudied until recently also, right? And it does seem like the parts of hearing that have been the focus are speech. I mean, there are lots of people in-- it's maybe not as much as vision or object recognition, but certainly there's a lot of mainline cognitive psychology who studied things like categorization of basic speech elements and things that just start to bleed very quickly into psycholinguistics. Right? Whereas, the parts of hearing-- at least that's been where a lot of the focus. 
But the parts of hearing that are more about auditory scene analysis in general, like sound textures or physical events or all the richness of the world that we might get through sound, has been super understudied. Right? But echoing something Josh said also or implicitly is, just because it might be understudied and you need to find an advisor doesn't mean you shouldn't work on it. So if you have any interested in this, and there's even one person, say, who's doing some good work on it, you should work with them. And it's a great opportunity. JOSH MCDERMOTT: If I could follow up and just say that my sense of this is, if you can bear with it and figure out a way to make it work, in the long run it's actually a great place to be. It's a lot of fun to work in an area that's not crowded. ALEX KELL: All right. Obviously, a large-- to kind of transition, a large emphasis of the summer school is on potential symbiosis between, like, machine and, like, engineering and science. And so, what have been the most kind of fruitful interactions between machine perception, either vision or hearing, over the years, and are there any kind of lessons that we can learn in general? HYNEK HERMANSKY: You know, I went a little bit backwards, quite frankly. I'm trained as an engineer, and I was paid always to build better machines. And in the process of building better machines, you discover that, almost unknowingly, we were emulating some properties of hearing. So then I, of course, started to be interested, and I wanted to get more of it. So that's how I got into that. But I have to admit that in my field, we are a little bit looked down at. [INTERPOSING VOICES] HYNEK HERMANSKY: Because mainly not all that much, because engineers are such that [INAUDIBLE] that if something works, they like it and they don't much want to know why, you know? Or at least they don't talk much about it. So it's interesting when I'm in engineering meetings, they look at me as this strange kid who works also on hearing. And when I'm in an environment like this, people look at me like a speech engineer. But, I mean, I don't even feel either in some ways. But what I was wondering when Josh was talking about, is there anybody in this world who works on both? Because, you know, that, I think, is much more needed than anything else. I mean, somebody who is interested in both audio and visual processing and is capable of making this-- JOSH TENENBAUM: So not both science and engineering, but both vision and audition. HYNEK HERMANSKY: Yes, that's what I mean, vision and audition, with a real goal of trying to understand both and trying to find the similarities. Because, personally, I got some inspiration from visual research, even when I work with audio. But I don't see much of it anymore. And there are some basic questions which I would like to ask-- and maybe I should have sent it in-- which is like, I don't even know which most are the similar and different in audio and vision. Should I look at time? Should I look at modulations? Should I look at the frequencies? Should I look at the spatial resolution? And so on and so on. So I don't know if somebody can help me with this. I would be very happy to go home knowing that. I mean, sometimes I suspect that spatial resolution and frequency resolution in hearing are similar. I'm thinking about modulations in speech. Josh talked a lot about it. But there must be a modulation in vision also. But, of course, we never studied much of the vision of the moving pictures. 
A lot of vision research was fixed. Basically, the images, right? It was a little bit like, in speech, we used to study vowels. We don't do it anymore, because the area of speech is something very, very different. The same thing is in image processing. I think that now it's getting more and more, because, again, I mean I'm going back to availability of the machines. You can do some work on moving images and on video and so on and so on. But I don't know how much of that is happening. And so my question is really, are there any similarities, and at which level are they? [INAUDIBLE] There is a time, there is a spatial frequency, there is a carrier frequency, there are modulations in speech. I don't know. I mean, I would like to know this. I mean, that's something somebody can help me with. DAN YAMINS: Oh, I was-- I had-- I think those are interesting questions, but I actually-- I'm sure I don't have the answers at this point. But I was going to say something a little-- you know, going back to the original general question, which is, again, you know, this sort of thinking about it from a historical point of view I think is helpful. In the long run, I think what's happened over the past 50 years is that biological inspiration has been very helpful at injecting ideas into the engineering realm that end up being very powerful. Right? I mean, I think we're seeing kind of the arc of that right now in a very strong way. I mean, you know, in terms of vision and audition, the sort of algorithms that are most dominant are ones that were strongly biologically inspired. And there had been a historical arc over, I think, a period of decades, where, first the algorithms were sort of biologically inspired back in the '50s and '60s, and then they were not biologically inspired for a while. Like, the biology stuff didn't seem to be panning out. That was sort of the dark ages of, kind of, neural networks. And then more recently that has begun to change. And, again, biologically inspired ideas seem to be very powerful, for creating, you know, algorithmic approaches. But the arc is a very long one, right? And so it's not like, you know, you discover something in the lab and then the next day you would go implement it in your algorithm and suddenly you get 20% improvement on some task. All right? That's not realistic. OK? But if you're willing to have the patience to wait for a while, and sort of see ideas sort of slowly percolate up, I think they can be very powerful. Now, the other direction is really also very interesting, like using the algorithms to understand neuroscience, right? That's one where you can get a lot of bang for your buck quickly, right? And it's sort of like-- it's like that's a short-term high, right? Because what happens is that you take this machine that you didn't understand and you apply it to this problem that you were worried about and that is sort of scientifically interesting, and suddenly you get 20% improvement overnight. Right? That is feasible in that direction, e.g., taking advances from the computational side or on the algorithmic side and applying them to understanding data in neuroscience. That does seem to have been borne out, but only much more recently. So, there wasn't much of that at all for many decades. But more recently that has begun to happen. And so I think that there is a really interesting open question right now as to which thing, which direction, is more live. Which one is leading at this point, I think, is a really interesting question.
Maybe neither of them is leading. But I think that certainly on the everyday, on-the-ground experience, as somebody who is trying to do some of both, it feels like the algorithms are leading the biology. OK? Are leading the neuroscience, to me, in the sense that I feel like, in the short run at least, things are going to come out of the community of people doing algorithms development that are going to help understand neuroscience data before specific things are going to come out of the neuroscience community that are going to help make better algorithms, like in the short run. OK? Again, I think the long run can be different. And I think that's a really deep open research program question, is which tasks are the ones that you should choose such that learning about them from a neuroscience and psychology point of view will help, in the five- to 10-year run, make better algorithms. And if you can choose those correctly, I think you really have done something valuable. But I think that's really hard. ALEX KELL: Yeah. I want to push back. I like how you said how the engineering is really helping the science right now, but the science-- the seed that science planted in the engineering. Like, what you're talking about is just CNNs, basically. DAN YAMINS: Well, I think not entirely, because I think that there are some ideas in recurrent neural networks. ALEX KELL: OK, sure. Neural networks generally. But the point is the ideas that were kind of inspired from that have been around for decades and decades and decades. Are there other kinds of key examples besides, like, the operations that you throw into a CNN? The idea of convolution and the idea of layered computation-- these are obviously very, very important ideas, but, like, what are kind of other contributions that science has given engineering besides--? DAN YAMINS: Well, Green and Swets. I mean, the thing that he mentioned earlier about Green and Swets is another great example. ALEX KELL: Yeah. DAN YAMINS: Right.? Psychophysics helped understand signal detection theory. But that's much older, but that's a very clear example. HYNEK HERMANSKY: Signal detection theory didn't come from Green and Swets. It came from Second World War. DAN YAMINS: I was just thinking of all the work-- HYNEK HERMANSKY: They did very good work obviously, and they indeed were auditory people. DAN YAMINS: And they were actually-- they were doing a lot of work during-- government-- JOSH MCDERMOTT: They formalized a lot of stuff. DAN YAMINS: Yeah, and they did a lot-- HYNEK HERMANSKY: Yeah, I don't want to take anything away from them. DAN YAMINS: But, you know, it's interesting. There's this great paper that Green and Swets have where they talked about their-- HYNEK HERMANSKY: --was engineering. DAN YAMINS: They talked about their military work, right? And they did-- they actually worked for the military, just, like, determining which type of plane was the one that they were going to be facing. And so, yeah, I agree that came out of that. HYNEK HERMANSKY: If I want to still-- if I can still spend a little bit of time on engineering versus science, we also are missing one big thing, which is, like, Bell Labs. Bell Labs was the organization which paid people for doing-- having fun. Doing really good research. There was no question that, at the time, Bell Labs were about speech and about audio. So there was-- a lot of things were justified. 
And even, like, Bela Julesz and these people-- they pretended they were working on perception because the company wanted to make more money on the telephone calls. This has gone. Right? Both. Speech is gone. Bell Labs is gone. And maybe image is hot-- image processing is in, because the government is interested in finding various things from the images and so on and so on. So, a lot of that is funding. Since you mentioned neural networks, it never stops amazing me that people would call artificial neural networks anything similar to biology. I mean, the only thing which I see similar there maybe are now these layered networks and that sort of thing. ALEX KELL: I think a lot of the concepts were inspired by that. I don't think it was, like, directly-- like, I don't think anyone takes it as a super serious-- HYNEK HERMANSKY: But there I still have maybe one point to make. Most of the ideas which are now being explored in neural networks are also very old. The only thing is that we didn't have the hardware. We didn't have the means, basically, of doing so. So technology really supported this, and suddenly, yeah, it's working. But to some people it's not even surprising that it's working. They say, of course. They say, we couldn't do it. DAN YAMINS: I think what was surprising to them was that it didn't work for so long and that people were very disappointed and upset about that. And then, you know-- but I agree that basically there's all these, like-- all the ideas are these 40-year-old or 50-year-old ideas that people had thought of, typically many of them coming out of the psychology and neuroscience community a long time ago but just couldn't do anything about it. And so that takes a long time to bear fruit, it feels like. GABRIEL KREIMAN: So I have more questions rather than answers, but to try to get back to a question about vision and hearing and how we can synergistically interact between the two. First, I wanted to lay out a couple of biases and almost religious beliefs I have on the notion that cortex is cortex, meaning that there's a six-layer structure, that there are some patterns of connectivity that have been described both in the visual cortex as well as auditory cortex. They are remarkably similar, and we have to work with the same type of hardware in both cases. The types of plasticity learning rules that have been described are very similar in both cases. So there's a huge amount of similarity at the biological level. We use a lot of the same vocabulary in terms of describing problems about invariance and so on. And yet, at the same time, I wanted to raise a few questions, particularly to demonstrate my ignorance in terms of the auditory world and get answers from these two experts here. I cannot help but feel that maybe there's a nasty possibility that there are differences between the two; in particular, the role of timing has been somewhat under-explored in the visual domain. We have done some work on this, some other people have. But it seems that timing plays a much more fundamental role in the auditory domain. Perhaps the most extreme example is sound localization, where we need to take into account microsecond differences in the arrival of signals at the two ears. I don't know anything even close to that in the visual domain. So that's one example where I think we have to say that there is a fundamental difference.
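A minimal illustration of that timing point: estimating an interaural time difference (ITD) by cross-correlating the two ears' signals. The synthetic signal, sample rate, and the 300-microsecond ITD below are all invented for the example.

```python
# ITD estimation by cross-correlation (illustrative synthetic signals).

import numpy as np

fs = 192_000                                   # Hz; ~5 us resolution per sample
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)

source = rng.normal(size=t.size)               # broadband noise burst
true_itd_s = 300e-6                            # 300 microseconds (assumed)
shift = int(round(true_itd_s * fs))

left = source
right = np.roll(source, shift)                 # right ear lags the left

# Search lags within a physiologically plausible window (+-1 ms).
max_lag = int(1e-3 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xc = [np.dot(left, np.roll(right, -l)) for l in lags]
est = lags[int(np.argmax(xc))] / fs
print(f"estimated ITD: {est * 1e6:.0f} us (true {true_itd_s * 1e6:.0f} us)")
```

Nothing in mainstream visual processing demands anywhere near this temporal precision, which is the asymmetry Kreiman is pointing at.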
Now thinking more about sort of the recognition questions that many of us are interested in, I think, again, timing seems to play a fundamental role in the auditory domain. But I would love to hear from these two experts here. I easily come up with questions about what is an object in the auditory domain that's sort of defined in a somewhat heuristic way in the visual domain? But we all sort of agree on what objects are. And I don't know what the equivalent is in the auditory domain. And how much attention should we pay to the fact that the temporal evolution of signals is a fundamental aspect in the auditory world, which we don't really-- by and large, we don't really think about too much in the visual domain. With that said, I do hope that at the end we will find similar fundamental principles and algorithms, because, as I said, cortex is cortex. JOSH MCDERMOTT: I can speak to some of those issues a little bit. Look, I think it can be-- I mean, it's an interesting and fun and, I think, often useful exercise to try to map, kind of, concepts from one modality onto another. But, again, at the end of the day, the purpose of perception is just to figure out what's out there in the world, and the information that you get from different modalities, I think in some cases it just tells you about different kinds of things. So sound is usually created when something happens, right? It's not quite-- it's not quite the same thing as there being an object there off of which light reflects. I mean, sometimes there's, in some sense which there's an object there. Like a person, right? Persons producing sound. But, oftentimes, the sound is produced by an interaction between a couple of different things. So, really, the question is sort of what happened, as much as what's there. And so, you could probably try to find things that are analogous to objects, but it's-- in my mind it may just not be exactly the right question to be asking about the sound. JOSH TENENBAUM: Can I just comment on that one? Yeah, I mean, I think, again, this is a place where Gabriel and I have somewhat different biases, although, again, it's all open. But an object to me is not a visual thing or an auditory thing. An object is a physical thing, right? So those of you who saw Liz Spelke's lectures on this, this is very inspiring to me that from very early on infants have a concept of an object, which is basically a thing in the world that can move on its own or be moved. And the same principles apply in vision but also haptics. And, you know, it's true that the main way we perceive objects is not through audition, but we can certainly perceive things about objects from sound and often just, echoing what Josh said, it's the events or the interactions between objects that make sounds. They make the-- physically cause sounds. And so it's often what we're learning from sound is-- GABRIEL KREIMAN: But maybe if I could ask-- I don't disagree with your definition of objects, a la Spelke and so on. But I guess in the auditory domain, if I think about speech, you know, are we talking about phonemes? Are we talking about words? I mean, if we talk about Lady Gaga or Vivaldi, are we talking about a whole piece of music, a measure, a tone, a frequency? These are things that-- JOSH TENENBAUM: So, structure, sort of structure more generally. GABRIEL KREIMAN: What's the unit of computation that we should think about algorithmically? 
In the same way that Dan and us and many others think about algorithms that will eventually have labels and objects, for example. I mean, what are those fundamental units? And maybe the answer is all of them, but-- JOSH TENENBAUM: Well, speech is really interesting, because from one point of view, you could think of it as, like, what-- it's basically, like-- it's an artifact, right? Speech is a thing that's created through biological and cultural evolution, manipulating a system to kind of create these artificial event categories, which we can call phonemes and words and sentences and so on. And, you know, surely there was audition before there was speech, right? So, it seems like it's building on a system that's going to detect a more basic notion of events, physical interactions, or things like babbling brooks or fires or breezes. And then animal communication. And it hacks that, basically, both on the production side and the perception side. So it's very interesting to ask what's the structure? What's the right way to describe the structure in speech? It probably seems most analogous to something like gesture, you know? That's a way to hack the visual system to create these events visually. Salient changes in motion, whether for just non-verbal communication or something in sign language. It's super interesting, right? But it's, again-- I wouldn't say-- the analog-- speech isn't a set of objects. It's a set of structured events, which have been created to be perceivable by a system which was an evolutionarily much more ancient one, perceiving object interactions and events. JOSH MCDERMOTT: But I also think it's a case that-- yeah, there's a lot of focus on objects in vision, but it's certainly the case that vision is richer than just being about objects, right? I mean, you have-- there-- right? I mean, there's-- I think in some sense, the fact that you are posing the question, it's a reflection of where a lot of work has been concentrated. But, yeah, there's obviously-- you know, you have scenes, there's stuff, not just things, right? And the same is true in audition. And the difference is just that there isn't really as much of a focus on, like, things, only because those are not-- GABRIEL KREIMAN: Here's the fundamental question I'm trying to raise, as well as the question about timing. In the visual domain, let's get away from objects and think about action recognition, for example. And that's one domain where you would think that, well, you have to start thinking about time. It's actually extremely challenging to come up with good stimuli that you cannot recognize-- where you cannot infer actions from single frames. And I would argue, but-- JOSH TENENBAUM: Let's talk about events. GABRIEL KREIMAN: But let me say one more thing. But please correct me if I'm wrong. I would argue that in the auditory domain, it's the opposite. It's very hard to come up with things that you can recognize from a single instant. JOSH MCDERMOTT: Sure. GABRIEL KREIMAN: You need time. Time is inherent to the basic definition of everything. In the visual domain-- again, we've thought about time and what happens if you present parts of objects asynchronously, for example. And you can disrupt object recognition or action recognition in that way. But it's sort of-- again, you can do a lot without time or without thinking too seriously about time. Then maybe, I don't know-- time is probably not one of your main preoccupations, I suspect, in the visual domain.
HYNEK HERMANSKY: I'm not sure, because one of the big things which always strikes me in vision is the saccade and the fact that we are moving eyes, and the fact that it's possible even to lose the vision, basically, if you really fix the things on the retina, and so on and so on. So vision probably figured out different ways of introducing time into perception, basically moving eyes and maybe in sounds. Indeed, it's happening more, like, already out there. But, you know, I had one joint project actually where we tried to work on audio-visual recognition. And it was the project about recognizing unexpected things. And that was a big pain initially, because, of course, vision people thinking one way or auditory people thinking another way. But eventually we ended up with the time and with the surprises and with the unexpected and with the priors. And there's been a lot of similarities between audio and visual world, you know? So that's why I was maybe saying in the beginning, people should be more encouraged-- now I'm looking at the students-- to look at both. I mean, don't just say I'm a visual person and I just want to know a little bit about speech or something. No. I mean, these things are very interesting. And, of course I mean in auditory world, there are problems that are very similar to visual problems. And in the visual world there are very similar problems to auditory work. You just take a speech and take a writing, right? And be it handwriting or being even printed things. I mean, these things communicate messages, communicate information, in a very similar way. So, I would just say I got a little bit excited because I finished my coffee. But I would just say, let's look for the similarities rather than differences, and let's be very serious about it. Like, sort of say, oh, finally I found something. Like, for instance, I give you one little example. We had a big problem with a perceptual constancy when you get linear distortions in the signal. And I just accidentally read some paper by David Marr at the time, and I didn't understand it. I have to say I actually missed [INAUDIBLE] a little bit. But still, it was a great inspiration. And I came up with an algorithm which ended up to be a very good one. Well, at the time. I mean, I was being beaten many times. But, you know, let's just look for the similarities. That's what I'm somehow, maybe arguing. And that was also my quest-- like, I don't even know what is similar and different in auditory and visual signals. So, find-- certainly maybe-- on a certain level, it must be the same, right? The cortex is very, very similar. So I believe that, indeed, at the end we are getting information into our brain, which is being used for figuring out what's happening in the world. And there are these big differences at the beginning. I mean, the senses are so different. JOSH TENENBAUM: Could I nominate one sort of thing that could be very interesting to study that's very basic in both vision and audition, of where there are some analogies? Which is certain kinds of basic events that involve physical interaction between objects. Like, I'll try to make one right here. Right? OK. So there was a visual event, and it has a low level signal-- a motion signal. There was some motion over here. Then there was some other motion that, in a sense, was caused. There was some sound that went with it. There was the sound of the thing sliding on the table and then the sound of the collision. We have all sorts of other things like that, Right? 
Like, I can drop this object here, and it makes a certain sound. And so there's very salient, low levelly detectable, both auditory and visual signals that have a common cause in the world. One thing hitting another. It's also the kind of thing which-- I don't know if Liz mentioned this in her lecture. I mentioned this a little bit. Even very young infants, even two-month-olds, understand something about this contact causality, that one object can cause another object to move. It's the sort of thing that Shimon has shown-- Shimon Ullman has shown. You can, in a very basic way, use this to pick out primitive agents, like hands as movers. So this is one basic kind of event that has interesting parallels between vision and audition, because there's a basic thing happening in the world, an exertion of force between one moving object when it comes into contact with another thing. And it creates some simultaneously detectable events with analogous kinds of structure. I think a very basic question is, you know, if we were to look at the cortical representation of a visual collision event and the auditory side of that, you know? How do those work together? What are similarities or differences in the representation and computation of those kind of very basic events? HYNEK HERMANSKY: If I still may, obvious thing to use is use vision to transcribe the human communication by speech. If somebody wants a lot of money from Amazon or Microsoft or Google or government, you know, work on that. Because there is a clear visual channel, which is being used very heavily, you know? Not only that. I move the hands and that sort of thing. If somebody can help there, I mean, that would be great. And it's actually a relatively very straightforward problem. I'm not saying simple. But it's well defined. Because there is a message, which is being conveyed in a communication by speech. And it's being used. I mean, lips are definitely moving, unless you are working with a machine. And hands are moving unless you are a really calm person, which none of us is. And so this is one-- JOSH TENENBAUM: Just basic speech communication. HYNEK HERMANSKY: Basic speech communication, as Martin [INAUDIBLE] is saying. That would be great, really. JOSH MCDERMOTT: I mean, it's also worth saying, I think, you know, most of perception is multimodal, right? And you can certainly come up with these cases where you rely on sound and have basically no information from vision and vice versa, right? But most of the time, you get both and you don't even really think about the fact that you have two modalities. You're just, you know-- you want to know what to grab or whether to run or whether it's safe to cross the street and, you know-- HYNEK HERMANSKY: Of course, the thing is that you can switch off one modality without much damage. That's OK, because in most of the perception this is always the case. You don't need all the channels of communication. You only need some. But if you want to have a perfect communication, then you would like to use it. But I absolutely agree that the world is audiovisual. JOSH TENENBAUM: This is a comment I was going to add to our discussion list, which I shared with Alex but maybe not the rest, is I think it's a really interesting question, what can be understood about the similarities and differences in each of these perceptual modalities by studying multimodal perception? 
And to put out a kind of a bold hypothesis, I think that, for reasons that you guys were just saying, because natural perception is inherently multimodal. And it's not just these ones. It also involves touch and so on. I think that's going to impose strong constraints on the representations and computations in both how vision and audition work. The fact that they have to be able to interface with a common system, what, you know, I would think of as a kind of physical object events system. But, however you want to describe it, the fact of multimodal perception's pervasiveness, the fact that you can switch on or off sense modalities and still do something, but that you can really just so fluently, naturally bring them together into a shared understanding of the world, that's something we can't ignore, I would say. GABRIEL KREIMAN: Why are people so sure that in everyday life, most things are multimodal? I'm not really sure how to quantify that. But is there any quantification of this? JOSH MCDERMOTT: No, I don't know of a quantification. All I mean is that, most of the time, I mean, you're listening and you're looking and you're doing everything you can to figure out what happened, right? I mean, it's like, you know, you want to know if there's traffic coming, right? I mean, there's noise that the cars make. You also look, you know? You do both of those. And you probably don't even really think about which of them you're doing. GABRIEL KREIMAN: No. I'm not talking about the most of the time part. Yes, that's a very good example of multimodal experience. I can cite lots of other examples where I'm running and listening to music and they're completely decoupled. Or I'm working on my computer. JOSH TENENBAUM: You don't listen to music when you're driving, right? GABRIEL KREIMAN: I do, but-- JOSH TENENBAUM: No, but no. But, I mean, not in the way that, like-- sure, you listen to music, obviously. You listen to music when we're driving, but we try-- it's sort of important that it doesn't drown out all other sounds. GABRIEL KREIMAN: I'm just wondering to what extent this-- JOSH TENENBAUM: OK, fine. ALEX KELL: And how much of that is, like, kind of the particular, like, the modern-- like, in the contemporary world you can actually decorrelate these things in a way that in the natural world you can't. Like, if you are a monkey, these things would probably be a lot more correlated than you are as a human in the 21st century. Like, there would be a [INAUDIBLE] physical world causing the input to both your modalities in a way that you can break now, right? Like, I don't know. That feels-- GABRIEL KREIMAN: You may be right. I haven't really thought deeply about this. [INTERPOSING VOICES] GABRIEL KREIMAN: I'm not [INAUDIBLE] JOSH MCDERMOTT: It would be interesting to compute some statistics of this. GABRIEL KREIMAN: I'm not disputing the usefulness of multimodal perception. I think it's fantastic. I'm just wondering. I think vision can do very well without the auditory world. And vice versa. DAN YAMINS: We could just close our eyes right now, all of us, and we'd have a fine panel for a while. JOSH TENENBAUM: But many of the social dynamics would be invisible, literally. JOSH MCDERMOTT: No, I think you'd probably get a lot of reciprocity. It's an open question. JOSH TENENBAUM: You'd get some, but, like, there's a difference. Have you ever listened to a radio talk show?
Sometimes these days the shows are broadcast on TV and also-- and it's, like, when you watch you're like, oh my-- like, you have a totally different view of what's going on. Or, like, if you're there in the studio. I mean, I totally agree that these are all open questions, and it would be nice to actually quantify, for example, what to me is this often subjective experience. Like, sometimes if the sound is, you know-- I don't know. You turn off the sound on something where you're used to having the sound, it changes your experience, right? Or you turn on the sound in a way that you had previously watched something, right? Like, you could do experiments where you show people a movie without the sound and then you turn on the sound. You know, in some ways transform what they see and in some ways not. So, maybe the right thing to say is more data is needed. DAN YAMINS: But don't you guys think, though, that, like, even independent of multimodal, there's still actually a lot of even more basic questions to be asked about similarity and differences? Like, I mean, just from a very-- from my point of view since that's the only one I'm usually able to take, like, you took a bunch of convolutional neural networks and you train some of them on vision tasks and some of them on audition tasks, right? And you figured out which architectures are good for audition tasks and which are good for vision tasks. See if the architectures are the same, and if indeed the architectures are fairly similar, then, like, looking at the differences between the features at different levels. I mean, I know that that's a very narrow way to interpret the question, but it's one. And there's probably a lot that can be-- JOSH TENENBAUM: You guys have been doing that. What have you learned from doing that? ALEX KELL: We haven't done it that exhaustively. DAN YAMINS: We haven't done it that exhaustively. But suffice it to say that the hints are, I think, very interesting. Like, you begin to see places where there are clear similarities and clear differences and asking, like, where did the divergence occur? Are there any underlying principles about what layers or what levels in the model those divergences start to occur? Can you see similarities at all layers or do you start to see sort of a kind of a clear branching point? Right? Moreover, like what about lower layers, right? I mean, you start to actually see differences in sort of frequency content in auditory data and differences between that and visual data that seem to emerge very naturally from the underlying similarities. You know, underlying differences between the statistics. But still, downstream from there, there are some deep similarities about extraction of objects of some kind or other. You know, auditory objects, potentially. And so I think that's a very narrow way of posing the question. And I don't say that everybody should pose it that way by any means. But I just think that before we get to multimodal interaction, which is interesting, I think there's just this huge space of clear, very concrete ways to ask the question of similarities and differences that are-- like, almost no matter what you'll find, you'll find something interesting. JOSH TENENBAUM: You're saying if we enlarge the discussion from just talking about vision audition to other parts of cognition, then we'll see more of the similarities between these sense modalities, because they will be the differences that stand out in relief with respect to the rest of cognition. 
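One narrow, concrete way to cash out the train-networks-on-both-modalities comparison Yamins sketches is to push paired probe stimuli (say, image frames through the vision-trained network and the matching soundtracks through the audition-trained one) and compare the two feature spaces layer by layer with a similarity index such as linear centered kernel alignment (CKA). The sketch below uses random arrays as stand-ins for real activations; it shows the bookkeeping, not a result.

import numpy as np

def linear_cka(X, Y):
    # Linear centered kernel alignment between two activation matrices
    # (stimuli x units); one common choice of representational similarity.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Placeholder activations: in a real comparison these would come from
# paired stimuli pushed through each trained network, layer by layer.
rng = np.random.default_rng(0)
vision_layers = [rng.standard_normal((200, d)) for d in (64, 128, 256)]
audio_layers = [rng.standard_normal((200, d)) for d in (64, 128, 256)]

for i, (v, a) in enumerate(zip(vision_layers, audio_layers)):
    print(f"layer {i}: CKA = {linear_cka(v, a):.3f}")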
Yeah, I mean, I think that's a valuable thing to do, and it connects to what these guys were saying, which is that there's a sense in which this-- you know, something like these deep convolutional architecture seem like really good ways to do pattern recognition, right? This is what I would see as the common theme between where a lot of the successes happened in vision and in audition. And I don't think-- and, again, everybody here has heard me say this a bunch of times-- I think that pattern recognition does not exhaust, by any means, intelligence or even perception. Like, I think even within vision and audition, there's a lot we do that goes beyond, at least on the surface, you know, pattern recognition and classification. It's something more like building a generative model. Maybe this is a good time to-- that's another theme you wanted to bring in. But, you know, something about building a rich model of the world and its physical interactions. And, to me, you know, and, again, something Dan and I have talked a lot about it and I think it's-- you know, you've heard some of this from me, and Dan has got some really awesome work in a similar vein of trying to understand how, basically, deep pattern recognizers-- to me, that's another way we could call deep convolutional pattern-- or just deep invariant pattern recognizers, where the invariance is over space or time windows or whatever it is that deep convolutional-- you know, these are obviously important tools. They obviously have some connection to not just the six layer cortex architecture but these multiple-- you know, the things that goes on in, like, the ventral stream, for example. I don't know, the auditory system as well. But it's going on from one cortical area to a next. A hierarchy of processing. That seems to be a way that cortex has been arranged in these two sense modalities in particular to do a really powerful kind of pattern recognition. And then I think there's the question of, OK, how does pattern recognition fit together with model building? And, you know, I think in other areas of cognition you see a similar kind of interchange, right? It might be-- like, this has come up a little bit in action planning-- like, model-based planning versus more model-free reinforcement learning. And those are, again, a place where there might be two different systems that might interact in some kind of way. I think pattern recognition also is useful all over-- you know, where cognition starts to become different from perception, for example. There's so many ways, but things like when you have a goal and you're trying to solve a problem, do something. Pattern recognition is often useful in guiding problem solving, right? But it's not the same as a plan, right? So, I don't know if this is starting to answer your question, but I think this idea of intelligence more generally as something like-- I mean, the way Laura put it for learning, the same idea, she put it as, like, goal directed or goal constrained-- how did she put it?-- problem solving or something like that, right? That's a good way to-- if you need one general purpose definition of cognition, that's a good way to put it. And then, on the other hand, there's pattern recognition. And so you could ask, well, how does pattern recognition more generally work and what have we learned about how it works in the cortex or computationally from studying the commonalities between these two sense modalities? 
And then how does pattern recognition play into a larger system that is basically trying to have goals, build models of the world, use those goals to guide its action plans on those models? ALEX KELL: On the topic of convolutional neural networks and deep learning, like, they are reaching, like, kind of impressive successes and they might illuminate some similarities and differences between the modalities. But, in both cases, the learning algorithm is extremely non-biological. And I was wondering if any of you guys-- like, infants don't need millions of examples of labeled data to learn what words are. So I was wondering if you guys have any kind of thoughts on how to make that algorithm more biologically plausible? DAN YAMINS: I would go to what Josh said earlier, which is you look at the real physically embodied environment. You look for those low level cues that can be used to, like, be a proxy for the higher level information, right? And then what you really want is-- ALEX KELL: Can you be a little more specific? What do you mean? DAN YAMINS: Well, do you want to be-- JOSH TENENBAUM: I mean, some people have heard this from Tommy and others here about, like, sort of kinds of natural supervision, right? I mean, several people have talked about this, right? Is that what you're getting at? The idea that, often, just tracking things as they move in the world gives you a lot of extra effectively labeled data. You're getting lots of different views of this microphone now, or whatever, from walking around the stage, or all of our faces as we're rotating. So, when you pointed to the biological implausibility of the standard way of training deep networks, I think a lot of people are realizing-- and this was the main idea behind Tommy's conversion to now be a strong prophet for deep learning instead of being a critic, right?-- was that, the issue of needing lots of labeled training data, that's not the biggest issue. There's other issues, like backpropagation as a mechanism of actually propagating error gradients all the way down to a deep network. I think that troubles more people. DAN YAMINS: I have quite the opposite view on that. JOSH TENENBAUM: OK. DAN YAMINS: Yes. I agree that it's true that the biological plausibility of a specific deep learning algorithm, like backpropagation, is probably suspect. But I suspect that by the same token, there are somewhat inexact versions that are biologically plausible or more plausible anyway that could work pretty well. I think that's less like-- let me put it this way. I think that's a flashy question. I think if you actually end up solving that both from an algorithm point of view and maybe, more importantly, seeing how that's implemented in a kind of real neural circumstance, you'll win the Nobel Prize. But, I mean, I think that-- I feel like that's something that will happen, right? I think that there is a bigger question out there, which is, you know-- I do think that, from an algorithmic point of view, there are things that people don't yet know how to do-- how to replace, like, millions of heavily semantic training examples with those other things, right? Like, the things that you just mentioned a moment ago, like, the extra data. Like, it hasn't actually been demonstrated how to really do that.
And I feel like the details of getting that right will tell us a lot about the signals that babies and others are paying attention to in a way that's really conceptually very interesting and, I think, not so obvious at this point how that's-- I think it'll happen too, but it will be conceptually interesting when it does in a way that I think that-- JOSH TENENBAUM: Both are pretty interesting. DAN YAMINS: Yeah. JOSH TENENBAUM: Some people are more worried about one or the other. But, yeah. DAN YAMINS: Exactly. And, personally, I would say that, from an algorithm point of view, I'm more interested in that second one, because I think that will be a place where the biology will help teach us how to do better algorithms. JOSH TENENBAUM: Learning about the biological mechanism of backpropagation seems less likely to transform our algorithms. Although, again, if you ask Andrew Saxe-- he's one person. He's been here. He thinks-- that's the question he most wants to solve, and he's a very smart person. And I think he has some thoughts on that. But I-- my sympathies are also with you there. I think there are other things besides those that are both biologically and cognitively implausible that need work, too, so those-- but those are two of the main ones that-- HYNEK HERMANSKY: I think you are touching something very interesting, and one of the major problems with machine learning in general, as I see it, which is the use of transcribed or untranscribed data. And I think that this is one direction, which actually, specifically in speech, it's a big, big, big problem because, of course, data is expensive, so-- unless you are Google. But even there, you will want to have transcribed data. You want to know what is inside, and clearly, this is not what-- JOSH TENENBAUM: You guys have this thing in speech. I think this is-- I'd like to talk more about this, because you guys have this thing, particularly at Hopkins, in speech that you call zero resource speech recognition. HYNEK HERMANSKY: Right, that's-- JOSH TENENBAUM: And I think this is a version of this idea, but it's one of the places where studying not just neuroscience, but cognition and what young children do-- the ability to get so much from so little-- is a place where we really have a lot to learn on the engineering side from the science. HYNEK HERMANSKY: Yes, I mean, [INAUDIBLE] that I could speak about it a little bit more in depth. But definitely, this is the direction I'm thinking about, which is like what do you do if you don't know what is in the signal, but you know there is a structure, and you know that there is information you need. And you have to start from scratch, figure out where the information is, how it is coded, and then use it in the machine. And I think it's a general problem, the same thing as in vision. JOSH TENENBAUM: Maybe-- you're asking several different questions. I mean, I don't know if-- have people in the summer school talked about these instabilities? It's an interesting question. People are very much divided on what they say. And I do think that generative models are going to come out differently there. But, again, I don't want to say generative models are better than discriminatively trained pattern recognizers. I think, particularly for perception and a lot of other areas, what we need to understand is how to combine the best of both. So in an audience where people are just neural networks, rah, rah, rah, rah, rah, and that's all there is, then I'm going to be arguing for the other side.
But that's not that I think they are better. I think they have complementary strengths and weaknesses. This might be one. I think pretty much any pattern classifier, whether it's a neural network or something else, will probably be susceptible to these kind of pathologies where you can basically hack up a stimulus that's arbitrarily different from an actual member of the class that gets classified. Basically, if you're trying to put a separating surface between two or n finite classes-- I was trying to see how to formulate this mathematically. I think you can basically show that it's not specific to neural networks. It should be true for any kind of discriminatively trained pattern classifier. I think generative models have other sorts of illusions and pathologies. But they're definitely going to be-- my sense is there are going to be some ways in which any pattern classifier is susceptible, that generative models won't be susceptible to. And there will be others that they will be susceptible to. But it's sort of an orthogonal issue. But I think the illusions that generative models are susceptible to are generally going to have, like, interesting, rational interpretations. It's going to tell you something. They're less likely to be susceptible to just completely bizarre pathologies that we look at and are like, I don't understand why it's seeing that. On the other hand, they're going to have other things that will frustrate us. And my inference algorithm is stuck. I don't understand why my Markov chain isn't converging. If that's the only way you're going to do inference, you'll be very frustrated by the dynamics of inference. And that's where, hopefully, some kind of pattern recognition system will come to the rescue. And if we just look at anecdotal experience, both in speech and vision, there are certain kinds of cases where, like, you know, in a passing rock, you suddenly see a face, or a noise. Or in a tree people will see Jesus's face on arbitrary parts of the visual world. And also sometimes in sound. You hear something. So this idea of seeing signal in noise, we know humans do that. But, for example, there are ways to get deep convnets in vision to see-- you can start off with an arbitrary texture. Have you seen this stuff? And massage it to look like-- like you can start off with a texture of green bars and make it look like a dog to the network, and it doesn't look like a dog to us. And we're never going to see a dog in a periodic pattern of green and polka dotted bars. JOSH MCDERMOTT: But the reason you can do that is because you have perfect access to the network. Right? And if you had perfect access to visual stimuli [INAUDIBLE]. JOSH TENENBAUM: Sure. Sure. But I'm just saying-- these don't-- well, I don't think so. DAN YAMINS: Of course there's going to be visual illusions in every case. The question is whether or not they're going to make sense to humans-- JOSH TENENBAUM: Right. And I think some of them-- ALEX KELL: --as a test of whether or not that model-- JOSH TENENBAUM: If you learned something from that. DAN YAMINS: --is a real model. JOSH TENENBAUM: Just to be clear, the ones that the convnets are susceptible to that say a generative model or a human isn't-- they're not signs that they're fundamentally broken. Rather, they're signs of the limitations of any discriminatively trained pattern recognizer. I would predict, whether it's good or bad, it's signs of the limitations of pattern recognition.
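The "massage a texture until the network calls it a dog" recipe described here is, in its usual formulation, gradient ascent on the target-class score. A minimal PyTorch-style sketch follows, where model, image, and dog_class are hypothetical stand-ins; real fooling-image work typically adds constraints to keep the result looking like the starting texture.

import torch

def fool(model, image, dog_class, steps=100, lr=0.01):
    # Gradient ascent on the target-class logit, starting from an
    # arbitrary image; a sketch of the generic fooling-image recipe.
    x = image.clone().requires_grad_(True)
    for _ in range(steps):
        logits = model(x.unsqueeze(0))
        loss = -logits[0, dog_class]       # minimizing this maximizes "dog"
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad.sign()        # signed-gradient step
            x.clamp_(0.0, 1.0)             # stay a valid image
        x.grad.zero_()
    return x.detach()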
DAN YAMINS: Or signs of the limitation of the type of tasks that are being used, on which the recognition is being done. If you replaced something like categorization with something like the ability to predict geometric and physical interactions x period of time in the future, maybe you'd end up with quite different illusions. Right? There's something very brittle about categorization that could lead to, sort of, the null space being very broad. JOSH TENENBAUM: Exactly. That's what I mean. By pattern recognition, I mean pattern classification in particular. Not prediction but classification. DAN YAMINS: Right. But I don't think it's yet known whether the existence of these sort of fooling images or these kinds of weird illusions means the models are bad or do not pick out the correct solutions. I don't know whether-- people are not totally sure whether that's like, the networks need to have feedback, and that will be what you really need to solve it. It's at that broad level of mechanism. Or is it like, the task is wrong? So it's sort of a little bit less bad. Or maybe like Josh said, it's like the easiest thing would be, well, actually if you just did this with the neural system, you'd find exactly the same thing. But we don't have access to it, so we're not finding it. Right? And so I think it's not totally clear where it is yet. Right? It's a great question, but I feel like the answers are murky right now. ALEX KELL: Yeah. OK. On the topic of feedback, I wanted to kind of move over-- and Gabriel talked about feedback during his talk. And there's really heavy kind of feedback in both of these modalities, where, like, in hearing, as Josh talked about, it goes all the way back to-- it can alter the mechanics of the cochlea, of the basilar membrane. That's pretty shocking. That's pretty interesting. So what is the role-- Gabriel talked about a couple of specific examples where feedback would actually be useful. Can you say something more broadly about, in general, when is feedback useful across the two modalities? Or do we think there are kind of specific instances-- can we talk about specific instances in each? GABRIEL KREIMAN: Throughout the visual system there is feedback essentially all over except for the retina. Throughout the auditory cortex, again, there is feedback and recurrent connections all over. We've been interested in a couple of specific examples of situations where feedback may be playing a role. This includes visual search. This includes pattern completion, feature-based attention. I believe, and hopefully Josh will expand on this, that these are problems that at least at a very superficial level also exist in the auditory domain, and where it's tempting to think that feedback will also play a role. More generally, you can mathematically demonstrate that any network with feedback can be transformed into a feed-forward network just by decomposing time into more layers. So I think ultimately, feedback in the cortex may have a lot to do with, how many layers can you actually fit into a system the size of the head-- it has to go through interesting places at some point. And there are sort of physical limitations to that more than fundamental computational ones. At the heart of this is the question of, how many recurrent computations do you need? How much feedback you actually need, how many recurrent loops you need. If that involves only two or three loops, I think it's easy to convert that into a feed-forward network that will do the same job.
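Kreiman's unrolling claim can be shown in a few lines: a recurrent update run for T steps computes exactly the same function as a feed-forward stack of T tied-weight layers. A minimal sketch, with arbitrary dimensions and weights:

import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((8, 4))
W_rec = 0.1 * rng.standard_normal((8, 8))
x = rng.standard_normal(4)

def recurrent(x, T):
    h = np.zeros(8)
    for _ in range(T):                       # feedback: h feeds back into itself
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unrolled(x, T):
    h = np.zeros(8)
    step = lambda h: np.tanh(W_in @ x + W_rec @ h)
    for layer in [step] * T:                 # time decomposed into T layers
        h = layer(h)
    return h

assert np.allclose(recurrent(x, 5), unrolled(x, 5))

The catch, as the discussion goes on to note, is that the feed-forward version pays for each recurrent iteration with an extra layer.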
If that involves hundreds of iterations and loops, it's harder to think about a biological system that will accomplish that. But at least at the very superficial level, I would imagine that-- JOSH TENENBAUM: Can I ask a very focused version of the same question, or try to, which is, what is the computational role of feedback in vision and audition? Like when we talk about feedback, maybe we mean something like top-down connections in the brain or something like recurrent processing. Just from a computational point of view of the problem we're trying to solve, what do we think its roles are in each of those? GABRIEL KREIMAN: So more and more, I think that's the wrong kind of question to ask. If I ask you-- JOSH TENENBAUM: Why? GABRIEL KREIMAN: What's the role of feed-forward connections? JOSH TENENBAUM: Pattern recognition. GABRIEL KREIMAN: There is no role of feed-forward connections. JOSH TENENBAUM: No. On the contrary. Tommy has a theory of it. You know, you have another theory. Something like very quickly trying to find invariant features-- trying to very quickly find invariant features of certain classes of patterns. That's a hypothesis. It's pretty well supported. GABRIEL KREIMAN: There's a lot of things that happen with feed-forward. HYNEK HERMANSKY: If you want feedback so that you can make things better somehow, I mean, you need a measure of goodness first. I mean, otherwise I mean, how to build-- I agree with you that you can make a very deep structure which will function as a feedback thing. But always what worries me the most, and in general in a number of cognitive problems, is, how do I provide my machine with some mechanism which tells the machine that the output is good or not? If the output is bad, if my image is making no sense, there is no dog but it's a kind of weird mix of green things and it's telling me it's a dog, I need feedback. That's the point where I need the feedback, I believe. And I need to fix things. Josh is talking about tuning the cochlea. Yeah, of course that means that sharpening the tuning is possible. But in communication, I mean, if things are noisy I go ahead and close the door. There's the feedback to me. But I know, as a human being, I know information is not getting through, and I do something about it. And this is what we-- JOSH TENENBAUM: That's great. You just gave two good examples-- I think, just to generalize those, right, or just to say them in more general terms. One role of feedback that people have hypothesized is like in the context of something like analysis by synthesis. If you have a high level model of what's going on, and you want to see, does that really make sense? Does that really explain my low level data? Let me try it out and see that. Another is, basically saying, the role of the feed-forward connections is a kind of invariant pattern recognizer, and tuning those, tuning the filters, tuning the patterns to context or in particular in a contextual way to make the features more diagnostic in this particular context. Those are two ideas. And there are probably others. HYNEK HERMANSKY: Yeah, you gave the wonderful example with analysis by synthesis. But even there, we need an error measure. And I'm not saying that least mean squared error between what I generate and what I see is the right one. JOSH TENENBAUM: So you think feedback helps tune the error measure. HYNEK HERMANSKY: Well, no. It's a chicken and egg problem. I think I need the error measure first, before I even start using the feedback.
Because, you know, feedback, we can talk about it. But if you want to implement it, you have to figure out, what is the error measure or what are the criteria? And that I recognize my output is bad or not good enough. Obviously it will be a little bit bad. Right? I am not good enough and I have to do something about it. Once I know it, I know what to do, I think. Well maybe not me, but my students, whatever. This is one of the big problems we are working on, actually-- figuring out, how can I tell that the output from my neural net or something is good or bad? And so far I don't have any answer. JOSH TENENBAUM: But I think it's neat that those two or three different kinds of things-- they are totally parallel in vision and audition. They're useful for both, engineering-wise, and people have long proposed them both in psychology and neuroscience. GABRIEL KREIMAN: Generally I think there's a lot to be done in this area, I think. And the notion of adding-- I mean, a lot of what's been happening in convolutional networks is sort of tweaks and hacks here and there. If there are fundamental principles that come from studying recurrent connections and feedback from the auditory domain, from the visual domain, those are the sort of things that, as Dan was saying, could potentially sort of lead to major jumps in performance or in our conceptual understanding of what these networks are doing. I think it's a very rich area of exploration, both in vision and the auditory world. ALEX KELL: And I wanted to ask Josh one more thing-- the common constraints thing. You were kind of saying like, it seems like there would be common constraints and there are probably consequences of those. What are some kind of specific consequences that you think would come out? Like, how can we think about this? To the extent that there is kind of a shared system, what does it mean? JOSH TENENBAUM: Well-- so again, I can only, as several of the other speakers said, give my very, very personal, subjectively biased view. But I think the brain has a way to think about the physical world to represent, to perceive the physical world. And it's really like a physics engine in your head. And that the different-- it's like analysis by synthesis-- that's a familiar idea, probably best developed classically in speech, but in a way that's almost independent of sense modality. I think we have different sensory modalities, and they're all just different projections of an underlying physical representation of the world. I think that, whether it's understanding simple kinds of events as I was trying to illustrate here, or many other kinds of things, I think, basically, at some point, one of the key outputs of all of these different sensory processing pipelines has to be a shared system that represents the world in three dimensions with physical objects that have physical reality-- properties like mass or surface properties that produce friction when it comes to motion-- roughness. Right. There's some way to think about the forces, whether the force that one object exerts on another, or the force that an object presents in resisting, when I reach for it, either the rigidity that resists my grasp and that [INAUDIBLE] with it. Or the weight of the object that requires me to exert some force to do that. So I think there has to be a shared representation that bridges perception to action. And that it's a physical representation, and that it has to bridge-- it's the same representation that's going to bridge the different sense modalities.
DAN YAMINS: Yeah, but to make [INAUDIBLE]. JOSH TENENBAUM: More what? DAN YAMINS: More brain meat-oriented, I think that there's a version of that that could be a constraint that is so strong that you have to have a special brain area that's used as the clearinghouse for doing that common representation. JOSH TENENBAUM: I'm certainly not saying that. DAN YAMINS: OK. Right. No, I didn't think you were. But that would be a concrete result. Another concrete result is like effectively that the individual modality structures are constrained in such a way that they have an API that has access to information from the other modality, so that message passing is efficient, among other things. Right? And I think that they can talk to each other. JOSH TENENBAUM: I think it's often not direct. I think some of it might be, but a lot of it is going to be-- it's not like vision talking to audition, but each of them talking to physics. DAN YAMINS: Right. Right. JOSH TENENBAUM: It's very hard for a lot of-- DAN YAMINS: Right. And that's a third-- JOSH TENENBAUM: Mid-level vision and mid-level audition are hard to talk to each other. DAN YAMINS: Right. And a third possibility is that it's actually not really so much of-- that there's no particular brain area, and they're not exactly talking to each other's APIs directly. It's just that there is a common constraint in the world that forces them to have similar structure or sort of aligned structure for representing the things that are caused by the same underlying phenomenon. And that's like the weakest of the types of constraints that you might have. Right? The first one is very strong. JOSH TENENBAUM: But it's not vague or content-less. So, Nancy Kanwisher and Jason Fischer have a particular hypothesis. They've been doing some preliminary studies on the kind of intuitive physics engine in the brain. And they could point to a network of brain areas, some premotor, some parietal. You know, it's possible. Who knows. It's very early days. But this might be a candidate way into a view of brain systems that might be this physics engine. DAN YAMINS: Right. But you are-- JOSH TENENBAUM: Also ventral stream area. DAN YAMINS: [INAUDIBLE] be something like, if you optimized a network's set of parameters to do the joint physics prediction interaction task, you'll get a different result than if you sort of just did each modality separately. And that would be a better-- that new different thing would be a better match to the actual neural response patterns in interesting-- JOSH TENENBAUM: Yeah. I think that would make a really cool thing to explore. DAN YAMINS: And that's, I think, the concrete way it would cash out. And that certainly seems possible. JOSH TENENBAUM: And it might be, you know, a lot of things which are traditionally called association cortex, right. This is an old idea and I'm not enough of a neuroscientist to know, but a cartoon history is, there are lots of parts of the brain that nobody could figure out what they were doing because they didn't respond in an obvious selective way to one particular sense modality. It wasn't exactly obvious what the deficit was. And so they came to be called the association cortex. That connects to the study of cross-modal association and association to semantics, and this idea of association. It's this thing you say when you don't really know what's going on. But it's quite possible that big chunks of the brain we're calling association cortex are actually doing something like this.
They're this convergence zone for a shared physical representation across different perceptual modalities and bridging to action planning. And a big open challenge is that we can have what feels to us like a deep debate that we can think of as like the central problem in neuroscience of, how do we combine whatever you want to call it-- I don't know, generative and discriminative, or model-based analysis by synthesis and pattern recognition. But actually there's a lot of parts of the brain that have to do with reward and goals. And again, I thought Laura's talk was a really good illustration of this, understanding perception is representing what's out there in the world, but that's clearly got to be influenced by your goal. And what is certainly a big problem is the relation between those. It says something that most of us who are studying perception don't think we need to worry about that. But I think we should, particularly those of us, again, echoing what Laura said-- if we're studying learning, we definitely, I think, need to think about that more than we do. HYNEK HERMANSKY: It may not be exactly related to what you are asking, but I don't know. I believe that we are carrying the model of the world in our brain, and we are constantly evaluating the fit of what we expect to what we are seeing. And as long as we are seeing or hearing what we expect, we don't work very hard, basically, because the model is there. You know what I'm going to say. I'm not saying anything special. I look reasonable and so on and so on. And when the model of the world is for some reason violated, that may be one way how to induce the feedback, because then suddenly I know I should do something about my perception. Or I may just give up and say-- or I become very, very interested. But I think this model of the world, the priors which we all carry with us-- it's extremely important and it helps us to move through the world. That's my feeling. In speech we have a somehow interesting situation that we can actually predict-- say we are interested in estimating probabilities of the speech sounds. But we can also predict them from the language model. Our language model is learned typically very differently from a lot of texts, and it's a lot of things. And so we had quite a bit of success in trying to determine if the word recognizer is working well or not, by comparing what it recognizes and expects and what it sees. And as long as these things go together well, it's fine. If there is a problem between these two, we have to start working. JOSH TENENBAUM: I think, you know, physics is a source of beautiful math and ideas. I think it's an interesting thing to think about, maybe some tuning of some low level mechanisms in both sensory modalities might be well thought of that way. Right. But I think it is dangerous to apply too much of the physicist's approach to the system as a whole. This idea that we're going to explain deep stuff about how the brain works as some kind of emergent phenomenon, something that just happened to work that way because of physics. This is an engineered system. Right. Evolution engineered brains. It's a very complicated-- I mean, we have had a version of this discussion before. But I think it's something that a lot of us here are committed to. Maybe not all of us. But the way I see it is, this is a reverse engineering science. And the brain isn't an accident. It didn't just happen.
There were lots of forces over many different timescales acting to shape it to have the function that it does. So if it is the case that there are some basic mechanisms, say maybe at the synaptic level that could be described that way, it's not an accident. They were [INAUDIBLE]. I would call that, you know, biology using the physics to solve a problem. And again there's a long history of connecting free energy type approaches to various elegant statistical inference frameworks. And it could be very sensible to say, yes, at some levels you could describe that low level sensory adaptation as doing that kind of just physical resonance process. Or nonequilibrium stat mech could describe that. But the reason why nature has basically put that physics interface in there is because it's actually a way to solve a certain kind of adaptive statistical inference problem. ALEX KELL: All right. Cool. Let's thank our panel. [APPLAUSE] |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_64_MVPA_Window_on_the_Mind_via_fMRI_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. REBECCA SAXE: There's a whole bunch of limitations of Haxby-style correlations-- one of them is that all the tests are binary. The answer you get for anything you test is that there is or is not information about that distinction, so there's no continuous measure here. It's just that two things are more-- they are different from one another or they are not different from one another. And so once people started thinking about this method it became clear that this is actually just a special case of a much more general way of thinking about fMRI data. So this particular method-- using spatial correlations-- is very stable and robust, but it's a special case of a much more general set. And here's the more general idea. The more general idea is that we can think of the response pattern to a stimulus in a set of voxels, for example-- the voxels in a region-- we can think of that response pattern as a vector in voxel space. So every time you present a stimulus you get the response of all the voxels. Now, instead of thinking of that as a spatial pattern, think of that as a vector in voxel space. Every voxel defines a dimension, and the position in voxel space is how much activity in each of those voxels there was. Can everybody do that mental transformation? This is, like, the key insight that people had about MVPA-- is we had been thinking about everything in space-- in the space of cortex-- but instead of thinking of a spatial pattern on cortex, treat each voxel as a dimension of a very multi-dimensional space. Now, the response to every stimulus is one point in voxel space. OK? As soon as you think of it that way, then your mental representation of fMRI data looks like that. Right? So your mental representation of fMRI data used to be a BOLD response, and then it was a spatial pattern of a cortex, and now it's a point in voxel space. And if you can follow those three transformations then you realize that a set of points in a multi-dimensional space is the kind of problem that all of machine learning for the last 20 years has been working on. Right? And so everything that has ever happened in machine learning could now be used in fMRI, because-- well, almost-- because machine learning has just absolutely proliferated in both techniques and problems and solutions to those problems for handling data sets where you have no idea where the dataset came from, but it's now represented as multiple points in a multidimensional space. And so that's what happened about five years ago-- is that people realized that we could think of fMRI as the response to every stimulus as a point in voxel space. A set of data is a set of points in voxel space. Now, do anything you want with that. And the first most obvious thing to do is to think of this as a classification problem. OK? So we created conditions in our stimuli or dimensions in our stimuli, so now we can ask, can we decode those conditions? Can we find clusters? Can we find dimensions? Right? All the standard things that people have done when you had points in multi-dimensional spaces. 
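A minimal sketch of this reframing, with synthetic numbers standing in for preprocessed BOLD responses: each stimulus becomes one vector in a V-dimensional voxel space, and an experiment becomes a point cloud that standard machine-learning tools can operate on.

import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 40

# One stimulus -> the response of every voxel -> one point in voxel space.
response = rng.standard_normal(n_voxels)

# Many trials stack into a (trials x voxels) matrix: a cloud of points.
X = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)              # condition labels

# Dissimilarity of two stimuli is now just distance between two points.
d = np.linalg.norm(X[0] - X[1])
print(f"distance between trial 0 and trial 1: {d:.2f}")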
And so, again, the most common thing people now do is, now that you think of fMRI data that way, to try linear classification of the categories or dimensions that you're interested in, and typically using standard machine learning techniques. So think of training a classifier on some of your data and testing it on independent data and trying to find the right classification techniques that can identify whatever distinction you're interested in in the data set that you built. And so, the way that this one looks is that you take some-- now, voxels are on the y-axis of this heat map, so we have, whatever it is, 80 or 100 voxels in a region-- maybe more, and for every stimulus you have the response in every voxel to that stimulus. Right? So each of those columns now is a representation of where that stimulus landed in voxel space, and you have a whole bunch of instances. And so now what you're going to do is use the training to learn a potential linear classifier that tells you what was the best way to separate the stimuli that came from one labeled set versus the stimuli that came from some other labeled set. And the test of that is going to be-- take a new stimulus or new response and use the classification you learned to try to decode which stimulus that came from, and measure your accuracy. And so the new measure of the information in fMRI is going to be classification accuracy. Does that make sense-- are people with me? OK, because that's where a lot of fMRI is right now-- is now thinking about responses to stimuli as points in voxel space and the problem as one of classification accuracy in independent data. OK. Here's one experiment that we did where we use classification. So now, another thing just to note is that in this context you're often trying to classify a single trial. Right? So in our case, we're always trying to classify a single trial, so we've gone from partitioning the data into two halves and asking about similarity, to training on some of the data, and now classifying single independent trials. OK. So here's a case where we tried to do that, and it was an extension of the stuff that I just showed you that you could classify seeing versus hearing, and so we tried to replicate and extend that. So we told people stories like this-- there's a background-- so Bella's pouring sleeping potion into Ardwin's soup, while her sister, Jen, is waiting. They're holding their breath while he starts to eat. The conclusion of the story is always going to be the same. Bella concludes that the potion worked, and then we tell you based on what evidence she made that conclusion. Another case here is going to be-- Bella stared through the secret peephole and waited. In the bright light she saw his eyes close and his head droop, so that's her evidence for the conclusion that the potion has worked. That's OK evidence, and we can vary that in a bunch of ways. So one is we can change the modality for evidence. Instead of seeing something, she can hear something. So for example, she pressed her ear against the door and waited. In the quiet she heard the spoon drop and a soft snore. So that's similar content of information, but arrived at through a different modality. Or we can change how good her evidence is, and so in this case we did it by saying, she tried to peer through a crack in the door. In the dim light she squinted to see his eyes closed. OK, so that's less strong perceptual evidence for the conclusion that the potion has worked. OK.
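A generic sketch of the train/test decoding pipeline described above, using scikit-learn and synthetic data; the weak mean shift added to one condition plays the role of a real condition difference.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 100
X = rng.standard_normal((n_trials, n_voxels))     # trials x voxels
y = np.repeat([0, 1], n_trials // 2)              # e.g., seeing vs. hearing
X[y == 1] += 0.3 * rng.standard_normal(n_voxels)  # weak condition signal

# Train on part of the data, test on held-out trials; report accuracy.
acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")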
And so now what we're going to ask is-- if we train on one set of stories, on the pattern of activity in a brain region for stories that vary on either of these dimensions, one at a time-- either vary on modality or vary on quality-- in a new test set, can we decode that dimension? Yeah, and the first answer is we can-- both of them. One thing about this is that this measure isn't binary anymore. So since, for every stimulus, we're asking whether we can classify that stimulus or not-- we can get for every subject, so we can get for each item the probability-- for each subject for each item we get a measure of whether it was classified correctly or not, so across subjects, across items-- we know for every item the probability of it being correctly classified or not. And then we can ask, is that related to other continuous features of that item? So in this case what we can say, for example, is the quality dimension-- how good your evidence is for the belief that you conclude-- that's a continuous metric-- it's a continuous feature. It can be judged continuously by human observers, so for each item we can ask, how good is the evidence for the conclusion for this specific story? That judgment by human observers of how good the evidence is continuously predicts the probability of that item being classified as being good evidence or bad evidence-- even over and above the label that we gave it. So if you regress out the labels there's a continuous predictor. So something, like, imagine a neural population that responds-- a subpopulation that responds more the better the evidence, continuously, so that classification gets better as you get further out on that dimension. It's also not redundant across brain regions, so there's different information in different brain regions. And this is just to show you that in two other brain regions-- so in the right STS we can decode quality, but not modality, and in the left TPJ we can decode modality, but not quality. And the left TPJ we've replicated a bunch of times. In the DMPFC we can't decode modality or quality, but we can decode valence, which is the thing I told you the right TPJ doesn't decode. And then if we go back and look at valence in this dataset-- we can only decode valence in the DMPFC. So this is, to me, this is starting to get cool, right? Three features of other people's mental states represented differentially in different brain regions. This distinction between the more epistemic stuff-- like modality and quality which is represented in the TPJ, and valence which is represented in the DMPFC-- I think is real and deep and hints at one of the really most important distinctions within our theory of mind that I mentioned at the very beginning-- between epistemic states and affective or motivational states. So what's cool about classification analyses? They have all the same properties as the Haxby-style analyses in principle, because they're actually just a generalization of the Haxby analyses, except that they're a lot less robust, because what you're trying to classify are single trials or single items. And so noisy data collapses faster in these classification strategies than in Haxby-style analyses where you're averaging. But otherwise, those are the same two techniques. What's nice about the classification analyses is you can get item-specific outcomes, right? So you can say, for a specific item, how likely it is to be classified as one thing or another?
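A sketch of that item-level logic: relate each story's classification accuracy (averaged over subjects) to a continuous human rating, over and above the binary label. All numbers below are invented placeholders for the real ratings and decoding outcomes.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_items = 48
quality = rng.uniform(1, 7, n_items)               # continuous human ratings
label = (quality > 4).astype(int)                  # good- vs. bad-evidence label
# Simulated per-item classification accuracy that tracks the rating.
p_correct = 0.5 + 0.05 * (quality - 4) + 0.05 * rng.standard_normal(n_items)

def residualize(v, g):
    # Remove each label group's mean ("regress out the labels").
    return v - np.array([v[g == gi].mean() for gi in g])

r, p = pearsonr(residualize(p_correct, label), residualize(quality, label))
print(f"partial correlation of accuracy with rating: r = {r:.2f}, p = {p:.3g}")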
And this is where I started the talk before, which is that in both of these cases we think of a hypothesis and test it sequentially. And so the representational similarity matrix tests whole hypothesis spaces instead of single features. Classification and Haxby-style stuff are ways to think of a feature or dimension that might be represented in a brain region you care about, and test whether or not it's represented. So they're a way of thinking of a hypothesis and testing it, and thinking of a hypothesis and testing it-- and that's what I mean by sequentially. So you can think of, does the right TPJ represent the difference, for example, between Grace poisoning the person knowingly and poisoning the person unknowingly? The answer to that is yes, it does, but that's one hypothesis. And then we can come up with another hypothesis, and then another hypothesis. And what's interesting about representational dissimilarity matrices-- one of the versions of MVPA people use these days-- is that they take a different approach. So instead of trying to think of one hypothesis and test it, you propose a hypothesis space and test the space as a whole, and that gives you both different sensitivities and strengths and different weaknesses. So I'll work through an example in which we did this. I told you that I would come back to thinking about other people's feelings, and in this experiment we took different kinds of things that people could feel as one subspace of theory of mind. So our stimuli, in this case, are 200 stories about people having an emotional experience. And we're going to look at what we can understand about how your brain represents those different-- your knowledge that lets you sort out people's experiences in those cases. OK, so it's hard in the abstract, let's do it in the concrete. So in the behavioral version of this test I give you a list of 20 different emotions-- jealous, disappointed, devastated, embarrassed, disgusted, guilty, impressed, proud, excited, hopeful, joyful, et cetera-- so you have 20 different choices. And I'm going to tell you a single story about a character you don't know, and something they experienced-- very briefly-- and what I want you to think to yourself is, which emotion did they experience in that case? OK? So here's one. After an 18 hour flight, Alice arrived at her vacation destination to learn that her baggage, including camping gear for her trip, hadn't made the flight. After waiting at the airport for two nights, she was informed that the airline had lost her luggage and wouldn't provide any compensation. How many people think she felt joyful? How many people think that she felt annoyed? How about furious? OK, so furious is the modal answer and annoyed is the most likely second choice answer to that case. Here's a different one. Sarah swore to her roommate that she would keep her new diet. Later, while she was in the kitchen, she took a bite of a cake she had bought for the dinner party. Her roommate arrived home to find that she'd eaten half the cake and broken her diet. How many people think that she would feel disgusted? Terrified? Embarrassed? OK. And just to give you a sense of how fine grained your knowledge is in this case, think about this difference. In this case she swore she would keep her diet and then broke it, right? What about the difference if she first ate the cake and then swore she would keep her diet? Right? That's a totally different texture to the story. OK.
So we have incredibly fine grained knowledge of how a description of a situation predicts an overall emotion. You can see that in a behavioral experiment, so what I'm showing you here is-- on the y-axis the emotion that we intended when we wrote the story-- so ten stories for each category, 200 stories in all-- and on the x-axis is the percent of participants picking that label. And so the first thing is that 65% of the time, people pick the label we intended. If instead you take half the subjects to determine a modal answer and use the other half of the subjects as the test set, you get the same answer. There's about 65% agreement on the right single label out of 20. That's, of course, way above chance, which is 5%, so people are quite good at this. And the off-diagonal is also meaningful, so that also contains information-- the second best answer, right? So annoyed as opposed to furious, for example. OK, so that's a huge amount of rich knowledge about other people's experiences, from these very brief descriptions of events to a very fine grained classification of which emotion they're experiencing. And one way to look at these data is to ask, OK, well, that's knowledge that we have-- where is that knowledge in the brain? That's sort of a first question you could ask, and you could ask it by just saying, if we try to use-- so in this case, we're going to do train and test. So we train a classifier on a patch of cortex, based on five examples from each condition, and then we test on the remaining half of the data. And we just ask, based on the pattern of activity in a patch, can you get above-chance classification in the independent data? For every patch where that's true we put a, sort of, bright mark, and then ask, where in the brain is the relevant decoding that would let you be above chance on this distinction? The answer is: exactly the same brain regions that I have been talking about and showed you before. That is where there's above-chance classification, overlaid here on the standard belief versus photo task in green. So within the brain regions involved in theory of mind or social cognition are the brain regions that can classify above chance in this 20-way distinction. And then this is just looking inside each one of those. Inside each of the regions-- that's four of the regions in that group that I showed you before-- using just the pattern of activity in that brain region, you can do above-chance classification on this 20-way distinction. And there's a hint that that information is somewhat non-redundant, because if you combine information across all of them you do slightly better than if you use any one of them alone. OK, so now the question is, how can we study what knowledge is represented in each of these brain regions? Right? So we know that there's some information about that 20-way classification, but can we learn anything about the representation of emotions in those brain regions using fMRI? And that's where the representational dissimilarity matrices come in as a strategy. OK, so the question is, how might you represent the knowledge that you have of what Alice is experiencing, for example, in this story? What's a possible hypothesis? And the way representational dissimilarity matrices work as a strategy for fMRI analyses is that what you should do is think of multiple different hypotheses about how that knowledge could be represented.
So a first hypothesis, which is deep in the literature on emotions, is that we represent other people's emotional experience in terms of two fundamental dimensions of emotional experience-- valence and arousal. Have you guys heard of valence and arousal as the two fundamental-- OK. So this hypothesis says, when we think about emotions-- our own or other people-- we put emotions in a two dimensional space, which is, how good or bad did it make you feel, and how intense was it? OK. So terrified is negative and very intense. Lonely is negative, but not that intense. Right? That's the idea. Happy is positive and somewhat intense. Thrilled is happy and more intense. So that idea is that there's these two basic dimensions of emotional experience, and so one thing we can do is have each of our stories, like this one-- we can have people tell us in that story was she feeling positive or negative? How positive or negative, and how intensely? And so, for each individual story we can have a representation of it as a point in that space. And if you use just that, you can classify our 200 stories reasonably well-- not as well as people can, but still reasonably well. OK, so the 200 stories do clump into lumps in that two dimensional space. But another idea is that valence and arousal seem not to capture the full texture of the 20 categories that we originally have. It's not that we can't embed 20 categories in two dimensions-- you obviously can have 20 clusters in a two dimensional space. But we had the intuition that it's not a two dimensional space-- that those two dimensions don't capture all the features that people have and know about when they use the stimuli. And so, based on another literature called appraisal theory-- what we tried to do is capture some of the abstract knowledge that people have about these situations that lets them identify which emotion it is. And we did that by having them rate each of these stories on a bunch of abstract event features. So those event features are things like-- was this situation caused by a person or some other external force. So I hope you guys have the sense that if your luggage gets lost on your way to the trip-- it's different if that was airline incompetence versus a tornado, right? Does everybody have that intuition? The emotion is different. OK. So that's an important abstract feature of our knowledge of other people. Was it caused by you yourself? If you left your luggage at home, that's different from if airline incompetence caused you not to have your luggage. And does it refer to something in her past? Is she interacting with other people? That makes a really big difference, for example, in pride and embarrassment-- whether other people are around. How will it affect her future relationships? So things that potentially cause harm to future relationships feel very different from things that are just annoying right now but will end. So these are abstract features and they encapsulate things we know about emotion relevant features of the situations people find themselves in. So we came up with 42 of these and we had every story rated on all of those dimensions. And of course, we can, again, classify the stories as 20 clusters in a 42 dimensional space, right? Again, of course we can. But the question is [INAUDIBLE] this is those data. This is just every set of 10 stories and their average rating on our-- oh, 38-- on our 38 appraisal features, so that creates a 38 dimensional space. 
Here the idea is-- for each category-- like, for all the stories about being jealous-- you can get-- for, let's say, for the two dimensions of valence and arousal-- the average value of valence and the average value of arousal, right? So that's a point in a two dimensional space-- the stories about being jealous. OK. Then you take the stories about being terrified. What's their valence and arousal? So that's another point in a two dimensional space. And then you take the distance between them, and that number goes in a representational dissimilarity matrix. So the further away you are in a two dimensional space, the more dissimilar. And you could do the same thing in a 42 dimensional space, a 38 dimensional space, any dimensional space you want-- what you need to know is just how far away you are. And so what a representational dissimilarity matrix has in it is for every pair. So the jealous stories versus the grateful stories-- the number in that cell is the distance from the mean position in your space of all the jealous stories to the mean position in your space of all the grateful stories. Does that make sense? And that could be true of any dimensionality. When you know these 38 features-- so this is behavioral data-- when you know the 38 features of these emotions, the green bar is how well you can classify new items, just behaviorally. So if I give you a new item and all I tell you is it's value in these 38 dimensions, how well can you tell me back which emotion category it comes from? The best you could possibly do is 65%, because that's what human observers do in all of our-- so the reality is the human observers-- the features come from human observers, so our ceiling's going to be 65%, and the answer is about 55%. OK. And you can take that in two different ways. One tendency is to say, wow, we know a lot of the key features that go into emotion attribution. I think, Amy, who I did this work with, had a tendency to feel that way. And I think, wow, we thought of 38 things and we still didn't think of all the important things. Like, what are those other things that we didn't think of that explain the rest of the variation? So you could feel either way about this, but in any case, once you know the position of one of these stories in the 38 dimensional space of these features, you know a lot about which emotion category it came from. And then this is the correlation to the neural RDM data that I showed you. And so, again, what I showed you is, so observer's knowledge-- that's everything that we know that lets us classify a story. Valence and arousal is the yellow bar-- that's just these two features of the story, and they're both less good than this intermediate thing, which is the 38 dimensional space. And one question is, like, do I really think it's 38 dimensions? No, definitely not. That was just the set of all the things that we could think of. How many dimensions is it, really? Again, I don't know, really, but I can tell you that the best ten dimensions capture most of the information from the 38 dimensions. So what we've discovered so far is ten really important dimensions of your knowledge of emotion. I don't, again, think that means that our knowledge of emotion is ten dimensional. Lots of this is limited by the set of stimuli that we chose, the resolution of the data that we have, and so forth and so on. But in these data you need something on the order of ten dimensions to get close to human performance or close to the genuinely differential signal in the neural data. 
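Here is a minimal sketch of that construction: average each category's stories in a feature space of any dimensionality, take pairwise distances between the category means to get the RDM, and-- anticipating the point that follows-- compare it to a neural RDM with a single rank correlation. All data here are simulated; the 20-category, 38-feature shapes mirror the example only for illustration:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_cat, per_cat, n_dims = 20, 10, 38
story_features = rng.normal(size=(n_cat, per_cat, n_dims))  # rated features

# One point per emotion category: the mean of its stories in feature space.
cat_means = story_features.mean(axis=1)
model_rdm = squareform(pdist(cat_means))    # 20 x 20 pairwise distances

# A (simulated) neural RDM built the same way from voxel patterns.
neural_rdm = squareform(pdist(rng.normal(size=(n_cat, 100))))

# Parameter-free comparison: rank-correlate the unique off-diagonal cells.
iu = np.triu_indices(n_cat, k=1)
rho, _ = spearmanr(model_rdm[iu], neural_rdm[iu])
print(f"model-to-neural RDM correlation: rho = {rho:.2f}")
```

Note that `pdist` on the category means works identically whether the space has 2, 38, or any number of dimensions-- which is exactly the point made next.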
If you take one thing away from this talk about the methods used in representational dissimilarity matrices-- really only one thing-- here's the one thing I want you to know: the dimensionality of the theory that generated your representational dissimilarity matrix does nothing for you in the fit to your data. Nothing at all. It's a parameter-free fit. OK? So for anybody to whom those words mean anything, this will be important, so I want you to actually know this. Representational dissimilarity matrices provide a parameter-free fit to the data, and therefore, the dimensionality of the theory that generated the representational dissimilarity matrix has nothing to do with the fit of the data. You can probably notice I should have ordered this better. Valence has two dimensions; the observers have a lot of dimensions-- I don't know how many, but a lot more than 38. We know that because 38 doesn't explain all their data. So as you go up in dimensions-- in principle, having more dimensions doesn't help in this setting. You might think that a bigger theory would overfit rather than fit the data, and here's why it can't. Because the way you build a representational dissimilarity matrix is, no matter how many dimensions you have in your data set, for every pair of stimuli you take one number, and then the representational dissimilarity matrix encodes the relationships among those numbers. OK? So jealousy is more similar to irritation than it is to pride. By how much? OK? And those relative differences are all you have. You have nothing else, and so there are no parameters. Right? You have the same amount of information in a representational dissimilarity matrix that you generated from a one dimensional theory, a two dimensional theory, a 38 dimensional theory, or an infinite dimensional theory. The size of the theory doesn't make any difference, because what you get in the end is exactly the same thing-- the relative distance between every two points in the set. There's a few things to say about-- so one thing to say about the representational dissimilarity analysis that I just showed you is that it tells you that the 38 dimensional theory is better than the valence theory. Like, the event feature theory is better than the valence theory, but it doesn't tell you why. Right? It doesn't tell you whether any specific one of those features is capturing variance in any specific one of those regions. It tells you that that whole set was better than this other whole set, and maybe this is what you're getting at. It's much less good for post-hoc questions, for saying, but why? Which aspect of that theory was better than the valence and arousal? It gives you an all-things-considered answer, not a dimension-specific answer. That's one thing that is a limit in the way you should use representational dissimilarity analyses. There's two key problems that I think bear reflecting on about MVPA, and one of them is a catastrophe and the other one is an incredibly deep puzzle. And I think I should just say them right away before you get too excited, because all of this stuff was really exciting and now I'm going to tell you a catastrophe and a puzzle. Here's the catastrophe. The catastrophe is that you can't make anything of null results. OK, now, here's why. Because when I say that you can decode something from an MVPA analysis, what I mean is that at the scale of voxels, there's some signal in terms of which voxels are relatively higher or relatively lower in response to the stimuli. Right?
So in voxel space, or in spatial space, whichever one of those you find helpful-- it means that at the level of voxels we could cluster these stimuli. And what that says is that there are something like distinct populations in this region, responding across that feature dimension, and they're spatially segregated enough that we could pick up on them with fMRI. But who cares if they're spatially segregated enough that we could pick up on them with fMRI, right? fMRI is at the scale of a millimeter. And there could be many, many, many things that are represented by populations of neurons within a region that are not spatially organized at the scale of a millimeter. Not only could there be-- there absolutely, definitely are. There's a whole bunch of things that we already know are really important properties of neural representations of things we care about, and we know that their spatial scale is not high enough that they can be picked up on with fMRI. So two cases that I'll tell you about because you should care about them-- one is face responses in the middle temporal region that Doris and Winrich study for face representations in monkeys. It's one of the middle ones. In that one there are face features that can tell you how far apart the pupils are, how high the eyebrows are-- did Winrich show you this amazing data? A totally amazing, beautiful feature space of face identity representation? One of the most strikingly beautiful things I've ever seen. And he already knows-- he and Doris already know-- that there's no spatial relationship at all between the property that one neuron signals and its distance from other neurons that signal other properties. There's no spatial organization at all. So if you know that right here is a neuron that responds to eye width, you know nothing more about the preferred property of the neuron next to it than about a neuron a centimeter away. There's no spatial structure to which feature a given neuron responds to, which means that you absolutely could not and cannot pick up on that in fMRI, which Doris has shown. This feature structure information cannot be picked up on with fMRI, even though it is there and really important. Another example is valence encoding in the amygdala. The amygdala contains some neurons that respond to positively valenced events and other neurons that respond to negatively valenced events, and they are as spatially interleaved as physically possible-- that's what Kay Tye's data shows. You couldn't get them more spatially interleaved than they are. They are as close together as the size of the neurons allows. So you absolutely will never be able to decode with fMRI in that population-- the amygdala-- that there are different populations for positively and negatively valenced events, but there are. OK. So that means that when you see something in fMRI it's probably there, but when you don't see it in fMRI you don't know that it's not there. And the reason why that's a total catastrophe is that it means that when I tell you that a region codes A and not B-- I don't know that it doesn't code B. And when I tell you that this thing is coded in region A and it's not coded in region B-- I don't know that it's not coded in region B. So I can never show you a double dissociation. I can never show you a single dissociation. I can never show you a dissociation at all.
All I can say for sure is that the spatial scale of the information is different between one region and another, or between one piece of information and another, and we have no reason to believe that that matters at all. Right? Really important things are encoded at very fine spatial scales. And so any time I tell you-- which I told you a bunch of times because I think it's really cool-- that there's a difference in what feature is encoded where, you have no reason to believe me. And that's the catastrophe. It's a total catastrophe. If you can't make distinctions, you can't make any conclusions at all. I'll just briefly say the other thing that's a problem with this, which is that this idea of similarity space-- the idea that you should think of a concept, like jealous, as a point in a multidimensional space, and what it means to think of somebody as jealous is to think of them as a certain distance from irritated and angry and proud and impressed-- that idea has been thoroughly undermined in psychology and psychophysics and computational cognition. It's really a bad theory of concepts. It can't do any of the work that concepts are supposed to do. One of the most important things they can't do is compositionality. It can't explain the way concepts compose, which is absolutely critical to the way that we think and even more critical to the way that we think about other people's minds, because every thought you have about somebody else's mental state is a composition of an agent, a mental state, and a content. And so, this whole way of thinking about concepts as points in multi-dimensional spaces works, but shouldn't work. And that's the other problem with this whole endeavor. OK. There's a bunch of things that we're doing with this that I will just briefly mention in case people want to think about it or know about it. The two things I'm really excited about-- one is adding temporal information, so looking at the change in information in brain regions over time, and how they influence one another. And that's my post-doc Stefano Anzellotti's project. And another thing that I'm excited about is that-- to the degree that you take these positive claims as something interesting, which I actually still do in spite of all my end of the world talk-- one thing that I think is really neat is the idea of increasingly differentiable representational spaces. So two sets of stimuli that produce clusters that are not separable-- for example, in voxel or neural space-- and making them increasingly distinct. So Jim DiCarlo calls this unfolding a manifold. Right? That idea, which is Jim DiCarlo's model of the successive processing in stages from V1 to V2 to V4 to IT-- I think that's a really cool model of conceptual development. That what you might have is originally neural responses that can't separate stimuli along some interesting dimension-- that unfold that representational space to make them more dissimilar as you get that concept more-- or that dimension or feature of the stimuli more distinctively represented. And so we've tried a first version of this with justification-- so kids between age seven and 12 get better and better at distinguishing people's beliefs that have good and bad evidence. And we've shown that that's correlated with a neural signature in the right TPJ getting more and more distinct over that same time and those same kids. 
And so I think thinking of representational dissimilarity as a model of conceptual change, while certainly wrong, is probably really powerful, and I'm very excited about it. And the last thing I will do is thank the people who did the work, especially everybody in my lab, and two PhD students-- Jorie Koster-Hale and Amy Skerry and you guys. Thank you. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_42_Shimon_Ullman_Atoms_of_Recognition.txt | SHIMON ULLMAN: Now for a different, entirely different type of issue that has more to do with recognition, some psychophysics, some computer vision. But you will see at the end the motivation was really to be able to recognize and understand really complicated things that are happening in natural images. Now, when we look at objects in the world-- people have worked a lot on object recognition-- we can recognize complete objects well, but we can also recognize very limited configurations of objects. So we are very good at using limited information, if this is what's available, in order to recognize what's in there. And this is some arbitrary collection. I guess you can recognize all of them. Some of them, if you think about a person or even a face-- this is a very small part of a face-- everybody I guess knows what it is, right? It's not even a recognizable, well-delineated part like an eye. You see a part of a person here. We know what it is, right? I mean, everybody recognizes this, and so on. Now, I think that the ability to get all the information out even from a limited region plays an important role in understanding images. And let me motivate it by one example. I'll go back to it at the end. When we look at these images, we know what's happening. We know what the action here is. All the people are performing the same action. They are all drinking, even drinking from a bottle, right? But the images as images are very different. If you look at each image, if you stored one image and you try to recognize another image based on the first one, it will be difficult. The variability here can be huge. But if you focus on where the action really takes place, the most informative part, where most of the answer is already given to you of what's happening here, which is where the bottle is docked into the mouth, you can see that now these diverse images become virtually almost a copy of one another, almost the same. So if you manage to understand this and extract the most informative part, although it's limited and so on, the variability will be much, much reduced. The variability here is much, much reduced compared to the variability that you have in the entire image. So most of the other stuff is much less relevant, but this is where the information is concentrated. And in the limited, restricted configuration, recognition will be much easier, and will generalize much better from one situation to another because of this principle of highly-reduced variability in the delimited image. So we became interested. As you see, it's useful. But also-- to deal with small images and still recognize the limited images, you'll see, there are some very challenging issues. And I want to discuss it a little bit, and then also discuss what it's good for a little bit more. I will show you some human studies. What we wanted to see is what are the minimal images that people can still recognize. We examined some computational models, and I will give you-- I will not keep the secret.
It turns out that well-performing current schemes, including deep networks, cannot deal well with such minimal images. And, from this, I want to discuss some implications in terms of representations in our system, brain processing, and things like this. And quite a number of people have been involved in this. Here are the names. Some of them are at the Weizmann Institute in Israel, and a few that-- Leyla is here. Leyla Isik, she is here in the summer school, and a student, Yena Han, at MIT doing some brain imaging on this, which I will mention very briefly. So I'll start with the human study. We are looking for minimal atomic things in recognition, and the experiment goes like this. You show a subject an image and ask them to recognize it, just produce a label. So this is a dog. If they say a dog, they recognize it. And if they recognize it correctly, we generate five descendants from this initial image. If this image was, say, 50 by 50 pixels-- and I'll tell you about pixels in a minute. But say it's 50 by 50 pixels. We make it somewhat smaller. We reduce it, because it's still not minimal. And we reduce it in five ways. We either crop it at one of the four corners to create, say, a 48 by 48 image, by taking two pixels off this corner here, or this corner here, and so on and so forth, to generate descendants. And we generate-- we also take the full image. Keep it as is. We do not crop it. We just reduce the resolution. So we resample it so some details start to become lost. Instead of 50 by 50 pixels, it's also a full image but 48 by 48. And then we give each one of these images-- now we have five. We give each one-- this is beginning to expand as a tree. Each one of the five is given again to a subject. If they recognize it, again five descendants are being generated, and we explore the entire tree until we find all the sub-images which are minimal and still recognizable in the original configuration. Now, this is challenging psychophysically in terms of the number of subjects, because we use a subject only once. If you show a subject this and he recognizes it, and you then show the same subject a reduced image, he will recognize the image based on his previous exposure. So you do not want to use him again. You don't use him again, and you show the other images to a new subject. And this requires a large number of subjects. So 15,000 subjects participated in this experiment, online via Mechanical Turk, together with some laboratory controls to see that they are doing the right thing, and to see how it compares with the same experiment done under laboratory conditions, and so on. So the way we define the minimal image for recognition is in this tree. Here is an image. This image is recognizable. And then we create the five descendants, and none of the descendants is recognizable. So this is recognizable. Nothing here is recognizable. So it's minimal, because you can no longer reduce it, either by resolution going here or by reducing the size. Any manipulation like this will make it unrecognizable. Technically, when in my measurements I use numbers saying the image is 50 pixels or 35 pixels, that's actually well-defined. I mean not the pixels on the screen. You can take the image and make it bigger or smaller on the screen. But the number of sampling points in the image is well-defined. When you give me an image, a particular image, I can tell you how many sample points you need in order to capture this image.
Technically, for those of you who know it, if you do a Fourier transform and take twice the cutoff frequency-- the highest frequency in the Fourier spectrum-- this is, by the sampling theorem of Shannon, the number of points you need in order to capture the image. So when I said that the image was 35 pixels, I don't really care. You can make it somewhat smaller or larger on the screen by interpolation. It doesn't change the information content. It's a well-defined notion mathematically how many points-- discrete points, sampling points-- are in these images. So a very interesting thing that we found when we found these minimal images is that there is a sharp transition when you get to the level of the minimal images. So you go down and you recognize it. And then there is a sharp transition where it suddenly becomes unrecognizable, basically, to the large majority of people. So it can change a little bit, and I'll show you some examples for you to try, to see what these minimal images look like at the recognizable level and at the unrecognizable level. This is the recognizable level. This is the unrecognizable level. So to show it to you as examples here, I will show you first the unrecognizable one, the one which people find, on average, more difficult to recognize. And if you recognize it, raise your hand. Don't say what you see, because this will influence other people. Just raise your hand if you recognize the image. And then I'll show you the more recognizable one, and let's see if more hands show up, if the distinction between the recognizable and unrecognizable holds here. I'll just show you a couple of examples. OK, so this is the one which is supposed to be difficult to recognize. If you see what it is, if you know what's the object, raise your hand. OK, good. OK, don't say what it is. We have two. Let's see here. OK, certainly more hands. What do you see? What do you think? AUDIENCE: Should I say it? SHIMON ULLMAN: OK. Now you can say it because-- AUDIENCE: A horse. SHIMON ULLMAN: A horse. Right. So let me show them side by side. So you see that it's very difficult to recognize the unrecognizable one-- and here, you can see the statistics. This is recognized by 93% of the subjects. 30 subjects saw each one of these images. 93% recognized this. 3% recognized this. And you look at the images and you see that they are very similar images, and it drops from 90% to 3%. So you can see the two images, and you can see the similarity, and you can see the large drop. This is part of the entire tree which is being explored. This is the father. This is the recognized one, the minimal image. And you can see that even reducing the resolution, which is really not a big manipulation, produces a drop in performance. And you can see all the-- so we used 50% as our criterion. So the parents should be recognized above that; these should be recognized below. But typically the jump is very sharp. Let's try two more or something just for fun. If you can recognize it, raise your hand. OK. Nobody, just for the record. OK. Look around. You can see many hands. What do you see? AUDIENCE: A boat. SHIMON ULLMAN: A boat. Right. So you can see the two images. So 80% on this. 0% here. And you can see that what's really missing here is the tip here. And, clearly, this tip is-- there are many contours in this image, but this particular sharp corner makes an enormous difference, and it goes from 80% to 0%. OK, let me skip. Just one more. OK, let me skip this. This is somewhat easier. OK.
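Two small sketches of the machinery in this part of the experiment-- the five-descendant step of the search tree, and the Shannon sample count from the Fourier cutoff. Both are illustrative reconstructions from the description above, not the lab's code; the energy threshold in the second function is an arbitrary assumption:

```python
import numpy as np
from PIL import Image

def descendants(img: Image.Image, step: int = 2) -> list:
    """Five descendants: four corner crops plus one lower-resolution copy."""
    w, h = img.size
    corners = [
        img.crop((0, 0, w - step, h - step)),    # window anchored top-left
        img.crop((step, 0, w, h - step)),        # anchored top-right
        img.crop((0, step, w - step, h)),        # anchored bottom-left
        img.crop((step, step, w, h)),            # anchored bottom-right
    ]
    low_res = img.resize((w - step, h - step))   # full frame, fewer samples
    return corners + [low_res]

def effective_samples(img: np.ndarray, thresh: float = 1e-3) -> int:
    """Samples per axis by Shannon's theorem: about twice the highest
    significant spatial frequency (in cycles per image side)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec /= spec.max()
    ys, xs = np.nonzero(spec > thresh)           # significant coefficients
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    f_cut = np.hypot(ys - cy, xs - cx).max()     # highest radial frequency
    return int(np.ceil(2 * f_cut))

tree_level = descendants(Image.new("L", (50, 50)))
print([d.size for d in tree_level])              # five 48 x 48 descendants
```

Because the count depends only on the spectrum, blowing the image up on the screen by interpolation leaves `effective_samples` unchanged-- which is the well-definedness point made above.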
This is some-- OK, at least one, two, and three. OK. How about this one? Everybody, I think. Or maybe we are missing one. So, again, you can see that the difference-- if you look at the two, there is a difference, and it's this thing here. But it's not a very big part of the image. It's crucial, you know. You have to be trained on this. It's part of your representation. It's important. You go from almost 90% to 15%, roughly. So it's important. So you can see that the drop is typically very, very sharp. And the sharp transition is also interesting in the sense that, if it drops from, like the horse, from 90% to 3%, or even here, it also says that we all carry around in our heads a very similar representation. Because if each one of us, based on our history and visual experience, were less or more sensitive to various features, then we would not find this sharp transition. Different people would lose it at different points in the manipulation. But at 90%, 90% of the people, roughly everybody, recognizes it. You remove a feature and it goes to 3%. So everybody is using the same, or a very similar, representation, which I find somewhat surprising, at least for some of these images. We don't all have the same kind of experience with horses, or with battleships, or things like that, and still the representation is very strikingly similar across individuals. The experiment was done on 10 different objects. These are the initial objects. I showed you the object at the beginning of the hierarchy, and then you start the manipulation to discover all the minimal images inside them. And here, so we ended up with a very nice catalog. We have a database of all the minimal images in all of these 10 images, and all of the children, the unrecognizable ones. So, in terms of modeling and in terms of exploring visual features and what is necessary in order to recognize, and so on, there is a very rich data set here of all the minimal images in all of these 10 images. Here are some more pairs of recognizable and unrecognizable. We already saw this in principle, but just to show some-- in some cases, it's pretty clear what may be going on. For example, this is horse legs, the front legs of the horse. This seems to be important. You can see that very often it's a tremendously small-- in this fly image, very small differences, very hard to pinpoint. And in this glasses image, the eyeglasses, something here is missing a little bit. But very small things in a very reliable way cause this dramatic change. As was mentioned here-- somebody mentioned the inflection point-- you can manipulate psychophysically a bit more. For example, here, this was another version of a minimal image. It was cropped at two locations. You can crop only the left side, or you can crop only the bottom side, and you can try to see what makes a difference. So you can really zoom in on the critical features. In terms of the number of pixels, the impression is that it's surprisingly small. So I guess you can recognize that this is an eagle. This is an airplane. And the number of pixels-- those of you who know vision, your retina has 120 million pixels. The fovea, which is the area of very high acuity, is 2 degrees. It's about 250 by 250, 250 by 200 pixels. This is the area at the center, an area of high acuity. But you can recognize things with, I don't know, 15, 20 pixels. It's 1/10 of your fovea. It's tiny, tiny.
You can make it larger, but in terms of how much visual information, I find it surprising that you need very, very little. It's also interesting that it's very useful, in the sense that it's very redundant. If you have the capacity, if you have a visual system that can recognize individually each one of these minimal images-- and in fact they can be recognized on their own-- then a full image like this contains a high number of partially overlapping minimal images. Some of them are large. You can see each one of these frames, colored frames, is a minimal image, shown not necessarily at the right resolution. You can reduce the resolution of things. But you can see that some images are essentially low-resolution representations of the entire object, like almost the entire eagle. But some of them just contain something relatively small around the head and the eye. For the eye region, you can see that you can get a low-resolution, again, version of almost everything. But just the corner of the eye and things like that are enough. We find, in general, it seems that for things that are related to humans, you have a large number of these minimal images. So they provide a sensitive tool to compare representations, to see what's missing in the sub-image which made the image become unrecognizable. So sometimes these are called minimal recognizable configurations. We call them configurations and not images. Not parts. Not objects, because they are not objects. And not parts because, as we saw in the examples, they do not have to be well-delineated parts. They are more like local configurations. But, anyway, minimal images. The next thing that we did is, we were wondering about this kind of behavior-- the ability to recognize these images from such minimal information poses an interesting challenge, or an interesting test, of a recognition system, because you really have to extract and use all the available information. By definition, this is minimal. If you do not use all the information that's in this minimal image, then you don't have the minimal information. You have less than that and you will fail. So a system that is not good enough will fail on these minimal images, and the ability to recognize them means that you really can suck all the relevant information out. So we were wondering what will happen if we show them to various computational algorithms that performed well on full images. What will happen when you challenge them with things which are, by nature, designed to be non-redundant? So here is what I will do. It's not a computer vision school. I will not go too much through the details of the computational schemes, just show you what was happening. And the bottom line is that they are not doing a good job. Two things happen. First of all, when you train a computational system, you do not see the same drop that you see here, where it recognizes one and doesn't recognize the other. You don't have a drop in recognition. This sort of phase transition that characterizes the human visual system is not reproduced in any of the current recognition systems, including deep networks and any other ones. And, secondly, they are not very good at recognizing them. Regardless of the gap, whether there is a sharp transition or not, they do not get good recognition results on these minimal images. They do not suck out all the necessary information. So in the full images, it's like we had an image of a side view of a plane. So we are training on airplanes. You can think of a deep network.
We actually tried a whole range of good classifiers. And in all of these good classifiers-- those of you who are not in vision probably got enough at the beginning of this summer school to have a feeling for what a classifier is in computer vision. It's a system, an algorithm, a scheme, that you give training images. You don't have to specify, you don't tell it what to look for. You just give it lots of images and tell it all these are of the same class. And then it calibrates itself, and adjusts parameters, and so on. And then you give it new images, and the system is supposed to tell you if it's a new member of the same class or not. So, in this case, we trained the systems, giving them full side views of an airplane. But then we gave them just the tails. Compared to random pictures taken from known images, the question is, can they reliably tell you that this is a tail of an airplane, part of the previous class? Or would they be confused and give an even higher score to things which do not come from an airplane at all? So we started this when deep networks were still not the leaders, and we had some other things, like DPM, and including HMAX, which is a very good model of the human visual system and performs very well. And so we included it as well, and a deep network as well. This is the HMAX. This is convolutional neural networks. You probably got the idea. It's just worth pointing out-- I find it interesting in the computer vision community that you have Olympic games every year. It's something which is very structured and very competitive, and very nice in this regard, that there is the Pascal challenge, and the ImageNet challenge, and it's well run. And people who think that they have a better algorithm than others can submit an entry, can submit an algorithm. Everybody gets training images that are distributed publicly, but there are secret images used for testing. And you can train your algorithm on the available data. Everybody uses the same data. And then you submit your algorithm, and the algorithm is run by the central committee on the test images. And the results are published, and everybody knows who's number one, who's number two. You have the gold medal and the silver medal. It's very competitive, and in some sense it's doing very good things. It's sort of driving the performance up. It also has some negative effects, I think, on the way things are being done. One negative is it's very difficult to come up with an entirely new scheme which explores a completely new idea. Because, initially, before you fine tune it, it will not be at the level of the high-performing winners, and until it establishes itself as a winner, it will not get credit. So it sort of becomes a little bit conservative in this regard, which is the unfortunate part. So, as I told you, and I will not go into great detail, the two basic outcomes are that the gap between the recognizable and unrecognizable-- these two bars are the gap for human vision. That's the whole group of horse images. The parents are highly recognizable. The children, the offspring, are not recognizable. Very large drop. This drop is not recaptured in any of the models. If you have a deep network, or you have one of these classifiers, what is recognized and not recognized depends on the threshold. You can decide that. It gives you a number, and it says, I have such and such confidence that this belongs to the class. So what we did here is that we tried to match.
We had a class of images, and people recognized them at 80% recognition. So we put the threshold in the artificial system, the computer vision system, at such a level that it correctly recognized 80% of the minimal images. So you match them. And then we looked at how many of the sub-images passed the threshold. And you get-- this is for the deep network-- that, instead of a gap, you actually got an anti-gap. It actually recognized a few more. But this should not confuse you. It does not mean that the deep network did better than humans. It actually did much worse than humans, although the bars here are higher. And the reason is the following. Even in a very bad classifier, you can always get 80% recognition by just lowering the threshold, and then 80% of the class examples will exceed the threshold. The question is how many garbage images, non-class images, will also pass the threshold at the same time. If you get 80% of the class, but also lots and lots and lots of false positives-- non-class images also saying I'm an airplane-- then that's bad performance. So just these high bars do not say anything. The actual recognition levels were very low. We can see here for deep networks that this high bar is the performance on new airplanes. So for airplanes it did very well. But the percent correct that it got on minimal images was 3% or 4%-- very, very, very low. So it did very bad recognition on the minimal images. So recognition of minimal images does not emerge by training any of the existing models that I know in the world, including deep network models. Now, the second test, as was asked here, was another large test. All of these things, actually, took a lot of effort and were time-consuming. Because now we have this. This, in the original test, was a minimal image. I don't know if this one was a minimal image. Then we collected a range of tails of planes like this, for many other airplanes. And we ran another Turk experiment, which was pretty large, because we wanted to verify that each one of these patches that we added to our test, and were going to use for testing recognition, was indeed a minimal image for recognition. So each one of these patches, and there were 60 of those, we ran psychophysically. And we saw that it's recognizable, and if you make it small, if you try to reduce it, it's unrecognizable. So each one of these is individually also a minimal image. So here we did training and testing on-- these are some examples of this. So here are various images of a fly, and each one of them was tested on 30 subjects on the Mechanical Turk. And the results are that, in terms of correct recognition, there is a substantial improvement, from 3% to 60%. But 60% is not very large. People recognized them-- I should say you should look at the false alarms; the number of errors I will show you later. The point is that, even after training on minimal images, the performance of the deep network and all the other models on the minimal images is far worse than human recognition levels, human performance, on the same images. So it's not just that the gap is not reproduced. Even training with minimal images, the performance is not reproduced. The errors, or the accuracy, are far worse in all the models, including the deep network, compared to human vision. So these systems do not do it. It remains to be-- you can always ask, what happens if I train it with 100,000 images and I add and add more and more examples?
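Returning for a moment to the threshold-matching procedure above, here is a simulated sketch of it: fix the classifier's threshold so that it accepts the same 80% of minimal images that humans recognize, then count how many non-class patches pass that same threshold. The score distributions are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
class_scores = rng.normal(0.6, 0.3, 1000)     # scores on true minimal images
nonclass_scores = rng.normal(0.4, 0.3, 5000)  # scores on random non-class patches

human_rate = 0.80
# Threshold at the 20th percentile of class scores: 80% of the class passes.
thresh = np.quantile(class_scores, 1 - human_rate)
false_pos = (nonclass_scores > thresh).mean()
print(f"threshold={thresh:.2f}, hit rate={human_rate:.0%}, "
      f"false-positive rate={false_pos:.1%}")
```

This is why the high bars alone say nothing: any classifier can hit 80% by lowering the threshold; the false-positive rate at that matched threshold is what reveals the bad performance.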
This we couldn't do-- this becomes larger and larger. But with the experiments we've done, which are quite extensive, it does not begin to approach human accuracy. Humans are much better. And I'll show you. I think it's not just a competition of who does better. I think there is something deeper there. And that's where I want to go next. Let me skip some. These are the error comparisons. And you can see, just as we saw, in a lot of different examples, 0 errors for humans, 17% error in the deep networks, and so on. So those are big differences. OK. A related thing, which I think gets to the heart of what's going on-- something humans can do with these minimal images and models at the moment cannot-- is that we not only recognize these images and say this is a man, this is an eagle, this is a horse. Once we recognize it, although the image itself is sort of atomic, in the sense that you reduce it and recognition goes away, we can recognize sort of subatomic particles. We can recognize things inside it. So if this is a person, we asked again, in the psychophysical test, for people to tell us what they see inside the image, using various methodologies, which I'll not go into. But people recognize this. This is a person in an Italian suit, for those of you who could not recognize it. But once people recognize it, they say, this is the neck of the person. This is the tie. This is the knot of the tie. This is part of the jacket, and so on and so forth. I mean, they recognize a whole lot of details, semantic internal details, inside. If they see this is the horse-- the contrast is low, but they see the ear, and the other ear, and the eye, and the mouth. But if you reduce the image, they lose the recognition completely. Once they recognize it, they recognize a whole lot of structure inside. And I think that the structure, by itself, is the more interesting part, because, really, we don't want to see a horse. We don't want to see a car. We want to know where the car door is, where the knob is. We want to recognize all the internal details. But the ability to recognize all of these internal details automatically also helps you with improving the recognition and rejecting false detections. Because these are images the deep network thought were good images of a man in a suit. But once you dive inside and you say, where exactly is the neck, and where exactly is the tie, and is it the right structure that I expect? The answer is that it's not quite appropriate. And you can use that. So this internal interpretation is, first of all, the more important goal of vision. But, in addition, once you do it, you can reject things that appeared, based on the coarse structure, to be correct, and in this way you can get the correct recognition. And, for this reason, my prediction is that it will be very difficult to get it with current deep networks, because what you'd need is not only to get the label out, but to be able to dive down and get the correct interpretation, and inspect it. And it has some properties. The tie, the knot in the tie, is slightly wider than the part under it, and so on. So you have to check for the-- you know these things and you check for them. And if you don't do it, then the recognition will remain limited. Now, when you look at it and you say, OK, and we try to develop an algorithm which will actually dive in and do the internal interpretation, and do it correctly and reject false alarms, and so on-- it turns out that this is an interesting business.
You have to be very accurate, and some of the properties and relations that you need to extract are very specific to certain categories and are very precise. For example, this was selected by the deep network as a very good example of a horse head. And, basically, it does have the right shape. But, for example, people reject it. We asked people who did not accept it as a horse head, and they said, for example, that these lines are too straight. It looks like a man-made part rather than a part of a real animal. That was a recurring answer, for example. But deviation-- how straight is it, and so on-- this is a bit tricky. And also it didn't have quite the ear that you do expect here. So we think that the kind of features that you need in order to do this internal interpretation of interest depend on relatively complicated properties and relations that you don't want to spend time and effort computing in a bottom-up way all over the entire visual field-- whether two contours meet smoothly or in a corner, or whether something is really straight or only semi-straight. I mean, to do all of these computations, my hunch is, to do all of these complicated things, you need them only in a small-- you need some specific ones for some specific classes at some specific locations. So the right way to do this kind of computation, the right architecture, seems to me a combination of bottom-up and top-down processing. And we know that, in the visual system-- this is a diagram of the visual system, which is supposed to show that we have lots of connections going up, but also a lot of connections going down. And the suggestion that I would like to put up-- and I think it's what's happening here-- is that we have something like a deep network that does an initial generic classification. It's bottom-up. It has some kind of-- was trained on many categories. It is not sensitive to all of these small and informative things that you need for internal classification. And it proposes a lot of-- it gives you initial recognition, which is OK. It's especially OK when you have a complete object and not something challenging like a minimal image. Because you may be wrong on a couple of the minimal images, but you have 20 of them in each object. So if two are wrong, it's not too bad. So, under many circumstances, you will be OK in terms of general recognition. But what this does is-- it doesn't complete the process, but it sort of triggers the application of something which is much more class-specific, that says, oh, it looks like a horse. Let's check if it has, or let's now complete the interpretation. It's not just a validation, but you really want to know where is the eye, where is the ear, where is the mouth, and so on. You want to know maybe if the mouth is open or closed. You want to feed the horse. You want to pet the horse. I mean, when you interact with objects, all of these things are important. So you continue your understanding of the visual scene. But this is not this generic bottom-up recognition; you are looking for specific structures that you learned about when you interacted with these objects before. And then you test specific things. Where is the eye? There should be a round thing roughly here, and so on and so forth. So these are more extended routines that you're applying to the detected region, sort of directed from above, and you know what kind of features to look for at different locations within the minimal image.
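Schematically, the proposed architecture might be sketched like this-- a generic bottom-up proposal followed by class-specific top-down checks. Everything here is a stand-in: the part-checking routines are hypothetical placeholders for the precise, class-specific measurements described above, not an actual implementation:

```python
def bottom_up_propose(image):
    """Stand-in for the generic feed-forward pass (e.g. a deep network)
    that proposes a class label with some confidence."""
    return "horse", 0.9

def looks_round(image, region):
    # Stand-in: a real routine would test for a roundish blob (an eye)
    # at the expected location within the detected region.
    return True

def contours_meet_smoothly(image, region):
    # Stand-in: a real routine would measure whether contours meet
    # smoothly or in a corner, how straight they are, and so on.
    return True

# Class-specific top-down routines: what to verify, and where, once a
# class has been proposed. Only computed for the proposed class.
PART_CHECKS = {
    "horse": [lambda im: looks_round(im, region="eye"),
              lambda im: contours_meet_smoothly(im, region="ear")],
}

def recognize(image):
    label, confidence = bottom_up_propose(image)
    # Top-down pass: complete the internal interpretation and reject
    # proposals whose expected internal structure is not there.
    if confidence > 0.5 and all(check(image) for check in PART_CHECKS.get(label, [])):
        return label
    return None

print(recognize(image=None))  # 'horse' only if the part checks pass
```

The design point is that the expensive, precise measurements run only at specific locations for the specific proposed class, rather than bottom-up over the whole visual field.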
And this kind of ongoing, continuing interpretation is not just internal to what you succeeded in recognizing, but sort of spreads over the entire image. For example, if you look at this image, what do you see here in this image here? Anyone want to suggest what we see here? AUDIENCE: A face, maybe. SHIMON ULLMAN: Sorry? AUDIENCE: A face. AUDIENCE: A woman's face. AUDIENCE: A woman's face. SHIMON ULLMAN: A woman's face. What is the woman doing? AUDIENCE: Drinking. SHIMON ULLMAN: Drinking. Right. So it's a woman drinking, for those of you who managed to recognize it. This is the woman, and she's drinking from a cup. Now, we tested it. The woman is actually a minimal image. If you remove the cup and show this image, people recognize it at a relatively high rate. Nobody recognizes that this is a glass when you just show the glass on its own. We think that the actual recognition process in your head starts with recognizing what is recognizable on its own, sort of the minimal configuration where you know what it is. You don't need help. You don't need context. You don't need anything. This is a woman. This is the mouth. And you can continue from there, in the same way that you can recognize internally that this is the nose, and the nostril, and this is the upper lip and lower lip. In the same way that you can guide your interpretation process internally, you can also say that the thing which is docked at her mouth is a glass. Some results from-- this has been implemented by Guy Ben-Yosef, who is also now a part of CBMM. And this internal interpretation begins to work interestingly well. We started to do at MIT some MEG studies, because if this is correct-- if the interpretation process, the correct recognition of minimal images and the following full interpretation process, requires for its completion the triggering of top-down processing-- then we could see it using the right kind of imaging. In this case, we started to do minimal images in MEG imaging. MEG is-- I think you-- was MEG already mentioned here in any of the talks? So MEG, as you know, doesn't have very good spatial resolution. It's not like fMRI, but it has very good temporal resolution. And what Leyla-- it was led by Leyla Isik. And what we've done here is trying to let subjects in the MEG recognize minimal images. And we took the electrodes from the MEG and trained a decoder. The decoder is trained to say whether or not the image contains, say, an eagle in this case. And we had various images. And the question is-- we can follow the performance of the computational decoder that tries to say, now the pattern of electrodes allows me to deduce that there is an eagle in the image. And we see that the decoder is successful, you can see here, at about 400 milliseconds. This is late for vision. The initial bottom-up recognition is more like 150, or something like this. And we also get the same results when we do psychophysics: with normal images you can get good recognition after, say, an exposure of 100 milliseconds followed by a mask-- correct recognition at the human level that we get. With minimal images, you have to give enough time, which we suspect is enough time to allow the application of the top-down interpretation within. And if you don't give enough time, then people degenerate and become deep networks, and you get the same kind of performance, roughly.
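A sketch of the time-resolved decoding idea: train and test a classifier on the sensor pattern at each time point, and watch when the category becomes decodable. The trial counts, 306 sensors, and time bins here are illustrative assumptions, not the actual recordings:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 100, 306, 40          # e.g. coarse time bins
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trial x sensor x time
y = rng.integers(0, 2, n_trials)                     # eagle present vs not

# Decode separately at every time point; a late rise in accuracy
# (e.g. around 400 ms) would point to top-down processing.
acc = [cross_val_score(LinearSVC(), X[:, :, t], y, cv=5).mean()
       for t in range(n_times)]
print(f"peak decoding at time bin {int(np.argmax(acc))}")
```

The timing is the whole point here: decodability that appears only well after the feed-forward sweep (~150 ms) is the signature being looked for.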
But this is all still unpublished and still running, and we need more subjects. And all of this is pointing in the right direction, and providing support for top-down processing for this. And this, by the way, is interesting methodologically, because it's very difficult with real images. They are so rich and you get so much information already on the way up, and because of these redundancies, even if you make 20% error, it doesn't really matter, because you have redundancy-- you have multiple sufficient minimal images within any object, and so on. So it's very difficult to tease out the effect of where exactly the top-down information starts. Where do you need it? Where exactly do you fail if you don't have it? So we think you need it for this internal interpretation and for the correct recognition of minimal images. And here you can start seeing good signals in the MEG. It provides you sort of a tool that is pretty unique and allows you to do these things. So let me add something very informal about where I think this is going. I think that when you look at difficult images, like the action recognition that we discussed before, many things that we do depend not on a sort of coarse label of there is a person there, or there is an airplane, or there is a dog. Really, things depend on the fine details of the internal interpretation. And so if you turn off what I think is the top-down part of class-specific top-down processes, I think that many of these fine distinctions that we make all the time-- and it's what vision is all about; vision is not about giving coarse categories-- will go away. And so these things will become more and more an important part of vision. Let me look at this variability in action recognition, and let me show you some specific examples. This is something that confuses current classifiers: in most of them, it seems that the person is drinking. Because there is a person, there is a bottle, and the bottle is close to the mouth. So the person is drinking, at this rough level of description. But, obviously, here this person is drinking, and this person is pouring, right? Something very-- is this person drinking at the moment? Yes or no? AUDIENCE: No. SHIMON ULLMAN: No. Why not? She's holding a cup, and it's not far, and maybe on the way to the mouth. We know that she's not drinking, right? But why exactly not? And, again, this is something that is picked up as drinking by many recognition systems. But something is wrong here. All of these things, these are different objects and different actions that the people are performing. This is drinking from a straw. This is smoking. And this is brushing their teeth. But this depends on going to the right location and deciding exactly what's happening there. It's the kind of thing that we do all the time. Some more challenges. These are just sort of informal challenges to show you how we can deal with fine interpretation of details of interest in the image. What is this arrow pointing at? AUDIENCE: Bottle SHIMON ULLMAN: Sorry? AUDIENCE: Bottle. SHIMON ULLMAN: Yeah. But above the bottle, there is something else there. AUDIENCE: Fingers. SHIMON ULLMAN: Sorry? AUDIENCE: Finger. SHIMON ULLMAN: Fingers, right. Let's see. Just playing this time. What is this arrow pointing at? AUDIENCE: Zipper. SHIMON ULLMAN: Zipper. Let's see. Here are two challenging things. Here are two arrows. What is this one pointing at? AUDIENCE: Cup? AUDIENCE: Tea. AUDIENCE: Cup. SHIMON ULLMAN: All right.
Next to the cup, right, is also-- this is really challenging. Let's see if some folks can get it. What is this one pointing at? AUDIENCE: A tray? AUDIENCE: A tray. SHIMON ULLMAN: Tray. So the tray, think about it: it's this, but you match it with this thing here in order to make sure, to know that it's a tray. It's not something that will be easily picked up. I mean, I'm looking for difficult things which are a little bit challenging. And you say, ah, I can get it. But this level of detail, interpreting the fine details in images in a top-down fashion, happens all the time. Is this person smoking? Of course not, and we are not fooled by it, and we immediately zoom in on the right things. And, really, all the information is here at the end of the-- and so on, and so on, and so forth. I mean, we were looking at dealing visually with social interactions, understanding the social interactions between agents. And, again, it's very difficult to do correctly, and it depends on subtle things. I mean, you can get something roughly OK. For example, is this sort of an intimate hug, or is this just a cordial hug of people who are not-- we know exactly what's going on, right? And it turns out that the features are not that easy to get. This was picked up incorrectly by something that we designed for people hugging. And it's not very far from people hugging, but it doesn't fool us, right? They are not really hugging. With social interactions, we can read interactions even between non-human agents. I mean, this interaction, is this a threatening interaction or a friendly interaction? What do you think? Yeah. Correct. I think so too. Anyway, these are all things that we can do, and I think that vision is about this. It's not about looking at this room and saying that this is a computer and this is a chair. It's about understanding the situation and making fine judgments, and interacting with objects. And, in fact, as part of what we're doing at CBMM, we are looking at the problem of asking questions about images. So we want a system that you can give an image and a question, and then we want the system to be able to process the image in such a way that it will give you a good answer to the question. This is interesting because it means that it's not just a generic pipeline-- running the image through a fixed sequence of operations. But, depending on what you're interested in, the whole visual process should be directed in a particular way to produce just the relevant answer. With students, we looked at a set of some 600 questions that we collected from people on Mechanical Turk. We said, imagine some questions about these images; ask some questions about these images. And they came up with some questions. We looked at them, and an informal, initial observation is that to answer most of these questions that people invented to ask about images, you needed things which depended on precise internal interpretation of the details. So it's things that come up all the time. You have to dive into the image and analyze the subtle cues that will tell you that these are not hugging, and this is not threatening, and this is not an intimate hug, and so on and so forth. And this is what we are after-- the whole story of the minimal images and the internal interpretation.
The real goal eventually is to be able to identify the visual features and structures which are important for this, and to think about the automatic learning of how to extract the internal structure that will support the interpretation of all these interesting and meaningful aspects of images that, at the moment, we do not have. OK, let me skip this. OK, I think I've said all of these conclusions already in the final comments, so let me stop here. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_12_Gabriel_Kreiman_Computational_Roles_of_Neural_Feedback.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. GABRIEL KREIMAN: What I'd like to do today is give a very brief introduction to neural circuits, why we study them, how we study them, and the possibilities that come out of understanding biological codes, and trying to translate those ideas into computational codes. Then I will be a bit more specific, and discuss some initial attempts at studying the computational role of feedback signals. And then I'll switch gears and talk for a few minutes about a couple of things that are not necessarily related to work that we've done, but that I'm particularly excited about in the context of open questions, challenges, and opportunities, and what I think will happen over the next several years in the field. In the hope of inspiring several of you to actually solve some of these open questions in the field. So one of the reasons why I'm very excited about studying biology and studying brains is that our brains are the product of millions of years of evolution. And through evolution, we have discovered how to do things that are interesting, fast, efficient. And so if we can understand the biological code, if we can understand the machinery by which we do all of these amazing feats, then in principle, we should be able to take some of these biological codes, and write computer code that will do all of those things in similar ways. In the same way that we can write algorithms to compute the square root of 2, there could be algorithms that dictate how we see, how we can recognize objects, how we can recognize auditory events. In short, the answer to all of these Turing questions, in some sense, is hidden somewhere here inside our brain. So the question is, how can we listen to neurons and circuits, decode their activity, and maybe even write in information in the brain, and then try to translate all of these ideas into computational codes. So there are a lot of fascinating properties that biological codes offer. Needless to say, we're not quite there yet in terms of computers and robots. Our own hardware and software work for many decades. I think it's very unlikely that your amazing iPhone 6 or 5 or 7, whatever it is, will last four, five, six, seven, eight, nine decades. None of our computers will last that long. Our hardware does. There's amazing parallel computation going on in our brains. This is quite distinct from the way we think about algorithms and computation in other domains now. Our brains have a reprogrammable architecture. The same chunk of tissue can be used for several different purposes. Through learning and through our experiences, we can modify those architectures. A thing that has been quite interesting, and that maybe we'll come back to, is the notion of being able to do single-shot learning, as opposed to some machine learning algorithms that require lots and lots of data to train. We can easily discover structure in data. The notion of fault tolerance and robustness to transformations is an essential one. Robustness is arguably a fundamental property of biology and one that has been very, very hard to implement in computational circuitry.
And for engineers, the whole issue of how to have different systems integrate information, and interact with each other, has been and continues to be a fundamental challenge. And our brains do that all the time. We're walking down the street, and we can integrate visual information with auditory information, with our targets, our plans, what we're interested in doing, with social interactions, and so on. So why do we want to study neural circuits? I think we are in the golden era right now, because we can begin to explore the answers to some of these Turing questions in brains at the biological level. So we can study high level cognitive phenomena at the level of neurons, and circuits of neurons. And I'll give you a few examples of that later on. More recently, and I'll come back to this towards the end, we've had the opportunity to begin to manipulate, and disrupt, and interact with neural circuits at unprecedented resolution. So we can begin to turn on and off specific subsets of neurons. And that has tremendously accelerated our ability to test theories at the neural level. And then again, the notion being that empirical findings can be translated into computational algorithms-- that is, if we really understand how biology solves the problem, in principle, we should be able to write mathematical equations, and then write code that mimics some of those computations. And some examples of that we talk about in the visual system, in my presentation, but also in Jim DiCarlo's presentation. This is just advertising for a couple of books that I find interesting and relevant in computational neuroscience. I'm not going to have time to do any justice to the entire field of computational neuroscience at all. So all these slides will be in Dropbox, if anyone wants to learn more about computational neuroscience. These are tremendous books. Larry Abbott is the author of this one, and he'll be talking tonight. So how do we study biological circuitry? And I realize that this is deja vu and very well known for many of you. But in general, we have a variety of techniques to probe the function of brain circuits. And this is showing the temporal resolution and the spatial resolution of different techniques used to study neural circuits. All the way from techniques that have limited spatial and temporal resolution, such as PET and fMRI-- techniques that have very high temporal resolution, but relatively poor spatial resolution-- all the way to techniques that allow us to interrogate the function of individual channels within neurons. So most of what I'm going to talk about today is what we refer to as the neural circuit level, somewhere in between single neurons and ensembles of neurons recorded through the local field potential, which gives us the resolution of milliseconds, where we think a lot of the computations in the cortex are happening, and where we think we can begin to elucidate how neurons interact with each other. So to start from the very beginning, we need to understand what a neuron does. And again, many of you are quite familiar with this. But the fundamental understanding of what a neuron does is that it integrates information-- it receives information through its dendrites, integrates that information, and decides whether to fire a spike or not. Interestingly, some of the basic intuitions about neuron function were essentially conceived by a Spaniard, Ramón y Cajal. He wanted to be an artist.
His parents told him that he could not become an artist, he had to become a clinician, a medical doctor. So he followed the tradition. He became a medical doctor. But then he said, well, what I really like doing is drawing. And so he bought a microscope, he put it in his kitchen, and he spent a good chunk of his life drawing, essentially. So he would look at neurons, and he would draw their shapes. And that's essentially how neuroscience started. Just from this beautiful and amazing array of drawings of neurons, he conjectured the basic flow of information: the notion that information is integrated through the dendrites, that all of this integration happens in the soma, and that from there, neurons decide whether to fire a spike or not. Nothing more, nothing less. That's essentially the fundamental unit of computation in our brains. How do we think about and model those processes? There's a family of different types of models that people have used to describe what a neuron does. These models differ in terms of their biological accuracy, and their computational complexity. One of the most used ones is perhaps the integrate-and-fire neuron. This is a very simple RC circuit. It basically integrates current, and then through a threshold, the neuron decides when to fire or not to fire a spike. This is essentially treating neurons as point masses. There are people out there who have argued that you need more and more detail. You need to know exactly how many dendrites you have, and the position of each dendrite, and on and on and on and on. The exact resolution at which we should study neural systems is a fundamental open question. We don't know what's the right level of abstraction. There are people who think about brains in the context of blood flow, and millions and millions of neurons averaged together. There are people who think that we actually need to pay attention to the exact details of how every single dendrite integrates information, and so on. For many of us, this is a sufficient level of abstraction: the notion that there's a neuron that can integrate information. So we would like to push this notion that we can think about models with single neurons, and see how far we can go, understanding that we are ignoring a lot of the inner complexity of what's happening inside a neuron itself. So very, very briefly, just to push the notion that this is not rocket science: it's very, very easy to build these integrate-and-fire model simulations. I know many of you do this on a daily basis. This is the equation of the RC circuit. There's current that flows through a capacitance. There's current that flows through the resistance, which, in this RC circuit, we think of as composed of the ion channels in the membranes of the neurons. And this is all there is to it in terms of a lot of the simulations that we use to understand the function of neurons. And again, just to tell you that there's nothing scary or fundamentally difficult about this, here are just a couple of lines in MATLAB that you can take a look at if you've never done this kind of simulation. This is a very simple and perhaps even somewhat wrong simulation of an integrate-and-fire neuron. But just to tell you that it's relatively simple to build models of individual neurons that have these fundamental properties of being able to integrate information, and decide when to fire a spike.
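The MATLAB code on the slide is not reproduced in the transcript, so here is a minimal equivalent sketch in Python of a leaky integrate-and-fire neuron: the RC-circuit equation C dV/dt = -(V - V_rest)/R + I(t), integrated with the Euler method, with a spike and a reset whenever the voltage crosses threshold. All parameter values are illustrative, not taken from the lecture.

    import numpy as np

    # Illustrative parameters (not from the lecture).
    C = 1.0         # membrane capacitance, nF
    R = 10.0        # membrane resistance, MOhm
    V_rest = -70.0  # resting potential, mV
    V_th = -54.0    # spike threshold, mV
    V_reset = -70.0 # reset potential after a spike, mV
    dt = 0.1        # time step, ms
    T = 200.0       # total simulated time, ms

    steps = int(T / dt)
    V = np.full(steps, V_rest)
    I = np.full(steps, 1.8)   # constant injected current, nA
    spike_times = []

    for t in range(1, steps):
        # Euler integration of C dV/dt = -(V - V_rest)/R + I(t)
        dV = (-(V[t - 1] - V_rest) / R + I[t - 1]) / C * dt
        V[t] = V[t - 1] + dV
        if V[t] >= V_th:              # threshold crossing -> spike
            spike_times.append(t * dt)
            V[t] = V_reset            # reset after the spike

    print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms" if spike_times else "no spikes")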
The fundamental questions that we really want to tackle in CBMM have to do with putting together lots of neurons, and understanding the function of circuits. It's not enough to understand individual neurons. We need to understand how they interact together. We want to understand what is there, who's there, what are they doing to whom, and when, and why. We really need to understand the activity of multiple neurons together in the form of circuitry. So just a handful of basic definitions. If we have a circuit like this, where we start connecting multiple neurons together, information flows here in this direction. We refer to the connections between neurons that go in this direction as feed-forward. We refer to the connections that flow in the opposite direction as feedback, and I use the word recurrent connections for the horizontal connections within a particular layer. So this is just to fix the nomenclature for the discussion that will come next, and also today in the afternoon with Jim DiCarlo's presentation. Through a lot of anatomical work, we have begun to elucidate some of the basic connectivity between neurons in the cortex. And this is the primary example that has been cited extremely often of what we understand about the connectivity between different areas in the macaque monkey. We don't have a diagram like this for the human brain. Most of the detailed anatomical work has been done in macaque monkeys. So each of these boxes here represents a brain area, and this encapsulates our understanding of who talks to whom, or which area talks to which other area, in the visual cortex. There are a lot of different parts of cortex that represent visual information. Here at the bottom, we have the retina. Information from the retina flows through to the LGN. From the LGN, information goes to primary visual cortex, sitting right here. And from there, there's a cascade that is largely parallel, and at the same time hierarchical, of a conglomerate of multiple areas that are fundamental in processing visual information. We'll talk about some of these areas next. And we'll also talk about some of these areas today in the afternoon when Jim discusses what are the fundamental computations involved in visual object recognition. One of the fundamental clues as to how we know that this is a particular visual area, and how we know that it is important for vision, has come from anatomical lesions. Mostly in monkeys, but in some cases, in humans as well. So if you make lesions in some of these areas, depending on exactly where you make that lesion, people either become completely blind, or they have a particular scotoma, a particular chunk of the visual field where they cannot see. Or they have higher order types of deficits in terms of visual recognition. As an example, the primary visual cortex was discovered by people who were [INAUDIBLE] studying the trajectory of bullets in soldiers during World War I. They discovered that some of those people had a blind part of their visual field, and that it was topographically organized depending on the particular trajectory of the bullet through their occipital cortex. And that's how we came to think about V1 as fundamental in visual processing. It is not a perfect hierarchy. It's not there is A, B, C, D. Right? For a number of reasons. One is that there are lots of parallel connections. There are lots of different stages that are connected to each other.
And one of the ways to define a hierarchy is by looking at the timing of the responses in different areas. So if you look at the average latency of the response in each of these areas, you'll find that there's an approximate hierarchy. Information gets out of the retina at approximately 50 milliseconds. About 60 or so milliseconds in the LGN, and so on. So it's approximately a 10 millisecond cost per step in terms of the average latency. However, if you start looking at the distribution, you'll see that it's not a strict hierarchy. For example, the early neurons in area V4 may fire before the late neurons in V1. And that shows you that the circuitry is far more complex than just a simple hierarchy. One way to put some order into this seemingly complex and chaotic circuitry, one simplification, is that there are two main pathways. One is the so-called what pathway. The other one is the so-called where pathway. The what pathway essentially is the ventral pathway. It's mostly involved in object recognition, trying to understand what is there. The dorsal pathway, the where pathway, is mostly involved in motion, and being able to detect where objects are, stereo, and so on. Again, this is not a strict division, but it's a pretty good approximation that many of us have used in terms of thinking about the fundamental computations in these areas. Now we often think about these boxes, but of course, there's a huge amount of complexity within each of these boxes. So if we zoom in on one of these areas, we discover that there's a complex hierarchy of computations. There are multiple different layers. The cortex is essentially a six-layer structure. And there are specific rules. People have referred to this as a canonical microcircuitry. There's a specific set of rules in terms of how information flows from one layer to another in each of these cortical structures. To a first approximation, this canonical circuitry is common to most of these areas. The rules about which layer receives information first, and which layers send information to other areas, are more or less constant throughout the cortical circuitry. This doesn't mean that we understand this circuitry well, or what each of these connections is doing. We certainly don't. But these are initial steps to sort of decipher some of this basic biological connectivity that has fundamental computational properties for vision processing. So our lab has been very interested in what we call the first order approximation, or immediate approximation, to visual object recognition: the notion that we can recognize objects very fast, and that this can be explained, essentially, as a bottom-up hierarchical process. Jim DiCarlo is going to talk about this extensively this afternoon, so I'm going to essentially skip that, and jump into more recent work that we've done trying to think about top-down connections. But just let me briefly say why we think that the first pass of visual information can be semi-seriously approximated by this purely bottom-up processing. One is that at the behavioral level, we can recognize objects very, very fast. There's a series of psychophysical experiments that demonstrate that if I show you an object, recognition can happen within about 150 milliseconds or so. We know that the physiological signals underlying visual object recognition also happen very fast.
Within about 100 to 150 milliseconds, we can find neurons that show very selective responses to complex objects, and again, you'll see examples of that this afternoon. The behavior and the physiology have inspired generations of computational models that are purely bottom-up, where there is no recurrency, and that can be quite successful in terms of visual recognition. To a first approximation, the recent excitement with deep convolutional networks can be traced back to some of these ideas, and some of these basic biologically inspired computations that are purely bottom-up. So to summarize-- and I'm not going to give any more details-- we think that the first 100 milliseconds or so of visual processing can be approximated by this purely bottom-up, semi-hierarchical sequence of computations. And this leaves open a fundamental question, which is, why do we have all these massive feedback connections? We know that in cortex, there are actually more recurrent and feedback connections than feed-forward ones. And what I'd like to talk about today is a couple of ideas of what all of those feedback connections may be doing. So this is an anatomical study looking at a lot of the boxes that I showed you before, and showing how many of the connections to any given area come from each of the other areas. For example, if we take just primary visual cortex, this is saying that a good fraction of the connections to primary visual cortex actually come from V2. That is, from the next stage of processing, rather than from V1 itself. All in all, if you quantify, for a given neuron in V1, how many signals are coming from a bottom-up source, that is, from the LGN, versus how many signals are coming from other V1 neurons or from higher visual areas, it turns out that there are more horizontal and top-down projections than bottom-up ones. So what are they doing? If we can approximate the first 100 milliseconds or so of vision so well with bottom-up hierarchies, what are all these feedback signals doing? So this brings me to three examples that I'd like to discuss today of recent work that we've done to take some initial steps in thinking about what these feedback connections could be doing in terms of visual recognition. So I'll start by giving you an example of trying to understand the basic fundamental unit of feedback-- that is, these canonical computations-- by looking at the feedback that happens from V2 to V1 in the visual system. Next, I'm going to give you an example of what happens during visual search, where we also think that feedback signals may be playing a fundamental role, if you have to do a Where's Waldo kind of task, where you have to search for objects in the environment. And finally, I will talk about pattern completion, how you can recognize objects that are heavily occluded, where we also think that feedback signals may be playing an important role. So before I go on to describe what we think the feedback from V2 to V1 may be doing, let me describe very quickly classical work that Hubel and Wiesel did that got them the Nobel Prize, by recording the activity of neurons in primary visual cortex. They started working with kittens, and then subsequently monkeys, and discovered that there are neurons that show orientation tuning, meaning that they respond very vigorously to a bar of a particular orientation. These are spikes; each of these marks corresponds to an action potential, the fundamental language of computation in cortex. And this neuron responds quite vigorously when the cat is seeing a bar of this orientation.
And essentially, there's no firing at all with this type of stimulus in the receptive field. This was fundamental because it transformed our understanding of the essential computations in primary visual cortex in terms of filtering the initial stimulus. This is what we now describe by Gabor functions. And if you look at deep convolutional networks, many of them, if not perhaps all of them, start with some sort of filtering operation that is either Gabor filters or resembles this type of orientation tuning that we think is a fundamental aspect of how we start to process information in the visual field. One of the beautiful things that Hubel and Wiesel did is not only to make these discoveries, but also to come up with very simple graphical models of how they thought this could come about. And this remains today one of the fundamental ways in which we think about how orientation tuning may come about. If you record the activity of neurons in the retina or in the LGN, you'll find what are called center-surround receptive fields. These are circularly symmetric receptive fields, with an area in the center that excites the neuron, and an area in the surround that inhibits the neuron. What they conjectured is that if you put together multiple LGN cells whose receptive fields are aligned along a certain orientation, and you simply combine all of them, you simply add the responses of all of those neurons, you can get a neuron in the primary visual cortex that has orientation tuning. This is a problem that's far from solved, despite the fact that we have had four or five decades to work on it. There are many, many models of how orientation tuning comes about. But this remains one of the basic bottom-up feed-forward ideas of how you can actually build orientation tuning from very simple receptive fields. This has informed a lot of our thinking about how basic computations can give rise to orientation tuning in a purely bottom-up fashion. In primary visual cortex, in addition to the so-called simple cells, there are complex cells that show invariance to the exact position or the exact phase of the oriented bar within the receptive field. And that's illustrated here. So this is a simple cell. This simple cell has orientation tuning, meaning that it responds more vigorously to this orientation than to this orientation. However, if you change the phase or the position of the oriented bar within the receptive field, the response decreases significantly. In contrast to this complex cell, which not only has orientation tuning, meaning that it fires more vigorously to this orientation than to this one, but also has phase invariance, meaning that the response is more or less the same, regardless of the exact phase or the exact position of the stimulus within the receptive field. And again, the notion that they postulated is that we can build these complex cells by a summation of the activity of multiple simple cells. So again, if you imagine now that you have multiple simple cells with different receptive fields that are centered at these different positions, you can add them up, and create complex cells. These fundamental operations of simple and complex cells in primary visual cortex can be somehow traced to the root of a lot of the bottom-up hierarchical models. A lot of the deep convolutional networks today essentially have variations on these kinds of themes: filtering steps, nonlinear computations that give you invariance, and a concatenation of these filtering and invariance steps along the visual hierarchy.
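Here is a minimal NumPy sketch of the two Hubel and Wiesel proposals just described, with made-up filter sizes and positions: a "simple cell" sums center-surround (difference-of-Gaussians) subunits whose centers lie along a line, which yields orientation tuning; a "complex cell" takes the max over simple cells at shifted positions, which yields phase and position invariance.

    import numpy as np

    def dog_filter(size=9, sigma_c=1.0, sigma_s=2.0):
        # Center-surround (difference-of-Gaussians) receptive field, like the LGN.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx**2 + yy**2
        center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
        surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
        return center - surround

    def simple_cell(image, centers):
        # Hubel-Wiesel conjecture: sum the outputs of center-surround subunits
        # whose receptive-field centers are aligned along an orientation.
        dog = dog_filter()
        h = dog.shape[0] // 2
        drive = 0.0
        for (r, c) in centers:   # `centers` on a line -> orientation preference
            patch = image[r - h : r + h + 1, c - h : c + h + 1]
            if patch.shape != dog.shape:   # skip subunits too close to the border
                continue
            drive += np.sum(patch * dog)
        return max(drive, 0.0)   # rectification: firing rates are non-negative

    def complex_cell(image, centers, shifts):
        # Complex cell: max over simple cells at shifted positions, giving
        # invariance to the exact phase/position of the bar.
        return max(
            simple_cell(image, [(r + dr, c + dc) for (r, c) in centers])
            for (dr, dc) in shifts
        )

For example, `centers` placed along a vertical line gives a vertically tuned unit, and pooling over a handful of horizontal `shifts` makes the response tolerant to where exactly the bar falls.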
Following up on this idea, I would like to understand the basics of what kind of information is provided when you have signals from V2 to V1. To do that, we have been collaborating with Richard Born at Harvard Medical School, who has a way of implanting cryoloops. This is a device that can be implanted in monkeys over areas V2 and V3 to lower the temperature, and thus reduce or essentially eliminate activity from areas V2 and V3. So that means that we can study V1 without activity in areas V2 and V3. We can study V1 sans feedback. So this is an example of recordings of a neuron in this area. This is the normal activity that you get from the neuron. Here is when they present a visual stimulus. This is the spontaneous activity. Each of these dots corresponds to a spike. Each of these lines corresponds to a repetition of the stimulus. This is a traditional way of showing raster plots for neuron responses. So you see that this is the spontaneous activity. You present the stimulus. There's an increase in the response of this neuron, as you might expect. Actually, I'm sorry. This actually starts here. So this is the spontaneous activity, this is the response. Now here, they turn on their pump. They start lowering the temperature. And you see, within a couple of minutes, they significantly reduce the responses. They largely silence-- not completely, but largely silence-- activity in areas V2 and V3. And this is reversible, so when they turn the pumps off, activity comes back in. So the question is, what happens in primary visual cortex when you don't have feedback from V2 and V3? The first thing they have characterized is that some of the basic properties of V1 do not change. This is consistent with the simple models that I just told you about, where the orientation tuning in the primary visual cortex is largely dictated by the bottom-up inputs, by the signals from the LGN. The conjecture from that would be that if you silence V2 and V3, nothing would happen with orientation tuning in primary visual cortex. And that's essentially what they're showing here. These are example neurons. This is showing orientation selectivity. This is showing direction selectivity, what happens when you move an oriented bar within the receptive field. So this is showing the direction. This is showing the mean normalized response of a neuron. This is the preferred direction, the direction or orientation that gives the maximum response. The blue curve corresponds to when you don't have activity in V2 and V3. Red corresponds to their control data. And essentially, the tuning of the neuron was not altered. The orientation preferred by this neuron was not altered. The same thing goes for direction selectivity. So the basic properties of orientation tuning and direction selectivity did not change. Let me say a few words about the dynamics of the responses. So here, what I'm showing you is the mean normalized responses as a function of time. Time 0 is when the stimulus is turned on. As I told you already, by about 50 milliseconds or so, you get a vigorous response in primary visual cortex. And if we compare the orange and the blue curves, we see that this initial response is largely identical. So the initial response of these V1 neurons is not affected by the absence of feedback from V2. We start to see effects, we start to see a change in the firing rate, here-- largely at about 60 milliseconds or so after stimulus presentation.
So in a highly oversimplified cartoon, I think of this as a bottom-up, Hubel and Wiesel-like response, driven by the LGN, with signals from V2 to V1 coming back about 10 milliseconds later. And that's when we start seeing some of these feedback-related effects. I told you that some of the basic properties do not change. We interpret this as being dictated largely by bottom-up signals. The dynamics do change. The initial response is unaffected. The later part of the response is affected. I want to mention one more thing that does change. And for that, I need to explain what an area summation curve is. So if you present a stimulus of this size within the receptive field of a neuron, you get a certain response. As you start increasing the size of this stimulus, you get a more vigorous response. Size matters. The larger, the better-- to a point. There comes a point where it turns out that the response of the neuron starts decreasing again. So larger is not always better. A little bit larger is better. Beyond that, the size has an inhibitory effect overall on the response of the neuron. This is called surround suppression. And these curves have been characterized in areas like primary visual cortex, and also in earlier areas, for a very long time. It turns out that when you do this type of experiment in the absence of feedback, the effect of surround suppression does not disappear. That is, you still have a peak in the response as a function of stimulus size. But there is a reduced amount of surround suppression. That is, when you don't have feedback, there's less suppression. You have a larger response for bigger stimuli. So we think that one of the fundamental computations that feedback is providing here is this integration from multiple neurons in V1 that happens in V2, and then inhibition of the activity of neurons in area V1 to provide some of the suppression. This is partly the reason why our neurons are not very excited about a uniform stimulus, like a blank wall. Our neurons are interested in changes, and part of that, we think, is dictated by this feedback from V2 to V1. We can model these center-surround interactions as a ratio of two Gaussian curves, two forces. One is the one that increases the response. The other one is a normalization term that suppresses the response when the stimulus is too large. There are a number of parameters here. Essentially, you can think of this as a ratio of Gaussians, ROG. There's a ratio of two Gaussian curves, one dictating the center response, the other one the surround response. And to make a long story short, we can fit the data from the monkey with this extremely simple ratio of Gaussians model. And we can show that the main parameter that feedback seems to be acting upon is what we call Wn-- that is, this normalization factor here. So the term that dictates the strength of the surround division from V2 to V1-- we think that's one of the fundamental things that's being affected by feedback. So we would think of this as the gain. We think of this as the spatial extent over which V2 can exert its action on primary visual cortex. We think that's the main thing that's affected here. This type of spatial effect may be important in another role that has been ascribed to feedback, which is the ability to direct attention to specific locations in the environment. I want to come back to this question here, and ask under what conditions, and how, feedback can also provide important feature-specific signals from one area to another.
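For concreteness, here is a small Python sketch of an area summation curve under a ratio of Gaussians model of the general form described: a center drive divided by one plus a surround normalization drive. The functional form follows a common version from the literature, and the parameter values-- including the weaker surround division meant to mimic the cooled, no-feedback condition-- are purely illustrative, not the lab's fitted parameters.

    import numpy as np
    from scipy.special import erf

    def gaussian_drive(diameter, w):
        # Squared summed drive from a Gaussian mechanism of spatial extent w,
        # integrated over a stimulus of the given diameter.
        return erf(diameter / (2.0 * w)) ** 2

    def rog_response(diameter, Kc, wc, Kn, wn):
        # Ratio of Gaussians: excitatory center drive divided by
        # (1 + normalization drive from the surround).
        Lc = gaussian_drive(diameter, wc)
        Ln = gaussian_drive(diameter, wn)
        return Kc * Lc / (1.0 + Kn * Ln)

    # Illustrative area summation curves: feedback intact vs. V2/V3 cooled.
    sizes = np.linspace(0.1, 8.0, 50)   # stimulus diameter, degrees
    intact = rog_response(sizes, Kc=60, wc=0.8, Kn=2.0, wn=2.5)
    cooled = rog_response(sizes, Kc=60, wc=0.8, Kn=0.8, wn=2.5)  # weaker surround division
    print("peak with feedback: %.1f, without: %.1f" % (intact.max(), cooled.max()))

With the weaker normalization term, the curve still peaks and declines, but the decline is shallower and the responses to large stimuli are bigger-- qualitatively the pattern described for V1 without V2/V3 feedback.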
And for that, I'm going to switch to another task, another completely different prep, which is the Where's Waldo task-- the task of visual search. How do we search for particular objects in the environment? And here, it's not sufficient to focus on a specific location; we need to be able to search for specific features. We need to be able to bias our visual responses towards specific features of the stimulus that we're searching for. So this is a famous sort of Where's Waldo task. You need to be able to search for specific features. It's not enough to be able to send feedback from V2 to V1, and direct attention, or change the sizes of the receptive fields, or direct attention to a specific location. Another version that I'm not going to talk about, which has a theme related to visual search, is feature-based attention-- when you're actually paying attention to a particular face, to a particular color, to a particular feature that is not necessarily localized in space, as our friend here has studied quite significantly. People always like to know the answer of where he is. OK. So let me tell you about a computational model and some behavioral data that we have collected to try to get at this question of how feedback signals can be relevant for visual search. The initial part of this computational model is essentially the HMAX type of architecture that has been pioneered by Tommy Poggio and several people in his lab, most notably people like Max Riesenhuber and Thomas Serre. I was thinking that by this time, people would have described this in more detail. I'm going to go through these very quickly. Again, today in the afternoon, we'll have more discussion about this family of models. So this family of models essentially goes through a series of linear and non-linear computations in a hierarchical way, inspired by the basic definition of simple and complex cells that I described in the work of Hubel and Wiesel. So basically, what these models do is they take an image. These are pixels. There's a filtering step. This filtering step involves Gabor filtering of the image. In this particular case, there are four different orientations. And what you get here is a map of the visual input after this linear filtering process. The next step in this model is a local max operation. This is pooling over neurons that have identical feature preferences, but slightly different scales or slightly different positions in their receptive fields. And this max operation, this non-linear operation, is giving you invariance for the specific feature. So now you can get a response to the same feature, irrespective of the exact scale or the exact position within the receptive field. These were labeled S1 and C1, initially in models by Fukushima. And this type of nomenclature was carried on later by Tommy and many others. And this is directly inspired by the simple and complex cells that I very briefly showed you previously in the recordings of Hubel and Wiesel. These filtering and max operations are repeated throughout the hierarchy again and again. So here's another layer that has a filtering step and a nonlinear max step. In this case, the filtering here is not a Gabor filter. We don't really understand very well what neurons in V2 and V4 are doing. One of the types of filters that have been used, and that we are using here, is a radial basis function, where the properties of a neuron in this case are dictated by patches taken randomly from natural images.
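A compact sketch of the S1/C1 front end just described, with assumed filter parameters: S1 convolves the image with Gabor filters at four orientations and rectifies, and C1 takes a local max over nearby positions (the full model also pools over scales, which is omitted here for brevity).

    import numpy as np

    def gabor(size=11, wavelength=5.0, sigma=3.0, theta=0.0):
        # Gabor filter: a sinusoidal grating under a Gaussian envelope.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        yr = -xx * np.sin(theta) + yy * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

    def s1_layer(image, n_orientations=4):
        # S1: linear filtering with Gabors at several orientations, then rectification.
        size = 11
        h, w = image.shape
        out = np.zeros((n_orientations, h - size + 1, w - size + 1))
        for k in range(n_orientations):
            g = gabor(size=size, theta=k * np.pi / n_orientations)
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[k, i, j] = max(0.0, np.sum(image[i:i + size, j:j + size] * g))
        return out

    def c1_layer(s1, pool=4):
        # C1: local max pooling over position, giving tolerance to small shifts.
        k, h, w = s1.shape
        out = np.zeros((k, h // pool, w // pool))
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[:, i, j] = s1[:, i * pool:(i + 1) * pool, j * pool:(j + 1) * pool].max(axis=(1, 2))
        return out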
All of this is purely feed-forward. All of this is essentially the basic ingredient of the type of convolutional networks that have been used for object recognition. You can have more layers. You can have different types of computations. The basic properties are essentially the ones that are described briefly here. What I really want to talk about is not the former part, but this part of the model. Now, if I ask you, where's Waldo, you need to do something. You need to be able to somehow look at this information, and bias your responses, or bias the model, towards regions of the visual space that have features that resemble what you're looking for: your car, your keys, Waldo. So the way we do that is, first-- in this case, I'm going to show you what happens if you're looking for the top hat here. So first, we have a representation in the model of the top hat. This is the hat here. And we have a representation in our vocabulary of how units in the highest echelons of this model represent this hat. So we have a representation of the features that compose this object at a high level in this model. We use that representation to modulate, in a multiplicative fashion, the entire image. Essentially, we bias the responses in the entire image based on the particular features that we are searching for. This is inspired by many physiological experiments that have shown that, to a good approximation, this type of modulation in feature-based attention has been observed across different parts of the visual field. That is, if you're searching for red objects, neurons that like red will enhance their response throughout the entire visual field. So we have the entire visual field modulated by the pattern of features that we're searching for here. After that, we have a normalization step. This normalization step is critical in order to discount purely bottom-up effects. We don't want the competition between different objects to be purely dictated by which object is brighter, for example. So we normalize after modulating with the features that we are searching for. That gives us a map of the image, where each area has been essentially compared to this feature set that we're looking for. And then we have a winner-take-all mechanism that dictates where the model will pay attention, or where the model will fixate first-- where the model thinks that a particular object is located. OK, so what happens when we have this feedback that's feature-specific, and that modulates the responses based on the target object that we're searching for? In these two images, either in object arrays or when objects are embedded in complex scenes, we're searching for this top hat. And the largest response in the model is indeed in the location where the object is. In these other two images, the model is searching for this accordion here. And again, the model was able to find that by this comparison of the features with the stimulus. More generally, these are object array images. This is the number of fixations required to find the object in these object array images. So one would correspond to the first fixation. If the model does not find the object in the first location, there's what's called inhibition of return. So we make sure the model does not come back to the same location, and the model will look at the second best possible location in the image. And it will keep on searching until it finds the object. So the model performs at 60% correct in the first fixation.
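Here is a hypothetical sketch of the search loop just described-- multiplicative feature-based modulation across the whole visual field, a normalization step to discount bottom-up salience, winner-take-all selection of the next fixation, and inhibition of return. The array layout, the particular normalization, and the neighborhood size for inhibition of return are assumptions for illustration.

    import numpy as np

    def search(feature_maps, target_features, max_fixations=5, eps=1e-8):
        # feature_maps: features x height x width responses of a bottom-up model.
        # target_features: feature vector representing the object being searched for.
        f, h, w = feature_maps.shape
        # Multiplicative feature-based modulation applied across the whole field.
        modulated = feature_maps * target_features.reshape(f, 1, 1)
        # Normalization, so competition is not dictated by bottom-up salience alone.
        priority = modulated.sum(axis=0) / (feature_maps.sum(axis=0) + eps)
        fixations = []
        for _ in range(max_fixations):
            # Winner-take-all: fixate the location with the strongest match.
            r, c = np.unravel_index(np.argmax(priority), priority.shape)
            fixations.append((r, c))
            # Inhibition of return: suppress a neighborhood of the visited location.
            r0, r1 = max(0, r - 3), min(h, r + 4)
            c0, c1 = max(0, c - 3), min(w, c + 4)
            priority[r0:r1, c0:c1] = -np.inf
        return fixations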
And eventually, after five fixations, it can find the object almost always, right here. This is what you would expect by random search, if you were to randomly fixate on different objects-- so the model is doing much better than that. And then for the aficionados, there's a whole plethora of purely bottom-up models that don't have feedback whatsoever. This is a family of models that was pioneered by people like Laurent Itti and Christof Koch. These are saliency-based models. Although you cannot see it, there are a couple of other points in here. None of those models can find the object either. So it's not that the objects we're searching for are more salient, and that that's why the model is finding them. We really need something more than just bottom-up, pure saliency. We did a psychophysical experiment. We asked, well, this is how the model searches for Waldo. How will humans search for objects under the same conditions? So we had multiple objects. Subjects have to make a saccade to a target object. To make a long story short, this is the cumulative performance of the model as a function of the number of fixations under these conditions, and the model is reasonable in terms of how well humans do. This is data from every single individual subject in the task. I'm going to skip some of the details. You can compare the errors that the model is making, how consistent people are with themselves and with respect to other subjects, how good the model is with respect to humans. Long story short, the model is far from perfect. We don't think that we have captured everything we need to understand about visual search. For example, as some people alluded to before, the model doesn't have these major changes with eccentricity, and the fovea, and so on. There's a long way to go, but we think that we've captured some of the essential initial ingredients of visual search, and that this is one example of how visual feedback signals can influence this bottom-up hierarchy for recognition. I want to very quickly move on to a third example that I wanted to give you of how feedback can help in terms of visual recognition. What other functions could feedback be playing? And for that, I'd like to discuss the work that Hanlin did here, and also Bill Lotter in the lab, in terms of how we can recognize objects that are partially occluded. This happens all the time. So you walk around and see objects in the world. You can also encounter objects where you can only find partial information, and you have to perform pattern completion. Pattern completion is a fundamental aspect of intelligence. We do that in all sorts of scenarios. It's not just restricted to vision. All of you can probably complete all of these patterns. We use pattern completion in social scenarios as well, right? You make inferences from partial knowledge about people's intentions, and what they're doing, and what they're trying to do, OK? So we want to study this problem of how you complete patterns, how you extrapolate from partial, limited information, in the context of visual recognition. There are a lot of different ways in which one can present partially occluded objects. Here are just a few of them. What Hanlin did was use a paradigm called bubbles, which is shown here. Essentially, it's like looking at the world like this. You only have small windows through which you can see the object. The difficulty can be titrated to make the task harder or easier. So if you have a lot of bubbles, it's relatively easy to recognize that this is a toy school bus.
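A minimal sketch of the bubbles paradigm as described: the image is revealed only through a set of Gaussian apertures at random locations, and the number of bubbles is the knob that titrates difficulty. The aperture size and the clipping to [0, 1] are illustrative choices, not the study's exact parameters.

    import numpy as np

    def bubbles_mask(shape, n_bubbles, sigma=10.0, rng=None):
        # Sum of Gaussian apertures at random locations; values in [0, 1].
        rng = np.random.default_rng(rng)
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape)
        for _ in range(n_bubbles):
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
        return np.clip(mask, 0.0, 1.0)

    def occlude(image, n_bubbles, rng=None):
        # Fewer bubbles -> more occlusion -> harder recognition.
        return image * bubbles_mask(image.shape, n_bubbles, rng=rng)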
If you have only four bubbles, it's actually pretty challenging. So we can titrate the difficulty of this task. Very quickly, let me start by showing you psychophysics performance here. This is how subjects perform as a function of the amount of occlusion in the image-- as a function of how many pixels you're showing for these images. And what you see here is that with 60% occlusion, performance is extremely high. Performance essentially drops to chance level when the object is more and more occluded. There is a significant amount of robustness in human performance. For example, if you have a little bit more than 10% of the pixels in the object, people can still recognize them reasonably well. So this is all behavioral data. Let me show you very quickly what Hanlin discovered by doing invasive recordings in human patients while the subjects were performing this recognition of objects that are partially occluded. It's illegal to put electrodes in the human brain in normal people, so we work with subjects that have pharmacologically intractable epilepsy. So in subjects that have seizures, the neurosurgeons need to implant electrodes, A, in order to localize the seizures, and B, in order to ensure that when they do a resection, and they take out the part of the brain that's responsible for seizures, they're not going to interfere with other functions, such as language. These patients stay in the hospital for about one week. And during this one week, we have a unique opportunity to go inside a human brain, and record physiological data. Depending on the type of patient, we've used different types of electrodes. This is what some people refer to as ECoG electrodes-- electrocorticographic signals. These are field potential signals, very different from the little spikes that I was showing you before. These are aggregate measures, probably of tens of thousands, if not millions, of neurons, where we have very, very high temporal resolution at the millisecond level, but very poor spatial resolution, only being able to localize things at the millimeter level or so. With these, we can pinpoint specific locations within approximately one millimeter, but have very high signal-to-noise ratio signals that are dictated by the visual input. An example of those signals is shown here. These are intracranial field potentials as a function of time. This is the onset of the stimulus. And in these 39 different repetitions, where Hanlin is showing this unoccluded face, we see a very vigorous change, quite systematic from one trial to another. All of those gray traces are single trials, similar to the raster plot that I was showing you before. So now I'm going to show you a couple of single trials. We're showing individual images where objects are partially occluded. In this case, there's only about 15% of the pixels of the face that are being shown. And we see that despite the fact that we're covering 85%, more or less, of that image, we still see a pretty consistent physiological signal. The signals are clearly not identical. For example, this one looks somewhat different. There's a lot of variability from one trial to another. But again, these are just single trials showing that there still is selectivity for this shape, despite the fact that we are only showing a small fraction of it. These are all the trials in which these five different faces were presented. Each line corresponds to a trial. These are raster plots. As you can see, the data are extremely clear.
There's no processing here. This is raw data, single trials. These are single trials with the partial images. You again can see there's a vigorous response here. The responses are not as nicely and neatly aligned here, in part because all of these images are different. All of the bubble locations are different. As I just showed you, there's a lot of variability here. If you actually fix the bubble locations-- that is, you repeatedly present the same image multiple times, still in pseudorandom order, but the same image-- you see that the signals are more consistent. Not as consistent as this one, but certainly more consistent. Again, a very clear selective response, tolerant to a tremendous amount of occlusion in the image. Interestingly, the latency of the response is significantly later compared to the whole images. So if you look at, for example, 200 milliseconds, you see that the responses started significantly before 200 milliseconds for the whole images. All of the responses here start after 200 milliseconds. We spent a significant amount of time trying to characterize this, showing that pattern completion-- the ability to recognize objects that are occluded-- involves a significant delay at the physiological level. If you use the purely bottom-up architecture and try to do this in silico, the bottom-up model does not perform very well. The performance deteriorates quite rapidly when you start having significant occlusion. I'm going to skip this and just very quickly talk about some of the initial steps that Bill Lotter has been taking, trying to add recurrency to the models-- trying to have both feedback connections as well as recurrent connections within each layer, to get a model that will be able to perform pattern completion, and therefore use these feedback signals to allow us to extrapolate from previous information about these objects. Bill will be here Friday or Monday, I'm not sure. So you should talk to him more about these models. Essentially, they belong to the family of HMAX. They belong to a family of convolutional networks, where you have filter operations, threshold and saturation, pooling, and normalization. Jim will say more about this family of models today in the afternoon. These are purely bottom-up models. And what Bill has been doing is adding recurrent and feedback connections, retraining these models with those recurrent and feedback connections, and then comparing their performance with human psychophysics. So this is the behavioral data that I showed you before. This is the performance of the feedforward model. This is the recurrent model that he was able to train. Another way to try to get at whether feedback is relevant for pattern completion is to use backward masking. Backward masking means that you present an image, and immediately after that image, within a few milliseconds, you present noise. You present a mask. And people have argued that masking essentially interrupts feedback processing. Essentially, it allows you to have a bottom-up flow of information-- it stops feedback. I don't think this is entirely rigorous. I think that the story is probably far more complicated than that. But to a first approximation, you present a picture, you have a bottom-up stream, you put a mask, and you interrupt all the subsequent feedback processing. So if you do that at the behavioral level, you can show that when stimuli are masked, particularly if the interval is very short, you can significantly impair pattern completion performance.
So if the mask comes within 25 milliseconds of the actual stimulus, performance in recognizing these heavily occluded objects is significantly impaired. We interpreted this to indicate that feedback may be needed for pattern completion. This is Bill's instantiation of that recurrent model. Because he has recurrency now, he also has time in these models. So he can also present the image, present the mask to the model, and compare the performance of the computational model as a function of the occlusion in the unmasked and the masked conditions. So to summarize-- and there are still two or three more slides that I want to show-- I've given you three examples of potential ways in which feedback signals can be important. The first one has to do with the effects of feedback on surround suppression, going from V2 to V1. We think that by doing this type of experiment, combined with computational models, to understand what are the fundamental computations, we can begin to elucidate some of the steps by which feedback can exert its role. We hope to come up with the essential alphabet of computations, similar to the filtering and normalization operations, that is implemented by feedback. The second example was feedback carrying feature-specific signals that dictate what we do in visual search tasks. And the last example was our preliminary work trying to use feedback, as well as recurrent connections, to perform pattern completion and extrapolate from prior information. So the last thing I wanted to do is just flash a few more slides about a couple of things that are happening in neuroscience and computational neuroscience that I think are tremendously exciting. If I were young again, these are some of the things that I would definitely be very, very excited to follow up on. So the notion that we'll be able to go inside brains and read our biological code, and eventually write down computer code, and build amazing machines, is, I think, very appealing and sexy. But at the same time, it's a far cry, right? We're a long way from being able to take biological codes and translate them into computational codes. It's really extremely hard. So here are three reasons why I think there's optimism that this may not be as crazy as it sounds. We're beginning to have tremendous information about wiring diagrams at exquisite resolution. There are a lot of people who are seriously thinking about providing us with maps of which neuron talks to which other neuron. And this was never available before. So we are now beginning to have detailed information about connectivity at much higher resolution than ever before. The second one is the strength in numbers. For decades, we've been recording the activity of one neuron at a time, maybe a few neurons at a time. Now there are many different ideas and techniques out there by which we can listen to and monitor the activity of multiple neurons simultaneously. And I think this is going to be game changing for neurophysiology, but also for the possibility of computational models that are inspired by biology. And the third one is a series of techniques, mostly developed by people like Ed Boyden and Karl Deisseroth, to do optogenetics, and to manipulate these circuits with unprecedented resolution. So let me expand on that for one second. This is C. elegans. This is an electron microscopy image of how one can characterize the circuitry.
So it turns out that this pioneering work of Sydney Brenner a couple of decades ago has led to mapping the connectivity of each one of its 302 neurons-- exactly which other neurons each neuron is connected with. And this is represented in that rather complex way in this diagram here. Well, it turns out that people are beginning to do these types of heroic experiments in cortex. So we're beginning to have initial insights into how neurons are wired with each other at this resolution in cortex. We're nowhere near being able to have this for humans. Not even for other species, mice, and so on. Not even Drosophila yet. There's a huge amount of [INAUDIBLE] and interest in the community in having a very detailed map. So the question for you, the young and next generation, is what are we going to do with these maps? If I give you a fantastically detailed wiring diagram of a chunk of cortex, how is that going to transform our ability to make inferences and build new computational models? The second one has to do with our ability to record from more and more neurons. This is other work I didn't have time to talk about. This is work also that Hanlin did with Matias Ison and Itzhak Fried. These are recordings of spikes from human cortex, again in patients that have epilepsy. I'm just flashing this slide because I had it handy. These are 300 neurons. This is not a simultaneously recorded population. These are cases where we can record from a few neurons at a time using microwires now. This is different from the type of recording that I showed you before. These are actual spikes that we can record. And these 380 neurons are from a different task. So recording from these 318 neurons took us about three to four years of time. There are more and more people that are using either two-photon imaging and/or massive multielectrode arrays that are beginning to be able to record the activity of hundreds of neurons simultaneously. My good friend and crazy inventor, Ed Boyden, believes that we will be able to record from 100,000 neurons simultaneously. Of course, he is far more grandiose than I am, and he can think big at this kind of scale. But even to think about the possibility of recording from 1,000 or 5,000 neurons simultaneously, so that in a week or a month one may be able to have a tremendous amount of data from a very large population-- this is going to be transformative. Three decades ago in the field of molecular biology, people would sequence a single gene, and they would publish the entire sequence-- ACCGG-- and so on. That was the whole paper. A grad student would spend five years just sequencing a single gene. Now we have the possibility of downloading whole genomes, thanks to advances in technology. I suspect that a lot of our recordings will become obsolete. We'll be able to listen to the activity of thousands of neurons simultaneously. And again, it's for your generation to think about how this will transform our understanding of how quickly we can read biological codes. In the unlikely event that you think that that's not enough, here's one more thing that I think is transforming how we can decipher biological codes. And that's, again, Ed Boyden using techniques that are referred to as optogenetics, where you can manipulate the activity of specific types of neurons. I flashed a lot of computational models today. A lot of hypotheses about what different connections may be doing. At some point, we will be able to test some of those hypotheses with unprecedented resolution.
So if somebody wanted to know what a given neuron in V2 is doing-- what kind of feedback it's providing-- we may be able to silence only neurons in V2 that provide feedback to V1, in a clean manner, without affecting, for example, all of the other feedforward processes, and so on. So the amount of specificity that can be derived from these types of techniques is enormous. So that's all I wanted to say. So because we have very high specificity in our ability to manipulate circuits, because we'll be able to record the activity of many, many more neurons simultaneously, and because we'll have more and more detailed wiring diagrams, I think that the dream of being able to read out and decode biological codes, and translate those into computational codes, is less crazy than it may sound. We think that in the next several years and decades, smart people like you will be able to make this tremendous transformation and discover specific algorithms about intelligence by taking direct inspiration from biology. So that's what's illustrated here. We'll be happy to keep on fighting. Andrei and I will keep on fighting about Eva and how amazing she is or isn't. What I tried to describe is that by really understanding biological codes, we'll be able to write amazing computational code. I put a lot of arrows here. I'm not claiming QED. I'm not saying that we solved the problem. There's a huge amount of work that we need to do here. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_3_Jessica_Sommerville_Infants_Sensitivity_to_Cost_and_Benefit.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JESSICA SOMMERVILLE: So this morning we heard Laura give a really beautiful overview of her research program, and my talk today is going to be a little bit different. I'm going to talk about some things that are going to be highly related to what Laura was talking about, particularly at the end of her talk. But because this is brand new, hot-off-the-press work-- some of it's actually ongoing, as you'll see as the talk unfolds-- I'm going to be focusing on a more specific, detailed level. I'm going to tell you about four different studies, three of which are completed, one of which is ongoing. And they all have to do with infants' sensitivity to costs and benefits. OK. So, as we all know, cost-benefit analyses are really central to the decisions we make at both a conscious and an unconscious level. And, of course, there's all kinds of different ways that we make decisions, right? But one of the things that we often do when we're making a decision is we think about what rewards we anticipate from following a particular course of action, and how those compare to the costs that we'll incur from performing that same action. And we will act, in a sense, to try to maximize the value that we get out of a particular choice. That's true of simple decisions like this, right? This woman deciding what she's going to eat for dessert. And we can also apply these analyses to more complex decisions, like where we're going to go to college, what kind of career we're going to pursue, where we're going to live. OK. One of the things that we heard from Laura this morning is that cost-benefit analyses don't just apply to our own behavior and our own decision making. She showed some really neat evidence that these types of analyses form the basis for the inferences that we make about other people and about their behavior. So that raises the very interesting question of the developmental origins of these types of analyses. And that's what I'm going to talk about today, infants' sensitivity to costs and benefits. My talk is going to have two different parts. In the first part, I'm going to be talking about cases where infants are observing other people. And the question that I'm asking there is, are infants able to register the costs that are behind other people's actions? And then in the second part of my talk, I'm actually going to switch gears, and I'm going to talk about infants' registration and minimization of costs, and their registration of benefits, to guide their own behavior, their own decision making. I'm going to talk about a particular test case. And that's the test case of infants' prosocial behavior. OK. And I want to be kind of specific here. So across all of these studies, I'm talking about cost in one particular way. Of course, there's all kinds of ways you could operationalize cost. But what we've been focusing on so far is physical effort, the physical effort behind an action as a cost. Why would we start here? Well, there are several different reasons, right?
At the basic kind of evolutionary level, it's really important that we can register and that we can minimize energetic costs, effortful costs, right? Our very survival depends on that. We have to metabolically and energetically budget, or we won't stick around, right? So that gives us good reason to think that's a good starting place in terms of looking at young infants' ability to register and potentially minimize costs. Another reason is that for decades, scholars have really given a central role to effort in decision-making. And this dates back to the 1940s, to Hull and Solomon, who postulated the law of least effort. So the idea here is that if there are two lines of action that lead to equal rewards, we're going to take the path of least resistance, right? We're going to seek to minimize effort. And then, finally, in more contemporary work that's looked at cost-benefit decision-making, both in adults and in nonhuman animals-- a lot of this work comes from the neuroimaging literature-- effort has been a fairly heavily studied cost. So this is a good starting place, because we have a pretty good understanding of how effort and various benefits or rewards are integrated at both the neural and the behavioral level, at least for nonhuman animals and for human adults. OK. So the first question we might ask is, what is the existing evidence on the question of whether infants have a basic ability to register the costs behind actions? And there are really two different ways we could pose this question. We can think about the question with respect to infants' own behavior, their own actions. Is there evidence that infants will act to recognize costs and to minimize costs in their own behavior? The other way that we can ask the question is in terms of infants' observation of other people's behavior. Do they recognize the costs behind other people's actions? You know, surprisingly, there really hasn't been a lot of work that's looked directly at this when we're talking about infants' own behavior, their own decision making. There is some work from the weight perception literature that's looked at how infants interact with blocks of different weights. And so what people will do in these studies is they'll present little babies, nine-month-old infants, with two blocks that look virtually identical. They only differ from one another in terms of their respective weights. And what these studies have shown is if you give infants a choice between these two objects, they'll systematically prefer the light object over the heavy object. So one way to think about these findings is that what infants are doing here is exactly what we're interested in: they're minimizing the physical cost, right? They're taking the path of least effort. One challenge, though, for interpreting these findings is that oftentimes in these studies, the heavy blocks that are being used are beyond infants' lifting capacity. So what that means is it's hard to know if these results are about infants registering cost per se, or if what we're really getting at is just infants repeating a sort of successful interaction with an object that they've acted on previously. OK. So Laura talked a little bit about this in her talk today. What about the evidence of registration of costs in other people's actions? And one of the things that Laura mentioned, that we know from many, many different studies, is that infants appear to expect efficiency in other people's actions. Laura showed you one example of that.
I'll show you a different example of that, which comes from a study that Liz did with one of her graduate students. So here we see someone who's reaching over a barrier in order to get an object. The barrier is then removed. Infants have the expectation that that person is going to reach directly for the object, right, rather than performing that funny arcing motion. So, again, one way to think about these findings is that what infants are doing is they're expecting that the person is going to take the least costly action, right? And, in fact, there are all kinds of costs to this particular arcing motion, right? It's indirect, it probably takes longer, it's probably more difficult, it's more effortful. And so we, too, wanted to ask, to begin with, whether infants were able to register costs. But unlike in this situation, where there are multiple, potentially redundant cues that this arcing motion is a costly behavior, we wanted to really focus in on situations in which there aren't a lot of overt, observable cues to the costs underlying another person's action. So the way that we did this is we showed infants different actions that look similar on their surface level, but these actions differ in terms of the degree of physical effort that is required to perform them. And the way that we achieved that is by having infants watch people lift objects of different weights, right? So, obviously, heavy objects are harder to lift. Lifting a heavy object is more effortfully costly than lifting a light object. And what we wanted to know is, can infants recognize this under conditions where they have really minimal observable cues-- no cues about, for example, straining or sweating or things like that, that might be really obvious for figuring out the effort? Will they be able to understand that when someone is lifting a heavy block, that's a more effortful action than when they're lifting a light block? In addition to that kind of primary question, we were also interested in whether this ability in infants might be individually variable. So this might come as a surprise to you, but infants, like adults, are individually variable. And they're individually variable, of course, in many ways. But one way in which they vary from one another is in terms of how strong they are, right? Just like human adults, right? We're variable in terms of how strong we are. And we had an idea, or hypothesis, that infants' individual differences in strength might actually be important for registering the effort-related costs behind these different lifting actions with different weighted objects. One of the reasons that they might be important is that strength gates the type of experience that infants are going to have in their everyday life, right? If you're a strong baby, you can lift heavier objects than a weak baby. You can also lift a wider range of objects, right? So we thought maybe there's something about individual differences in strength, particularly for babies who are stronger, such that they'll be better at recognizing the differential effort that goes along with lifting actions when you're talking about blocks of different weights. OK. So let me tell you about the study that we conducted to ask this question. We tested 12-month-old infants in this study. They took part in a turn-taking procedure where we recorded EEG, electrical activity from the brain as it propagates to the scalp. And in the course of this task, they took part in different types of trials.
On observation trials, they would watch an experimenter who would lift these different objects, or these blocks, and these blocks looked perceptually identical in terms of size and shape. They were different colors so infants could individuate them and keep track of them, but what varied from trial to trial was exactly how much the objects weighed. So they ranged from being the weight of a typical bath toy to being quite heavy. So infants can lift the heaviest blocks, but it's pretty effortful for infants to be able to lift them. In the trials where they watched an experimenter interact with these blocks, the experimenter would do things like put the block up on a platform, or drop it into a bucket-- effortful types of actions where you can, at least in principle, register the effort behind the action. And then infants would also have the opportunity to act. They could perform the same actions with the objects. Infants also, for measurement purposes, received baseline trials where we're just registering EEG in response to abstract images, like a checkerboard pattern, for example. OK. So what are we interested in here? So, in this particular study, we were looking at the suppression of a particular oscillatory frequency called sensorimotor alpha-- some people call it mu attenuation. So we know that at rest, neurons in sensorimotor cortex fire spontaneously and they fire in synchrony. And what that means is we get these large amplitude EEG oscillations in the alpha frequency band. When sensorimotor cortex is activated-- and that happens, of course, when we act, and it also happens when we watch other people act-- what you see is a suppression in sensorimotor alpha. So many people recently have been interested in suppression of sensorimotor alpha, or mu attenuation, from the perspective of looking at the mirror neuron system. More broadly, and for our purposes, we're really just thinking about this as a measure of sensorimotor cortex activation. So greater suppression equals more sensorimotor cortex activation. And our question here was whether infants' activation of sensorimotor cortex would vary as a function of watching people lift blocks of different weights. Would you get greater activation when people were lifting heavier objects? That would, of course, be a sign that infants were distinguishing between different actions on the basis of effort. OK. So in addition to looking at that, we also gave infants a grip strength assessment. OK. So let me tell you a little bit about the grip strength measure, only because it took us several years to come up with this, so I feel like I need to talk about it a little bit. So we wanted to measure infant strength, and the way that we did that is by measuring infants' grip strength. But, of course, there's a challenge here. If you're an adult, right, and you want to measure an adult's grip strength, you just get something called a dynamometer, you have the adult squeeze a bulb or squeeze the hand grip, and then you get a nice force reading from that, right? And that all works very smoothly with adults but, of course, you can't just hand that to an infant and say, squeeze as hard as you can. That doesn't work, obviously. So what we did here is we had an experimenter who had a toy, and the infant had the same toy. The experimenter would squeeze her toy, and what we hoped is that this would motivate infants, or lead infants, to squeeze their toy.
Their toy, in contrast to the experimenter's toy, had a hidden pressure sensor embedded within it, which triggered the playing of Old McDonald, which, of course, infants greatly like, right? They find that very enjoyable. So we were able to measure how hard infants squeezed the toy. Now, the trick here was we want to get infants' strongest squeeze, right? So what we did is we set up our device so that each time the infant squeezes it, they have to squeeze harder to get Old McDonald to play. And, of course, they want Old McDonald to play, right? So they're motivated to keep doing that. So that's how we record infants' maximum grip strength. We essentially keep going as long as infants will allow us to do that, basically. There are some other things that we measured. We measured infants' weight. Our motivation for doing this is that in adults, strength and weight are highly correlated. They were in our sample, too, so that kind of helps to validate our grip strength paradigm. And there were things like general motor maturity that we measured, gross motor skills. We measured how frequently infants lift blocks within the task, because we want to control for these in our analyses. We want to look specifically at the effect of grip strength. OK. So let me tell you about the first thing that we looked at. So I'm going to show you a series of scatter plots that look at the relation between sensorimotor alpha suppression and infants' grip strength. And these are plotted as a function of the weight of the block. And these are when infants are observing other people. So the thing to know is we're talking about suppression, so you're looking for negative scores. More negative scores mean more suppression. And we had a particular hypothesis, or idea, about how this would go. We thought that when the blocks were relatively light, grip strength would be a weaker predictor of sensorimotor alpha suppression. And the reason for thinking that is that whether an infant is relatively strong or relatively weak, they all probably have lifetime experience lifting relatively light objects. However, where strength really comes into play is as objects get heavier, right? So stronger infants very likely have a greater lifetime history of lifting heavier objects. So our prediction was that these two things would be increasingly tightly related as block weight goes up. And, in fact, that's exactly what we found. So there are weak relations when the block is light; when it's a heavy block-- we call them the heavy and the super heavy block-- there's a tighter relation. And you see the strongest relation here when the block is extremely heavy. And these analyses control for things like infants' in-task lifting experience, their weight, and their motor development scores. So the next thing we wanted to know is whether there was any evidence that suppression of sensorimotor alpha would be greater in cases in which the block is heaviest versus when the block is lightest. So this is really our index, or our measure, of whether infants are differentiating, when they're watching other people act on objects, the degree of effort that goes along with lifting the object as a function of block weight. So what I'm going to show you is change scores.
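The suppression and change scores being plotted here are, in essence, band-power log-ratios between observation trials and baseline trials. Below is a minimal sketch of computing such an index, assuming single-channel EEG segments sampled at 500 Hz and an infant alpha band of roughly 6-9 Hz; the actual electrodes, band edges, and preprocessing used in the study are not given in the transcript:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density in the [lo, hi] Hz band (Welch's method)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def alpha_suppression(obs, base, fs=500, lo=6.0, hi=9.0):
    """Log power ratio of an observation trial vs. baseline.
    Negative values indicate suppression (more cortical activation)."""
    return np.log(band_power(obs, fs, lo, hi) / band_power(base, fs, lo, hi))

# Hypothetical usage with random stand-ins for real EEG segments:
rng = np.random.default_rng(1)
obs, base = rng.normal(size=1000), rng.normal(size=1000)
suppression = alpha_suppression(obs, base)

# A "change score" in the sense used here would then simply be the
# difference between suppression indices for heavy and light blocks:
# change = suppression_heavy - suppression_light
```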
So more negative means that you're seeing increasing sensorimotor alpha suppression for heavy versus light blocks, and these are plotted as a function of infants' grip strength. So what you can see here is that for the weaker babies, the lower grip strength babies, you're not really seeing any systematic change from the lightest block to the heaviest block. But you are for the stronger infants. So what these findings suggest to us is that the stronger infants appear to be differentiating these actions on the basis of the weight of the object that the person is lifting, and the weaker infants aren't. OK. So what do we know from this data? Well, we have some evidence that activation of sensorimotor cortex, as indexed by suppression of sensorimotor alpha while babies are watching other people lift blocks of different weights, varies as a function of the weight of the block, right? So this might signal that infants are able to recognize that different actions have different degrees of physical effort that go along with them. And we also see that the ability to make this distinction is tied to infants' own strength, their own grip strength. And our interpretation of this is that this might have to do with strength being a rate limiter, or a facilitator, of the type of experience that infants previously have with objects of different weights. And, in particular, the stronger infants might have more experience with lifting heavier objects. They might have more contrastive experience, which allows them to better recognize, or better differentiate, the degree of physical effort that goes along with different actions when you're talking about lifting objects of different weights. OK. So that's part one. So I think what these data tell us is that in this context, infants have a means of registering effort-related costs. I think where they go beyond past data is that they tell us that infants can do this under conditions in which they have minimal behavioral cues. So think back to that reaching action, right? There are all kinds of cues that that is a costly action, right? There are all kinds of ways that it differs from a direct reach. In this situation, we're talking about actions that are really minimally different from one another. What I want to do now is switch gears and talk about kind of the flip side of the coin. And that is infants' use of costs and benefits, or rewards, to guide their own behavior. And I'm going to be specifically focusing on the test case of infants' prosocial behavior. So many of you may know this already, but infants are highly prosocial, right? There have been a lot of studies on this recently, and all of these studies have kind of come down on the conclusion that starting in the second year of life, infants will do things like help people achieve their goals, they will share toys or objects with other people, they'll comfort people in distress. But there are questions and debates that are hotly contested about early prosocial development, and I just want to bring two of them to your attention. OK. So one question is, when does infants' or children's prosocial behavior become selective or strategic, right? So we know that by preschool age, early school age, children's prosocial behavior is somewhat selective, meaning that there are some people, for example, that children are more likely to help than others, right? And children are-- not infants, children-- children are more likely to help under situations where there are perhaps reputational concerns involved.
But what we don't yet know is whether this is true of very early prosocial development. So what is the developmental course like? Do kids start off being selective and strategic? Or do they only get there over time with development? That's one question. There's another, related question that has to do with what the underlying motivation for prosocial behavior is, right? So the kind of generous interpretation of infants' prosocial actions, children's prosocial actions, is that what is going on here is that infants are motivated by empathic concern, right? They care about other people's needs, they care about other people's desires. And in these experimental contexts, what they're doing is they're acting to meet another person's needs; they're acting out of empathic concern. But there are also other reasons why infants, or anyone, for that matter, might behave prosocially, that might have to do with social affiliation biases, social motivation, that might have to do with wanting to see a goal being completed, et cetera, right? So one of the ways that we can start to get traction on these issues is by looking at the impact of various costs on infants' prosocial behavior. And somewhat surprisingly, this is not a terribly well studied topic as of yet. So there are some studies, both with infants and with children, where people have looked at the impact of personal cost on prosocial behavior. And, usually, the way personal costs are operationalized is in terms of, let's say, an infant is tested in a paradigm where they need to help someone else, or share an object with someone else. And they might be required to give up their own object versus an object that's just sitting there in the lab, right? Presumably, their own object has higher personal cost. Now, we don't really know when personal costs start to impact infants' prosocial behavior, because there's been a lot of mixed evidence. And, particularly in infancy, there's, as of yet, no systematic evidence that high personal costs actually reduce infants' prosocial responding. What about the question of energetic, physical, effort-related costs? Well, again, here, this is really an understudied topic. So there's one existing study that has looked at infants' helping under some degree of physical effort cost. And it's a little bit hard to know what to make of that study, because we know that infants' helping behavior is still present under those conditions; we just don't really know how it compares to conditions where the physical costs are low. OK. So we started off by asking a really super simple question about infants' prosocial behavior, and that was whether the anticipated physical effort that goes along with prosocial responding influences infants' prosocial responding. In particular, when the effort is high, does it increase or, rather, decrease infants' prosociality? OK. So we tested 18-month-old infants in this study. I'll start by telling you about the critical test phase; it was a helping task. An experimenter was on the opposite side of the room. She needs a block in order to complete a tower that she's building. What happens before that is all infants take part in a training phase. They're faced with these vinyl blocks. You'll see a video clip in a moment. And these vinyl blocks have been rigged by us so that they range in weight. There are five of them, and they're different colors, so infants can keep track of them.
During training, what happens is the experimenter plays a game with the infant where they get them to drop each block into a bucket. Babies like to do that; it makes a cool noise, right? And, really, this training phase serves two purposes. The first purpose that it serves is we want infants to learn how much each block weighs. The second purpose that it serves is we want to be able to record what is the heaviest block that infants are capable of lifting. All right. In the test phase, as I told you, the experimenter's on the opposite side of the room, she's building a block tower, and she needs a block to complete it. There's a single target block available to infants, and what varies between our two conditions is the weight of that block. So for half of the babies, the lightest block of the training blocks is left behind. And for the other half of the babies, the heaviest block that infants are capable of lifting has been left behind. So we're contrasting effort here in terms of the weight of the block that infants have to carry across the room to help the experimenter. So we're looking at infants' block retrievals. The other thing that we recorded was a parent report of infants' walking experience. So these are 18-month-old infants; they can all walk. On average, they've been walking for six months, so they're all experienced walkers. But there's individual variability in terms of how long they've been walking for. So why was this important? Well, here was our underlying logic. Imagine that you and a friend are going on a hike; you're both equally strong, you can both lift 60 pounds. But your friend is an expert hiker and you're a novice hiker, right? And you both have to carry a 60 pound backpack up the hill. Well, despite the fact that you might both be equally strong, if you're the novice hiker, it's probably going to be more effortful for you to get that backpack up the hill than it is going to be for your friend, your buddy. So we had a particular prediction that what we would see is a relationship between parent-reported walking experience and infants' likelihood to help the experimenter by carrying the block across the room, and that this would be selective, or at least stronger, for the high effort condition. OK. So let me show you a couple little video clips here so you can get a flavor of the procedure. This is just showing you the test phase, so it's excerpted from the test phase. There's one thing I want to explain a little bit. So you can see up here there's this striped bucket. And the reason that we have that there is because we want the experimenter to be unaware of the target block that is left behind, so they're naive to the infant's condition. From the infant's perspective, it looks like the experimenter can see the target block, but she actually can't. She doesn't know if they're in the high or low effort condition. So this is a baby who was tested in the low effort condition. [VIDEO PLAYBACK] I'm going to use these blocks to make a tower. These blocks can go here. This one can go here. And this block can go-- oh, no. Oh, no, I'm missing the block I need to finish my tower. I'm missing my block. Ah, oh, there it is, Joelle, look. The block got moved on your blanket. Can you bring me the block so I can finish my tower? [END PLAYBACK] All right. Sorry, we cut that off a little bit early. He goes over and he gives her the block. OK. OK. Now let's watch a baby in the high effort condition.
Remember, the only difference between these two conditions is the weight of the block that's been left behind. OK. [VIDEO PLAYBACK] I'm going to use these blocks to make a tower. These blocks can go here. This one can go here. And this one can go-- oh, no. Oh, no, I'm missing the block I need to finish my tower. I'm missing my block. Ah, oh. There it is, Rose. The block got left on your blanket. Can you bring me the block so I can finish my tower? [LAUGHTER] Can you bring me the block? Rose, can you bring me the block so I can finish my tower? No. [END PLAYBACK] So it's a little hard to hear what she was saying, but if you couldn't hear it, she was saying, no, thank you. So she said, no. No, thank you. No, thank you. OK. So that's pretty illustrative of the procedure. OK. So what did we find? So here are infants' rates of helping in the low effort condition, and here in the high effort condition, as you can see. So infants help much more frequently in the low effort condition than in the high effort condition. Here's what we found with respect to infants' walking experience. Walking experience, how long an infant has been walking, predicts infants' likelihood of helping in the high effort condition. And what this tells us is that for each month of additional walking experience, infants are twice as likely to help. Now, what we can see here is that infants are less likely to help under high effort conditions, right? So infants' prosocial behavior is influenced by the effort-related costs of prosocial responding. And a critic, I guess, could say, well, maybe it's not that infants are going by the effort; maybe it's that they're not able to help in the high effort condition. We don't think that that's the case, because infants have given us evidence that they're capable of lifting the block that they're later tested with in the high effort condition. But the next condition I'll show you will also kind of speak to that. And another important thing to recognize here is that infants seem to be recognizing these costs at an objective level. I think Laura called this in her talk an agent-independent level, right? As a function of the circumstances of the situation, right? And they're also recognizing costs at a more subjective level, in terms of their own capabilities and how those influence the particular cost. In this case, it's their amount of walking experience, how expert a walker they are. OK. So what we wanted to do next is to find out, when infants are presented with these highly effortful helping situations, whether infants' helping behavior would vary as a function of the motivational benefits of prosocial responding. Now, it's been pretty firmly established that early prosocial responding appears to be immune to extrinsic rewards. So what that means is if you test a baby in one of these helping paradigms and you say, good job, way to go, good job, that's actually not going to increase their subsequent helping behavior. If anything, it will decrease it. But that doesn't mean that more intrinsic rewards don't influence how infants perform on these particular tasks. We know from some prior work that infants by this age have certain affiliative biases, right? They have biases for individuals who share their preferences, who like the same things that the infant likes. They prefer to play with those people over people who don't like the same things that they like.
And they also possess affiliative biases for people who could be said to share sort of in-group member characteristics, right? So infants like people who speak their native language over a nonnative language speaker. And, of course, these affiliative biases might have important functional consequences, right? They might be important for cultural learning. So what we wanted to know in this next study is whether we could kind of push around these intrinsic benefits for infants, to see if their behavior would change under these high effort helping conditions. The way that we did this in this particular study is that prior to the test procedure, the helping task, we had infants take part in this little task where they were given two toys. They could choose between the two toys. This happened on three different trials, with different toys each trial. Infants would make a choice. And then the experimenter would subsequently show that she liked one toy and disliked the other toy. And the really simple manipulation between these two different conditions was whether the experimenter liked the same toy as the baby, or whether she liked the other toy. So did she share the infant's preferences, or did she oppose their preferences, right? And we would think, in terms of the data on infants' affiliative biases, that they would prefer to interact with someone who shares their preferences. Infants took part in the same helping task as they did in the first experiment. To streamline the procedure, we used the medium block weight that infants had been capable of lifting in the first study. The other thing that we did is we added a post-test phase. So we excluded any infants who were not capable in the post-test of lifting the target block or a heavier block, right? So we know for all of these infants in the sample that they can lift the block. The question is, do they help the experimenter? OK. So we again looked at infants' helping behavior, and we looked at their walking experience. The other thing that we did in the study is we looked at infants' helping as a function of the response period-- rates of helping in the first half of the response period versus rates of helping overall in the response period. And our motivation for doing this is that we thought that if there are differences in the degree of motivation to help, you might see early differences in the response period, right? So early on, infants might differentiate across the conditions. But these might attenuate over time. One thing I forgot to mention is that in the course of the response period, infants receive prompts at certain intervals, in order to say, can you get me the block, right? So the question is, with these prompts, will any early differences that we see attenuate over time? OK. So here's what we can see. This is the first half of the response period. Infants in the shared preference condition are significantly more likely to help the experimenter than infants in the nonshared preference condition, right? So when there are intrinsic rewards associated with engaging in high effort behavior, infants are more likely to help the experimenter. And the other thing that we saw is that infants' walking experience significantly predicts helping behavior in the nonshared preference condition.
So when the motivational benefits are low, these subjective costs seem to play a stronger role, to have a stronger predictive value, than when the motivational benefits are high. And then here, this just shows you infants' helping behavior but, now, as a function of the overall response period. You can see they're still numerically different, but they start to come together, right? So the differences are really driven by what's happening early in the response period. OK. So these findings suggest that infants' willingness to engage in high effort, high cost helping is motivated, or affected, by intrinsic motivational factors. Infants are more likely to carry a heavy block to help someone who shares their preferences. One thing I want to point out here is that these findings help us interpret what's going on in the first experiment, right? If the first experiment were explained by a lack of ability rather than by sensitivity to effort, we shouldn't get these findings, right? Because the effort is equivalent across these two conditions here. And what simply varies are the kind of motivational benefits. OK. So, now, I want to tell you about a study that we're just in the middle of conducting. I think it's really exciting and interesting. I'd be curious to get your thoughts. So we're literally mid data collection, but I think I have enough data to tell you what's going on so far. So, now, we're trying to expand the scope of benefits that we're looking at, right? And as I sort of alluded to earlier, we really don't know as of yet, or we don't know very much about, what counts as a cost and a benefit for infants, right? That's something that we actually have to determine empirically. There have been some recent studies that have shown that three- and four-year-olds, perhaps paradoxically, when they're tested in sharing tasks, are more likely to share with a rich recipient than a poor recipient, right? And that is paradoxical in the sense that, of course, the rich recipient has less of a need than the poor recipient. But, perhaps, it's unsurprising in the sense that there's something self-serving there, right? Like, it might be in your self-interest to affiliate with people who have a lot of resources versus people who have few resources. In our study, what we did is, before infants got the helping task, we demonstrated to them that one individual had more resources than the other individual. It's a really simple manipulation. What they did is they saw two individuals sitting at a table. They both had these transparent fishbowl-looking bowls, and on each trial they have different goods in the fishbowls. So what happens is that one individual always has lots of stuff, right? On one trial it's animal crackers, on another trial it's these cool little balls. On the third trial it's these cool, flashing little blingy rings. And the other person has very few. But they both do exactly the same thing during this first phase. What they do is they take turns-- we counterbalance everything, of course-- they pull out three objects one at a time, and they say, hey, baby, look at all my toys, right? So they're both doing exactly the same thing. What differs is the kind of resource context, right? One of them has a lot of stuff. One of them only has three things, right?
And we do that on repeated trials, because we're trying to give babies the impression that, generally, this person has more stuff than the other person. OK. So then we test infants in the helping task. This helping task is a little bit different than the one I just told you about, because now we're pitting these two experimenters against one another. The experimenters in the helping task have equivalent need. They both, like Miranda was doing earlier, are building this tower, right? They're missing a block; they need a block to complete their tower, right? The only thing that differentiates the experimenters is what happened previously. One person had a lot of stuff, the other person didn't have very much stuff. And the other thing that we're manipulating here is whether the infant has to engage in equally effortful actions to help the two experimenters, or whether helping one of the experimenters is more effortful. And the way that we've operationalized this so far in this study is by varying how far the baby has to walk. So, in one case, the person is within a few feet; in the other case, they're across the room. OK. So let me show you the data, first, for the equal effort condition. What this graph shows you is who the baby is helping. And what you can see here is when effort is equivalent, infants are systematically helping the person who has more resources. They're helping the rich experimenter, right? So now the question becomes, what happens in cases of unequal effort? And as I said, this data is still coming in and we're still testing this, so the condition that we started with, given that we have this initial pattern, is the condition where the rich experimenter is the one you have to walk a long way to. And the poor experimenter is the one you have to walk a short way to. And here's what we see. It flips. So I think what this suggests is that infants are both weighing the costs of the action they have to perform and the intrinsic motivational benefits, as defined by these sorts of things that might be important for social affiliation. OK. So what implications does this have for infants' prosocial behavior? Well, it suggests that it may be the case that cost-benefit analyses underlie infants' prosocial behavior, in particular, their helping behavior. And I also want to take us back to two of the questions that I raised earlier. One was about the selective, strategic nature of early prosocial behavior. And the other was about the underlying motivation. So thinking in terms of the selective, strategic question, on the one hand, we can say that fairly early on, at 18 months, infants' prosocial behavior is strategic in the sense that what they seem to be doing is minimizing costs and maximizing motivational benefits. And in terms of the motivation, I think what these findings tell us-- and this is not to say that infants are never motivated by empathic concern, that they're never motivated by other people's needs-- is that there are other things that come into play here. So infants' underlying motivation to help is influenced by a tendency to want to affiliate with particular individuals. OK. So I told you a little bit about infants' registration of costs in the actions of other people, and their use of costs and benefits to guide their own prosocial behavior. I just want to close by raising some questions that I think we should be interested in pursuing in the future. OK. So this came up earlier, right?
So one thing that we have to do to understand infants' behavior is to understand what acts as a cost for infants and what acts as a benefit for infants. And I think the thing that's really important to point out here is it won't necessarily be what we think as adults, right? It won't be intuitive to us, right? And the classic example that we can think about is, you know, you buy your child this toy, you bring it home from the store, you're so excited to give your child this cool, snazzy toy, and all they want to do is play with the box, right? So what that shows us is we're not good models, necessarily. We have to kind of determine this empirically. We can't necessarily use our own intuitions to figure this out. Another question is how costs are read, right? So in the first study, I showed you that infants' experience potentially is important, or factors that gate infants' experience, right? But that's in a situation where there aren't a lot of observable cues to effort. So I think it may be the case that there's some variability here. Some costs infants may require experience with in order to figure out, what is the cost, right? How much of a cost is this? And maybe there are other costs that infants can more readily read from the get-go. And then, finally, I think at some point, we need to ask whether infants and, potentially, children are sensitive to other types of costs beyond effort, and also whether they have a kind of higher level category of cost. So one thing that's really interesting in the literature on nonhuman animals is that there are separate neural systems that are responsible for effort-reward decision-making and for delay-reward decision-making. But, obviously, we, as adults, can group these things together, or I wouldn't be able to give this talk today, and Laura wouldn't have been able to give her talk today. So one of the things we have to start to understand is when those things kind of come together in service of this larger category of cost. OK. So I'm going to stop there and ask for questions. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_6_Tomer_Ullman_Amazon_Mechanical_Turk.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TOMER ULLMAN: So yeah, I'm going to spend the rest of the tutorial talking about Amazon Mechanical Turk, although some of the stuff just applies, in general, to making large scale experiments on people and crowds and things like that. There are other alternatives. I think Google has some in-house things. There's like CrowdFlower or Clickworker or things like that. But most of the psychologists I know that are doing stuff online and large scale things use Amazon Mechanical Turk. So I'll be talking about that. This is a crowdsourcing platform which is designed to do all sorts of small tasks, not necessarily psychophysics. It was invented, or built, around 2005. And the signature tagline for it is "Artificial artificial intelligence." So these are all sorts of tasks that you wish computers could do. You don't want to put people through it, but computers can't do it yet. So let's have people do it for now. Amazon sort of invented it for in-house purposes, to get rid of duplicate pages. They really wanted an algorithm to do that for duplicate postings, but there really wasn't one. So they just paid some people very little money to go through a lot of these pages. And then they figured, wait a minute, a lot of companies probably want this sort of service, so they offered it to the general public. Do people know what the original turk was, the original Mechanical Turk? Who doesn't know? Raise your hands. OK, so Mechanical Turk-- the name comes from this 18th century mechanical contraption called the turk, which was a chess playing device that supposedly ran on clockwork and could beat some of the finest minds in Europe. I think at some point it played, like, the Austrian duchess or empress or whatever it was, and Napoleon, and things like that. And it floored people-- how could this work? And you know, the inventor said, well, I've invented a thinking device. And the clockwork just solves it. And a lot of people of course figured that it must be a hoax, but they couldn't be sure exactly how it worked. And it was, of course, a hoax. All of these gears and boxes and clockwork were just designed to distract you from the fact that you could fit a person inside. I mean, people had thought of that, obviously, but it was cleverly designed so that you couldn't quite find the person. And nobody knows for sure, because the original turk was destroyed before people put forward the exact hypothesis, but the consensus now is that it must have been just some person inside making the moves. At some point people suggested it must have been a well-trained child or a small person or something like that, because the compartment must have been really small. So it wasn't thinking, right? It was just some more magic-- but not magic in the Harry Potter sense, magic in the stage magic sense. So, what sorts of tasks do people run on Amazon Mechanical Turk? Well actually, the majority is not psychophysics or psychologists. There are a lot of companies using Amazon Mechanical Turk to do things like, hey, garner positive reviews for us. Go to this website and write something nice about our product.
Or, you know, translate some text for me instead of hiring some professional to do it. There are a lot of people on Amazon Mechanical Turk that can probably translate something for you from English to Spanish and back, English to Chinese and back, things like that. And they can do it for much less money than you would need to pay a professional translator. Another thing that may be more up your alley is, you know, you've heard a lot about supervised training and things like that, and big data sets that are used to train things like convolutional neural networks. The supervised part usually comes from somewhere. Somebody has to tag those images. Somebody has to go over a million images and say dog, not dog, dog, not dog, not dog, two dogs, one dog. You want somebody to go ahead and do that. And you don't have artificial intelligence to do it for you, yet. I mean, CNNs are getting better, but someone had to tag the data for them. And that's the sort of thing that you would use Amazon Mechanical Turk for. Look at this image. Tell me, is there a social interaction or isn't there? Now do that for the next 100 images. Let's get 1,000 people to do that. You get a sense of why you would want to crowdsource this kind of problem. But there's also, you know, just psychology, psychophysics, the sort of thing that you would bring people into the lab to do but you don't want to, for various reasons. So the idea is you could collect a lot of people responding to stuff on the screen, right, exactly the sort of thing where you would bring them into the lab and measure something that they're doing with a screen. You could just have them doing that with the screen at home, or wherever the hell it is that they're doing this thing. So let's see, you could do things like perception. You know, just look at this. Is it a dog? Yes or no. Or other things like the Stroop task. If nobody had invented the Stroop task, you could do that on Amazon Mechanical Turk and get famous. I was going to say rich and famous, but just famous. You could do various attention tasks. You could do things like learning and categorization and bias. Learning things like, here's a tufa, here's a tufa, here's a tufa. Is this a tufa? I mean, it's very easy to do that on a screen. You can do that on Amazon Mechanical Turk. You could do things like implicit bias. A lot of social psychologists are interested in that sort of thing. Let's see, what else. You could do things like morality, the trolley problem. You could do things like decision making, and economics, and prisoner's dilemmas, and making predictions, and tell us which one of these two movies, based on trailers, do you think will win the best actor award, and things like that. How would you actually run something on Amazon Mechanical Turk? The first thing you would do is register as a requester. You would go to requester.amazon.-- I'll send you the slides. You can just click on that link and it'll take you to the page to register as a requester. You would then do one of two things. You could either do the vanilla version, which is to just use the Amazon template. Amazon has made it very simple for you as a requester to bring up a new experiment and just say, I want to ask a simple set of questions. Now go. And there are some parameters that you need to set, and I'll show you in a second what they do. And if you're only interested in very simple things, like fill out this box or click on one of these two images or something like that, that's perfect and it's fine.
And Amazon takes care of a lot of things for you, like saving the data. You don't have to mess around with, like, SQL databases and things like that on your own. The other thing that you could do is to point people to an external website. Then you would have to host it somehow. The advantage there is that then you could do any sort of fanciness that you want. You could show them custom animations, have them play a game, record how they're playing that game, send them new things based on how they're playing that game. Or you could do things like, you wait until you recruit two people. If you've just recruited one person, it just says waiting, waiting, waiting-- you wait for the next person. Now you have them pitted against one another in some sort of game. This has become more popular in economics. That's not the sort of thing you could do with the Amazon template. But if you're good at coding, or you can hire someone that's reasonable at coding, you can do that by pointing them to an external website. And we can show some examples of that. Then once you decide-- you register as a requester. You decide which one of these things you want. You build your website-- either you build the external website or you just use the Amazon template. And then you test it on a sandbox. Don't run your experiments live immediately. I'll be giving sort of tips throughout the thing. This might be redundant for some of you but not redundant for others. So there's a sandbox where you can just run it and get sort of fake responses that don't count, where people can sort of fill it in. You might want to use that before you go live with 1,000 people and say, oh god, I miscoded that variable and nothing is working. So test it ahead of time. And then, once you're finally, finally done, you would submit it. You would just click, you know, submit this thing. You would pay the money to Amazon. And you also want to announce it on several Mechanical Turk forums, which I'll get to in the end. These are very helpful people. They're very nice to you if you're nice to them. And it gets you a lot more results very fast. OK, let's see. Why don't I show you an example-- let me show you an example of what a requester page looks like, just to get a sense of it for those of you who have not seen this sort of thing before. You can see our experiment is almost finished. I asked for 100 people on that. So let's go to something like create. So this is what the requester page looks like. Here are all sorts of projects that I've run on Amazon Mechanical Turk. And you would do something like new, or you would copy something from something old that you already did. Let's just edit that and show you some examples of what you can do. You give a name, you know, a title for your own internal use, like AI estimate. You then give a title that Amazon Mechanical Turkers see, something like artificial intelligence estimate, short psychology study. There, you probably want to give a time estimate. Turkers would prefer it if you give them a time estimate. They care about their time. They care about their money. They do like doing psychology. They don't like filling in endless bubbles, you know, the standard psychology things where you rate 100 things like, I feel this way or that way. Don't do that. But the sort of fun psychology they're actually on board with. It's much more fun than writing show reviews.
You might want to give it a nice title that will entice them, an honest description, like, you know, you'll answer a few simple questions, or you'll watch a movie and then answer two questions, or things like that. Keywords like easy, fun, something descriptive about the task, like AI, short. Again, this is sort of luring, and it's good to do that if you're honest about it. Some people do things like easy, fun, 100 pages of filling in bubbles and things like that. Don't do that. They publish it in the forums, like, this is a lie. Don't do that. This is where you say how much you want to pay per assignment, and we'll get to how much you should pay per assignment. You'll notice it's very little. You're paying these people very little. How many assignments you want per hit. A hit is just the name for your task. How much time you are allotting them per assignment. So someone has accepted your hit, now how long do they have to carry it out. You might think, oh, my task only takes two to three minutes, so only give people two to three minutes. I don't want them, like, taking the hit and then going and drinking some coffee and then coming back to it or something like that. Consider still giving them a whole bunch of time, because a lot of the time, you'll find that if people see that there's only a five minute mark on it and they can't complete it in five minutes, they're sort of concerned, like, what is this thing? And if I don't get through it in five minutes, you know, I'll get disqualified or something like that. Give them some time. Give them more than ample time to finish this thing. You can, yourself, keep around some timer within the external website or something like that, to see whether they're actually actively participating right now. How long will this hit stay up for? And auto approve, and things like that. I'll talk a little bit about rewards and incentives and things like that. Obviously, they care a lot about things like money. They care a lot about things like how much you're going to pay them. They care about doing it quickly so that they can move on and get more money. They care about it being somewhat fun, but that's not such a big deal for them. And they care about getting it approved quickly. OK, so you as a requester, you will get reviewed on various forums and things like that. If you get bad reviews, people don't want to do your things. One of the things that people care a lot about is something like getting approved quickly. And quickly can be in a day or two, something like that. They don't want to have to wait two weeks for that $0.50 that you were supposed to give them. You can do that if you want to. You have the power as the requester. But if you want to incentivize people, try to make sure that you approve them quickly and let them know that you're going to approve them quickly. So that was just a very general statement. So who are the sort of people that are on Amazon Mechanical Turk? Have people read these sorts of papers and things like that? Do you know more or less? Some of you do. Some of you don't. The US and India make up about 80%-- by the way, this is in flux. Like a study came out two years ago about this sort of thing. It's changed since then. But in the study from two years ago, the US and India made up about 80%, with the US taking up 50-something percent, India taking up the rest. There are slightly more females than males on Amazon Mechanical Turk, and at least the US population is biased towards young and educated people, educated meaning a bachelor's degree.
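Before leaving the requester page: for those who would rather set these same parameters in code, here is a hedged sketch of the equivalent boto3 call, using the mturk client from the earlier sketch. The values are illustrative placeholders rather than recommendations from the lecture (beyond giving ample time and approving quickly), and question_xml stands in for whatever template or external-question XML your study uses.

```python
response = mturk.create_hit(
    Title="Short psychology study (about 5 minutes)",  # include a time estimate
    Description="Watch a short movie and answer two questions.",
    Keywords="easy, fun, short, psychology",
    Reward="0.90",                            # dollars per assignment, as a string
    MaxAssignments=100,                       # assignments per HIT
    AssignmentDurationInSeconds=60 * 60,      # far more than the task needs
    LifetimeInSeconds=7 * 24 * 60 * 60,       # how long the HIT stays up
    AutoApprovalDelayInSeconds=24 * 60 * 60,  # approve within a day by default
    Question=question_xml,                    # placeholder for your question XML
)
print("HIT id:", response["HIT"]["HITId"])
```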
It's certainly more representative of the general population than just hiring, you know, college students during their bachelor's degree. But it is still skewed, keep that in mind. It's skewed towards, basically, the sort of population that you would expect to find on the internet in the United States. OK, so somewhat younger people, somewhat more educated. In general, you might want to look at the sort of things that Mason and Suri were looking at. I can send you the links later. I was talking a little bit about payments. There's been a whole load of studies looking at querying the Amazon Mechanical Turk pool. Who are you? What's your education? Why are you doing this? And then they ask why are you doing this in various ways. Are you doing it for fun? Are you doing it to kill time? Are you doing this as a supplementary thing? They're in it for money. It's very obvious that they're in it for money. That's OK. And keep that in mind when you post hits. You have a lot of power as a requester. You have a ton of power. You have a ton of power to dictate the terms of what you're going to pay them. You have a ton of power to reject their work. Once they basically do the hit for you, you can then go back and say, you failed this question, or you didn't quite get what we wanted, or you actually did the study two months ago, but I didn't implement checks for that, I just asked, did you take the study before, and they didn't remember. Something like that. And then you reject their work. And if you reject it too many times, then they get banned. First of all, if you reject, they don't get paid. If you reject it too many times, they get banned. These are people that are doing it either as supplementary or as their main income. I don't know-- this may seem obvious to some of you. If this doesn't seem obvious to at least one of you, then I'll count it as worthwhile to stress this. You don't care about the $0.20. These people do care about the $0.20. And again, it's not because they're necessarily from poor economic backgrounds, but they're in it for the money and that's what they're doing this for. Try to give them fair pay. And we'll stress that again and tell you what I mean by fair pay. Try not to reject them. OK, except in extreme situations. Even if they failed. Even if they didn't do your catch question. Even if you think that they just zoomed through this or something like that, that's usually on you to catch that, as a psychology researcher. You're not a company. You're a psychology researcher. Try not to reject people. Make sure you have some ways set up ahead of time, and I'll get to that, to know who to reject. But don't actually reject people except in really extreme situations. Something about payments is that Amazon takes about 10%. They've actually raised this to 20% to 40% of the payments. And this is what I was going to say. There have been some attempts within the community of psychologists that have been doing Amazon Mechanical Turk studies, and this has also been fueled by the community of Amazon Mechanical Turkers, or Turkers, to establish some sort of guidelines for minimum pay. So if people come into the lab, there are some guidelines on what you're supposed to pay them. There are no exact guidelines. There are no enforced guidelines, at least not-- maybe there are within particular universities, but there's no single cross-university guideline to tell you you have to pay people this much.
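To make the fair-pay guideline concrete, here is the back-of-the-envelope arithmetic as a small sketch; the wage and fee numbers are placeholders, since the lecture itself hedges on both.

```python
def fair_reward(pilot_minutes, hourly_wage=9.00):
    """Per-assignment reward matching an assumed hourly wage."""
    return round(pilot_minutes / 60.0 * hourly_wage, 2)

def total_cost(reward, n_assignments, amazon_fee=0.20):
    """What you actually spend: worker pay plus Amazon's cut."""
    return reward * n_assignments * (1 + amazon_fee)

reward = fair_reward(pilot_minutes=6)    # a 6-minute pilot -> $0.90 each
print(reward, total_cost(reward, 100))   # 0.9 108.0
```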
And it might be tempting to say, oh, I'll just pay people, you know, the minimum I can get away with. I mean, if I can pay people $0.05 to do a 10 minute task, I'll do that. Fine, I can get the 20 subjects I need. It's a free market. We're not trying to live here in some sort of capitalist fantasy. I'm not going to get into economics too much, because I think you guys know that better than I do; you don't need my lecturing in that sense. But a lot of people who have looked into the ethics of this recommend that you try to estimate ahead of time, through a pilot, how long this test is going to take. Based on that, pay them such that it matches minimum wage. Minimum wage being somewhat in flux-- like, you know, I forget if it's $10 an hour or something like that. It's probably less than that. But something like that. I'm not going to tell you exactly how much you should pay them. But try to figure out more or less minimum wage, more or less how long your task takes, and pay them according to that. Let me give some general advantages of Mechanical Turk, in case you guys have not been persuaded yet. You can run large scale experiments. I started running this experiment on 100 people that would have taken me a long time. It's a silly experiment, as Nori pointed out. It's not even exactly an experiment. But I wanted to check how people's responses compared to people in CBMM. First of all, I wouldn't do that in the lab. So there's that. But even if I were to do it in a lab, getting 100 participants would take a long, long time. And for your more serious experiments, getting 100 participants would take a long, long time. Each one has to come in and you have to talk to them. And you have to explain to them exactly what's going on. And you usually can't run them in parallel, or at least you can run only one or two in parallel. Here we ran 100 subjects in an hour. And that's still amazing. That's still flooring me. So we can do it very quickly. A lot of people, very quickly. And what you can do with large scale experiments is usually test some very fine-grained things in your model. If your model has some things like, well, I need to show people all of these different things and make all of these different predictions. And I just need 300 people to do that. Or for example, what my dad was presenting yesterday, these minimal images. Did he mention how many people they had to recruit on Amazon Mechanical Turk to do those minimal images? Yes? It's like thousands. I think it's over 10,000 at this point, or something like that. And the reason is because once you've seen that thing, once you've seen the minimal image, you already know what it is. You're biased. You know it's a horse even though a naive participant wouldn't know it's a horse. So you see why you want 10,000 participants for this thing. You're not going to get 10,000 participants in the lab. No way. So another thing is that, as I said, even if you're paying people minimum wage and things like that, it's still pretty cheap to get 100 subjects. It's cheapish. The ish is because you should pay people some minimum wage. It's replicable, ish. What I mean here by replicable is not what you might think, which I'll get to in another slide. It's just if you want to hand it off to another person. Someone says, I don't quite understand your protocol, or I don't quite believe it, or I want to tweak it in some way. It's much harder, usually, with lab protocols and things like that.
We certainly know that in baby experiments. Wouldn't it be nice if we could just port, you know, I won't say who because it doesn't really matter, but some experiments of people. They describe their methods in the paper. It's not really that great. Wouldn't it be great if we could just copy-paste their experiment and run it with some tweaking. With this sort of thing, you can. I mean, you need to be somewhat on good terms with the person you're asking, but they can just tell you, oh yeah, sure. Here's the code for my website. Just run it again. Run it with your tweaks and things like that. There's an ish there and I'll get to it in a second. The participant pool, as I said, it's more diverse. So I was harping before about this point that the pool doesn't quite represent the US in general, it's more like the US population on the internet. That's still a lot more diverse than recruiting college students. It's a lot more diverse-- let's see, I have here-- in terms of gender, socioeconomic status, geographic region, and age. On all these things that people have tested on Amazon Mechanical Turk, the sample is a lot more diverse, is a lot more representative of the general population, is a lot less WEIRD-- WEIRD being Western, educated, industrialized, rich, democratic, which is usually the pool that's been studied in psychology. These pools are a lot more diverse. They're not diverse enough for certain things in social psychology, and that's this paper by Weinberg, where he says, you know, sometimes social psychologists really, really, really want to control for making sure that age is not a factor, or something like that. Or they really want to get at what the population is, or they think age plays a factor, or something like that. So you need a population-- they call it sort of like knowledge experts. You build some pool. You build some pool that you say, OK, the reason this pool exists is because it's representative of the population. And now we're going to go to this pool and just try them again and again and again on many different experiments. It's sort of like your, you know, not exactly private but shared-between-some-universities pool of social psychology participants. They've tested that and they've shown that Mechanical Turkers are better than those pools in terms of things like attention, filling out the correct responses, and things like that. So yay Mechanical Turk. But there are some things like implicit biases and things that social psychologists care about, where you don't know if the effect is something like age, or something like that. I don't know if this matters to a lot of you, but it's important to keep in mind for those of you who do. Here's this point about will it replicate. You know, some people who are being introduced to Amazon Mechanical Turk, or thinking about it, usually say, yeah, that's fine. But how do I know that people are actually doing what will happen in the lab? I might do it on Mechanical Turk, but then if I do it in the lab, it won't replicate, or things like that. So people have tried a bunch of the psychophysics that Leyla was talking about before, and more. They tried Stroop, task switching, flanker, Simon, Posner cueing, attentional blink, subliminal priming, and category learning. And they've done a whole lot more. This is just from one study by Crump et al., where one of the et al. is Todd Gureckis, who we'll get to in a second.
And what they found was basically replication on all of this classic psychology stuff, the sort of effects you would expect. The sort of effect sizes you would expect. The only thing that was a bit different was in category learning, where you show people something like, this is a tufa. This is a tufa. This is a tufa. Is this a tufa? Where there are different types of learning, type one being easier, type two being a little harder, type three being much harder. Where the classic finding was something like a graded thing. And for Amazon Mechanical Turk, it was more-- it was really hard for them beyond type one. Now is that a failure of replication, or is that because the original study was done on college students who are young and educated? And this is actually more representative of the population, but this is harder to learn. I'm not quite sure. The takeaway here is that, yeah, it seems like, in general, it will replicate, at least certainly for simple perceptual stuff. Concerns of running things on Amazon Mechanical Turk. And I can send a whole bunch of recent papers that are very nice about it. One of them was specifically-- there have been a bunch of, like, New York Times articles on Amazon Mechanical Turk in general. There was a very recent one a few months ago on using it for psychophysics experiments in particular. It's called The Internet's Hidden Science Factory. It's a very nice piece to check out. And they make all these points about the sort of things that you probably thought about as a researcher, but it bears thinking about again, which is, people don't necessarily pay that much attention to your task. You have no control. They're not in the lab. They give some quotes there, which is, you know, Nancy's employers don't know-- yeah, I think it's Nancy. I changed the name. Nancy's employers don't know that Nancy works while negotiating her toddler's milk bottles and giving him hugs. They don't know that she's seen studies similar to theirs, maybe hundreds, possibly thousands of times. So that brings us, actually, to another thing, which is repeated exposure. This is a big concern. By the way, sorry, I'm going sort of back and forth here, because before I leave attention, I want to mention just one thing, which is, attention is a problem; you want to put in attention checks. And I'll talk about how to do that in a second. But we've had this a lot with people in the lab. I'm sure that some of you have experienced this as well. You put them in a room because they need to have privacy while they're doing the task, you go in to check on them, and they're on their phone. This happens a lot. So attention is something that you want to check in the lab as well. A lot of these concerns are not just about Mechanical Turk, but it's certainly easier for people on Mechanical Turk to not quite pay attention. Repeated exposure is a huge problem. And it's a problem for two different reasons. One is that it destroys intuition. I was asking about the trolley problem and you all went, oh, the trolley problem. People on Turk are doing that even more than you are. They see the trolley problem, they've seen it. I guarantee you, it's very difficult to find a Turker that has not seen the trolley problem. They've seen it. They've seen it 1,000 times. They've seen all the variations. And they complain about it. And they're satiated. And they're sick of it. They will say things like, if I see one more trolley problem, just kill them all.
Is there a way to kill the five and the other person on the other side of the track? I don't care anymore. OK, they're completely satiated. It's kind of like saying, hammer, hammer, hammer, hammer. It loses all meaning at some point. You don't have the gut intuitive response. And even if they're doing their best, even if they're not trying to fool you, even if they're honestly trying to answer, they just can't. They don't have the gut intuitive response anymore for the stuff that you're asking them. Some ways to get around that is to try even simple changes. Just don't call it the trolley problem anymore. Set up something else, which is not 5 versus 1, which is 10 versus 2 and it involves pineapples. Like something. You know, these small changes can matter a lot. So that's one thing about repeated exposure, that it destroys intuition. And related to that, there's something called super Turkers. These are people that, you know, 1% of Turkers is responsible for about 10% to 15% of all studies. So when I say people have seen it a lot, that's part of the reason. There's a lot of people doing-- the same small group of people is probably responsible for a lot of these studies. The other reason that repeated exposure ruins things for us is because, you know, it just ruins basic correlations. So let me give you an example. Who here has heard of the ball and bat question? Who here has not heard of the ball and bat question? OK, let me pose it to you. Those of you who suddenly say, oh yeah, I know this. Sh. I'm interested in people who have not heard this before. So it's a simple question. A ball and a bat together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? I'll explain that again. A bat and a ball cost $1.10 together. The bat costs a dollar more than the ball. How much does the ball cost? Anyone? Shout it out. AUDIENCE: $0.05. TOMER ULLMAN: Yeah, $0.05. Who would have said $0.10, don't be shy. Thank you. Thank you for being brave enough. A lot of people say $0.10. They say it immediately. The ball costs $0.10. The bat costs $1. Wait, but it costs $1. No, no, it costs $1 more. So it's-- does everyone get why $0.10 is not the answer, it should be $0.05? OK, good. This was sort of a classic question, along with two other questions called the lily pad question and the widget question. The widget question is something like, five widget machines make five widgets in five minutes. How long does it take 100 widget machines to make 100 widgets? Five minutes, right. Not 100. These three questions were found to correlate with a lot of different things as well as, or even better than, IQ tests. It's been found that, you know, many smart people, from MIT and Harvard included, have failed this question. And the point was, you know, people have made a big deal about it. Like this is even better than IQ tests on all sorts of measures. And it's so simple. We don't have to run 100 IQ test questions. Let's just ask the ball and bat question and then see what that correlates with. And it also relates to, you know, Kahneman makes a big deal about it. It's system one versus system two. System one really wants to answer that it costs $0.10. And you could solve it. Of course you could all solve it. It's not hard for you; if you wrote down the simple equation, it's trivial. Right, you wrote it down as like x. And x costs this. And you could all solve this. You could all solve this in middle school. But you don't. Like, system two-- some of you do.
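For anyone who wants it spelled out, the one line of algebra that system two would write down, with x as the price of the ball:

    x + (x + 1.00) = 1.10  =>  2x = 0.10  =>  x = 0.05.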
But usually the first time you hear it, you don't. And you don't-- even if people are warned, like this is a bit of a trick question, you know, think about it, they don't, usually. And people have used this as a way to test system one-- do people know what I mean when I say system one versus system two? Who doesn't? Raise your hand. So very, very quickly, because it's not that relevant to Mechanical Turk, but very, very quickly, you should read Thinking Fast and Slow by Kahneman. The point is that the mind can generally be categorized into two systems-- even Kahneman doesn't quite believe that, but it's sort of this thing that's easy to talk about. System one is the fast, heuristic system that gives you the cached response. System two is the slow, laborious, can't-do-many-things-at-one-time system. That's the sort of thing that you would use to solve algebra problems. That's the reason you need to slow down when you're thinking about something very hard. System one is the sort of thing that gives you biases and heuristics and things like that. The bat and ball was given as one of the prime examples of system one wants to do this, system two wants to do that. System two is lazy, doesn't actually get engaged unless it really, really has to. Why am I bringing this up? The reason I bring this up is because people thought it was a great idea to put it on Amazon Mechanical Turk and ask people a lot of questions to see what it correlates with. Everybody knows about the ball and bat problem on Amazon Mechanical Turk. Everybody. Don't think that you're being unique. Don't ask them the widget problem. Don't ask them the lily pad problem. They all know about it. And here it's not a problem of intuition, you know, being satiated and things like that. Even if they want to, right, I said before, like they want to tell you what the thing is, they just don't have the gut response anymore. Here they just know it. The reason most of you answered, or those of you who knew about it said, haha, $0.05. I know that one. I've solved it before. People on Mechanical Turk know that too. And it's sort of destroyed any sort of measure it had for whatever the heck it was trying to measure. In general, there's sort of this growing sense that since 2010, people have been trying to make Amazon Mechanical Turk more popular for a few years. It feels almost like an overexploited resource, a little bit. Like you have this tribe which doesn't know about numbers. Let's all go and study them and ask them a billion questions. And the reason they don't know about numbers is because they don't know English. And by the time we are done with them, they will know English. And the one thing they'll know in English is how to count to 10, because we've asked them all these questions. Amazon Mechanical Turk feels a little bit like this overexploited resource at times. These have been concerns for you as a requester working on Amazon Mechanical Turk, things to sort of keep in mind and watch out for. Here are some concerns for people on Mechanical Turk that you can try to alleviate. And these sort of things about-- I've already mentioned two of them, about low pay and rejecting people for no good reason and things like that. One more thing I want to point out is this thing about no de-briefing. When people come into the lab, you can tell them why they were in the study. That's a basic part of the protocol of psychology.
When people are on Amazon Mechanical Turk, it's a good thing to put it in the experiment-- why they were in this experiment-- if you have such a thing and you're running an experiment. People might drop out in the middle, might decide, it's not for me, actually, I'm done. And they never actually figure out what the point of the experiment was. You might say, well, who cares? But then you might as well say, well, who cares about de-briefing people in the lab. If there's a reason for de-briefing people in the lab, there's probably a good reason for de-briefing people on Mechanical Turk. If they drop out before their de-brief, that's kind of a problem. I don't have a good solution for it. It's something to keep in mind. There's also the problem that you should keep in mind and report, probably, if it's a problem, the number of people who have dropped out in the middle, because otherwise it can lead to all sorts of small effects. Like if your task is really hard, and a lot of people drop out, you had like a 90% dropout, and then you say, oh, people are brilliant at my task, because the 10% who actually stuck through with it are the sort of crazy people who are willing to do it. That's actually really, really skewed. So keep in mind the dropout should be low. It should be like a few percent or something like that. The flip side, by the way, of the no de-briefing problem is a coercion problem. So when you bring people into the lab and you say, you know, do the study. You can stop at any time. No problem at all, let me shut the door and wait over here. There's sort of a slight feeling of coercion; even if the study is not something they really are enjoying doing, they'll still do it because they feel pressured to. That problem doesn't exist, at least not on Amazon Mechanical Turk. I mean, they're in it for the pay and there's some cost and all that. But if they really don't want to do it, if it offends them in some way, they'll stop. So that's actually a bonus. Some general tips. Let's see. And I think I'll have two more slides and then we'll wrap it up. Some general tips. In general, when you're thinking about your task, the lower level it is, the closer it is to-- think about the Stroop task. OK, let's put the Stroop task in one case and the question I actually asked you in another case. The Stroop task is low level, hard to beat; even if you've seen it a million times, you will still find that effect. If your task is like that, you should expect to replicate it. If it's much higher level, the sort of thing that relies on them not having seen that before, it's harder to replicate. You want to make sure that people who see it have not seen it before. If it's high level and relies on some sort of zeitgeist, like what people think about AI right now, in two years you'll find a different result. Don't expect the thing that I put online today to replicate in some sense in two years. Have participants give comments. At the end of your survey, once they're done, before the de-briefing, collect some demographics and leave them something optional to just say, what did you think of this study? Do you have any comments? Or anything like that. Most of the time you won't get any comments. After that, the most likely thing is they'll say, fun study. Thanks, if you've done it correctly. Interesting, stuff like that. Some studies we've done have been crazy and they're very, you know, they really like it. And it's nice to get good feedback.
Or they'll tell you something like, that was really interesting, could you tell me a bit more. Here's my email address. Or this button didn't work for me. Or I was actually a bit bothered by the fact that you said that you would kill the robot. Things like that. Or you ask them things like-- give them comment prompts like, why. You don't put that necessarily in your experiment. But you tell them, like, you know, do this. Do that. Make a decision. Why? Like the trolley problem. Why? OK, it's the sort of thing that's not likely to be published immediately, but it's definitely the sort of thing that will help you think of where to take your experiment next. It's very, very important to communicate. And what I mean by that is, give them an email at the beginning to reach you in some way, say, like, you know, in the consent statement. You're doing this experiment. You're doing it for x. Here's a way to reach x if you want to. AUDIENCE: Do you give them your actual email? TOMER ULLMAN: You can set up a bogus email, in the sense of, like, tomer@mechanicalturk. I personally give them my email at MIT. I do. And make sure that you respond to their concerns. And they will write to you, especially if something goes technically wrong. They'll say, like, you know, the screen didn't load for me. I forgot to paste in the code that you wanted, things like that. Or this didn't work quite well. Write back to them. Explain what happened. If they want to know what the study is about, you should explain to them what this study is about. You should do that for two reasons. It feels silly to mention this. I'm sure you're all, you know, you've figured it out by yourselves. But I'll mention it anyway, just on the chance that there's one person that says, oh yeah, that's a good reason. For two reasons, one is that they'll like you a lot more. OK, these people, they go to their own forums. There's a lot of Mechanical Turk forums. There's hundreds of them. And they tell each other what things they should do. They do a good job of policing each other. They try never to post answers to things, like, oh, you can do this by answering this question. No. They don't tolerate that, because they know that we don't tolerate that. You want them to like you in that sense. You want to get good reviews on these things. And one way to garner good favor is to communicate. The other reason is because it's just a good idea. These are people in the public. You wouldn't think about not answering a question of someone who came into the lab and asked you something. Keep in mind, there are real people behind the screen. Make sure that you treat them as real people. I don't mean-- I sound like I'm berating you or something, like that you guys have not been communicating and it's awful. No. That's not the point. I'm sure you all mean to do that; I'm just trying to emphasize it. Like I said before, don't reject unless it's an extreme situation. Also, decide ahead of time how you're going to reject. Decide ahead of time on a catch question, something I'll get to in a second. And say, I'm going to reject people if they do this, and stick to it. Because otherwise, if you don't do that, then when it comes time to actually try to write a paper you'll say, well, I think I'll try throwing out all the people that did it in under 20 seconds, because I don't think they were paying attention that much. Maybe 30 seconds. Yeah, this test should really take 40 seconds. You get the point. Decide ahead of time on the rejection criteria.
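As a sketch of what that policy can look like in code-- approve everyone promptly, and apply your pre-registered criteria as analysis exclusions rather than rejections-- here is one way to do it with the boto3 client from earlier. HIT_ID and failed_precommitted_check are placeholders for your own study.

```python
submitted = mturk.list_assignments_for_hit(
    HITId=HIT_ID, AssignmentStatuses=["Submitted"])

excluded_workers = []
for a in submitted["Assignments"]:
    # Approve everyone quickly; workers care a lot about this.
    mturk.approve_assignment(AssignmentId=a["AssignmentId"])
    # Pre-registered criterion (e.g. missed the catch question):
    # drop them from the analysis, but they still get paid.
    if failed_precommitted_check(a):
        excluded_workers.append(a["WorkerId"])
```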
Have good catch questions. This is good both for, you know, knowing who to reject and making sure that they're paying attention. Catch questions are the sort of thing that you would put in the middle of your survey, or at the end of the survey, or at the start of the survey, just to make sure that they are paying attention. Ideally it would also check that they have actually read the instructions and know what they're supposed to be doing. So sometimes even if they're paying attention, they didn't get the instructions or something like that. There's a bunch of different ways of doing this. One way of doing it-- you know, I'm sure you guys can come up with your own ways; I'm just giving you some examples of the stuff that people I know have done. Toby, for example, who you've maybe seen doing some counterfactual stuff, just gives people a screen with the instructions and asks them some questions. And until they get the question right, they don't move on to the next screen. So he doesn't reject them later. He just says, in order to pass to the next screen, you have to answer this question correctly. And he has some way of checking that. That's been really good. Like once he implemented that, the data is much, much better and cleaner. Here's the sort of catch questions you don't necessarily want to do. They're very popular. You don't necessarily want to do them. They're things like, have you ever had a fatal heart attack. And the answer is, of course, no. Have you ever eaten a sandwich on Mars. It's the sort of thing where you're trying to catch people that are going through it very quickly and are just marking things randomly. One of the reasons you don't want to do that is because even if they're answering randomly yes or no, you'll still miss the 50% who just got it right by chance. The other reason is the standard stuff. I mean, I'm sure you guys could come up with something. But there's a lot of examples out there. The two examples I just gave you, the Martian, the fatal heart attack, this is stuff that gets used over and over and over again. And they sort of just know it. One of them said-- the person I told you about before, who was juggling kids and trying to answer at the same time-- he says, oh yeah, whenever I see the word vacuum, I know it's time for an attention check, because it's going to be like, have you ever eaten a sandwich in a vacuum, or something like that. But whenever I see vacuum, it's obviously an attention check. You don't want to do that. Ideally, you want to have something that relates to the task. So in one of our examples, we were doing some sort of Turing task. And we just wanted to say, like, here, complete the following sentence. You were playing the game against a-- and then it's an open text box. OK, some of these people have, like, automatic robots that fill it in. So they'll do something like, thanks. Or yes. Or something like that. Then they just hope that yes will match. But here the correct answer was robot. You were playing the game against a robot, or against a human, or something like that. Did people get that example? OK. So ideally, the good catch question is an open field, something that you can't just click and get right by mistake, and that relates to the instructions you were giving.
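Here is a minimal sketch of checking that kind of open-field catch question-- lenient enough that honest variants like "a robot" or "Robot." still pass; the field name is hypothetical.

```python
def passed_catch_question(response):
    """'You were playing the game against a ___' should mention the robot."""
    answer = response.get("catch_question", "").strip().lower()
    return "robot" in answer

# A pasted-in "thanks" or "yes" from an auto-filling script fails this check.
```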
This next one is not a tip. This is something you should do. Again, it's trivial. You'll have to do it. If you're thinking of running this at your own university and your university has never done Mechanical Turk before, get IRB approval specifically for Mechanical Turk. So just make sure you get IRB approval. Make sure you get informed consent at the beginning of your study, say, like, we're going to be doing this. If it's OK, click on this button that says, I agree. Usually the IRB will force you to do that anyway. And as I said, ethical pay, I just keep going back to that. OK, there are various helpful tools for running experiments. If any of you are interested in this-- reading more about it, much more in depth, how to actually run an experiment and things like that-- come talk to me afterwards. Or look at Todd Gureckis's website. Other websites that you should check out. These are the forums you should probably know about: TurkerNation, mTurkGrind, and TurkOpticon. These are useful to get a sense of how your task is doing-- are people sort of responding to it, saying something about it. It's also a good place to publicize your study. If I need 300 participants within two hours, I can put it on Turk and hope the pay is enough. Or I can put it on mTurkGrind. People there have liked our tasks before, and they'll give you a thumbs up, and you'll get 200 people within an hour just because they know about you and they know that you're an OK person, and you communicate. So it's a good practice to get a user name on one of these things, or both of them, when you run an actual experiment. Explain what you're doing. Put it on there. Be willing to answer questions. TurkOpticon is the usual thing that will then be used-- this is what one of these forums looks like, by the way. They're like, you know, oh it's terrible. It's Tuesday. What should we do today? Here's how much I've done today. Somebody says, I didn't see this posted anywhere. Very interesting to those of you who want to do 3D printing. Then there's, like, the title of the experiment and a description of it. And people can just click directly from there to the experiment. Usually the experiments that they post on the forums look something like this. They're getting these numbers off of that last website that I just mentioned, TurkOpticon. So usually when people want to rate you on Mechanical Turk, they're not going to complain to Amazon, because Amazon's not going to do anything about it. Like I said before, requesters have a lot of power and Amazon doesn't bother to arbitrate, usually. So one of the ways that they do have of either rewarding or punishing you is to go to TurkOpticon and give you a rating for communication, generosity, fairness, and promptness. And those numbers will then be used on most of the forums when you publish your task. People can see, like, these guys have actually cheated people before, or something like that. So you can go, once you've registered, go to TurkOpticon and you can check out your own ratings and things like that. So yeah, in summary, Mechanical Turk is a wonderful resource. I hope I haven't scared you away from it, those of you who are thinking about doing it. It amplifies everything. You can do a lot more a lot faster. But it doesn't get rid of concerns, it amplifies those concerns. So things like ethical concerns and payment concerns, and things like that-- the same sort of concerns that you would have in the lab-- those are included there, only magnified 100-fold because you're recruiting a lot more people. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_35_Josh_Tenenbaum_The_Child_as_Scientist.txt | JOSHUA TENENBAUM: So we saw in Laura's talk this introduction to both this idea of the child as scientist, all the different ways that children's learning seems to follow the many different practices that scientists use to learn about the world, not just the data analysis, in a sense. As much as various kinds of statistics on a grand scale-- whether it's Hebbian learning or backprop-- are important for learning, we know that that's not all that children do, just like analyzing patterns of data is not all that scientists do. And then Laura added this other cool dimension, thinking about the costs and rewards of information, when is it worth it or not. And you could say, maybe she was suggesting that we should develop the metaphor, expand it a little bit from child as scientist to maybe like the child as-- oh, oops, sorry-- lab PI or maybe even NSF center director. Because as everyone knows, but Tommy certainly can tell you, whether you're a lab PI who's just gotten tenure or the director of an NSF Center, you have to make hard-nosed pragmatic decisions about what is achievable given the costs and which research questions are really worth going after and devoting your time and other resources to. And that's a very important part of science. And it's an important part of intuitive knowledge. I want to add another practical dimension to things, a way of bringing out, fleshing out this idea of the child as scientist. You can think about all of these-- these are metaphors. But they're things that we can formalize. And if our goal is to make computational models of these things, and ultimately to get some kind of theoretical handle on them, then these metaphors are helping us. By adding in the costs and benefits, you bring in utility calculus. And there's not just naive utility calculus, there's a formal mathematical utility calculus of these kinds of decisions. Julian Jara-Ettinger, who Laura mentioned, was driving a lot of the work towards the end of the talk, has done some really interesting actual mathematical computational models of these issues, as have [? Choi ?] and the other students she talked about. So the direction I want to push here is what you might call the child as hacker. This is trying to make the connection back to this idea of formalizing common sense knowledge and intuitive theories as probabilistic programs. Or just more generally, the idea of a program-- a structured algorithm and a data structure, or some combination of algorithms, data structures, networks of functions that can describe interesting causal processes in the world, like, for example, your intuitive physics or your intuitive psychology. That's an idea that we talked a lot about last week, or whenever it was, earlier in the week or last week. And this idea that if your knowledge is something like a program, or a set of programs, that you can, say, for example, run forward to simulate physics, like we had last time, then learning has to be something like building a program or hacking. And I think you could make-- this is, again, a research program that I wish I had. Like Laura talked at the end about the research she wished she had.
It's not really just a wish for her. She actually is working towards that research program. And this isn't just an empty wish either. It's something that we're working on, which is to try to take-- just as Laura had that wonderful list of all the things scientists do in their practices to learn about the world, I think you could make a similar list of all the things that you do in your programming or hacking. By hacking I don't mean like breaking into a secure system, but modifying your code to make it more awesome. And I use awesome very deliberately, because awesome is a multi-dimensional term. It's just awesome. But it could be faster, more accurate, more efficient, more elegant, more generalizable, more easily communicated to other people, more easily modularly combined with other code to do something even more awesome. I think there's a deep sense in which that aesthetic behind hacking and making awesome code, in both an individual and a social setting, that's a really powerful way to think about many of the cognitive activities behind learning. And it goes together with the idea of the child as scientist if the form of your, quote, "intuitive science" is computer programs or something like programs. So we've been working on a few projects where we've been trying to capture this idea and to say, well, what would it mean to describe computationally this idea of learning as either synthesizing programs, or modifying programs, or making more awesome programs in your mind. And I'll just show you a few examples of this. I'll show you our good, our successful case studies, places where we've made this idea work. But the bottom line, to foreshadow it, is that this is really, really hard. And to get it to work for the kinds of knowledge that, say, Laura was talking about or that Liz was talking about, the real stuff of children's knowledge, is still very, very open. And we want to, basically, build up to engaging with why that problem is so hard, from this to what Tomer and then Laura will talk about later on in the afternoon. But here are a few at least early success stories that we've worked on. One goes back to this idea that I presented in my lectures last week. And it connects to, again, something that Laura was saying. Here, a very basic kind of learning. It's just the problem of learning some generalizable concepts at all from very sparse evidence. Like one-shot learning-- again, something you heard from Tommy and a number of the other speakers. We've all been trying to wrap our heads around this. How can you learn, say, any concept at all from very, very little data, maybe just one or a few examples. So you saw this kind of thing last time. And I briefly mentioned how we had tried to capture this problem as something like this by building this tree-structured hypothesis space. And you could think of this as a kind of program induction. If you think that there's something like an evolutionary program which generated these objects, and you're trying to find the subprocedure of it that generated just these kinds of objects. But that's not at all how we were able to model this. We had a much simpler model. But let me show you briefly some work that we did in our group a couple of years ago. It's really just getting out into publication now. This is work that was mostly done by two people-- Ruslan Salakhutdinov, who is now a professor at Toronto, although about to move to Carnegie Mellon, I think, and Brenden Lake. He's a machine learning person, also very well known for deep learning.
And then Brenden Lake-- it's really mostly Brenden Lake's work that I'll talk about-- who is now a post-doc at NYU. And again, where we think we're building up to is trying to learn something like the program of an intuitive physics or intuitive psychology. But here we're just talking about learning object concepts. And we've been doing this work with a data set of handwritten characters, the ones you see on the right here. I'll just put it up in contrast or by comparison to, say, this other much more famous data set of handwritten characters, the MNIST data set. How many people have seen the MNIST data set, maybe in some of the previous talks? How many people have actually used it? Yeah, it's a great data set to use. It's driven a lot of basic machine learning research, including deep learning. Yann LeCun originally collected this data set and put this out there. And Geoffrey Hinton did most of the development. The stuff that now wins object recognition challenges was done on this data set. But not only that. Also a lot of Bayesian stuff and probabilistic generative models. Now, the thing about that data set, though, is it has a very small number of classes, just the digits 0 through 9, and a huge number of examples, roughly 10,000 examples in each class, or maybe 6,000 examples, something like that. But we wanted to construct a data set which was similar in some ways in its complexity and scale, but where we had many, many more concepts and, perhaps, many fewer examples. So here we got people to write by hand characters in 50 different alphabets. And it's a really cool data set. So that total data set has 1,623 concepts drawn. You could call them handwritten characters. You could just call them simple visual concepts, as a sort of warm-up for bigger problems of, say, natural objects. And there's 20 examples per class. So there's roughly 30,000 total data points in this data set, very much like MNIST. You can see, just to illustrate here, there's many different alphabets that have very different forms. You can see both the similarities and differences between alphabets here. So in that sense, there's kind of a hierarchical structure. Each one of these is a character in an alphabet. But there's also the higher level concept of a sort of a Sanskrit form, as distinct from, say, Gaelic, or Hebrew, or Braille. There's some made-up alphabet. But one of the neat things about this domain is that you can make up new concepts, and you can make up whole concepts of concepts, like whole new alphabets. You can do one-shot learning in it. So let's just try this out here for a second. You remember the tufa demo. We can do the same kind of thing here. Like let's take these characters. Anybody know the alphabet that this is? OK, that's good. Most of you have not seen these before. That's good that you know. But we'll run this experiment on the rest of you. So here's one example of a concept. Call it a tufa if you like. And I'll just run my mouse over these other ones. And you just clap when I get to the other example of the same class, OK? [SOUND OF CLAPS] OK, very good. Yeah, people are, basically, perfect at this. It doesn't take-- I mean, again, it's very fast and almost perfect. And again, you saw me talk a little about this last time. Just like with natural objects, not only can you learn one of these concepts from one example and generalize it to others, but you can use that knowledge in various other ways. So you can parse these things into parts.
We think that's part of what you're doing. You can generate new examples. So here are three different people all drawing the same character. And in fact, the whole data set was generated that way. You can also make higher level generalizations, recombining the parts into totally new concepts, the way there's that weird kind of like unicycle thing over there, unimotorcycle. Here, I can show you 10 characters in a new alphabet, and you can make up hypothetical, if perhaps incorrect, examples in it. Again, I'm just going to show you a couple of case studies of where this idea of learning as program synthesis might work. So the idea here is that, as you might see, these are three characters down on the bottom. And this is just a very schematic diagram of how our model tries to represent these as simple kinds of programs. Think about how you would draw, say, that character down at the bottom. Just try to draw it in midair. How would you draw that one in the lower left there? Are many of you doing something like this? Is that what you are doing? OK, yeah. So basically, everyone does that. And you can describe that as sort of having two large parts or two strokes, where you pick up your pen between strokes. And one of the strokes has two substrokes, where you stop your pen. And there's a consistent relationship. The second stroke has to begin somewhere in a particular general region of the first stroke. And basically, that's the model's representation of concepts-- parts, subparts, and simple relations-- which, you can see, might scale up, arguably, to more interesting kinds of natural objects. And the basic idea is that you represent that, though, as a program. It's a generative program. It's kind of like a motor program. But it's more abstract. We think that when you see these characters and many other concepts, you represent something about how you might create it. But it doesn't mean it's in your muscles. You could use your other hand. You could use your toe. Or you could even just think about it in your imagination. So the model, basically, tries to induce these simple-- think about them as maybe simple hierarchical plans, simple action programs. And it does it by having a program generating program that can itself have parameters that can be learned from data. So this right here, this is a program called GenerateType. And what that does is it's a program-- a type means a character concept, like each of those three things is a different type. This is a program which generates a program that generates the actual character. The second level of program is called GenerateToken. That's a program which draws a particular instance of a character. And just like you can draw many examples of any concept, you can call that function many times-- GenerateToken, GenerateToken, GenerateToken. So your concept of a character is a generative function. And in order to learn this, you have, basically, a prior on those programs that comes from a program generating program. That's the GenerateType program. So there's a lot of details behind how this works. But basically, the model does a kind of learning-to-learn from a held-out unsupervised set and learns the parameters of this program generating program, which would characterize how we draw things in general, what characters look like in general. And then, when you see a new character, like this one, effectively, what the model is doing is it's both parsing this into its parts, and subparts, and relations.
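To make the two-level structure concrete, here is a heavily simplified, runnable toy in Python-- not the actual model, whose stroke primitives and priors are learned from data-- in which the concept returned by generate_type is itself a program you can call repeatedly:

```python
import random

def generate_type():
    """Sample a concept: a few strokes, each a short list of control points."""
    n_parts = random.choice([1, 2, 3])
    strokes = [[(random.random(), random.random()) for _ in range(3)]
               for _ in range(n_parts)]

    def generate_token(noise=0.02):
        """One drawing of this concept: same strokes, fresh motor noise."""
        return [[(x + random.gauss(0, noise), y + random.gauss(0, noise))
                 for x, y in stroke] for stroke in strokes]

    return generate_token

concept = generate_type()               # the concept is itself a program
tokens = [concept() for _ in range(3)]  # three drawings of one character
```

Seeing a new character and inferring its parts, subparts, and relations then amounts to searching for the generative program under this prior that best explains the image.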
But that parsing is, basically, the program synthesis. It is pretty much the same thing. You're constructing-- you're looking at the output of some program and saying, what would be the best simple set of parts, and subparts, and relations that could draw that? And then I'm going to infer the most likely one, and then use that as a generalizable template or program I can then generate other characters with. So here, maybe to just illustrate really concretely, if you were to see this character here-- well, here's one instance of one class. Here's an instance of another class. Again, I have no idea which alphabet this is. Now, what about this one? Is it class 1 or class 2? What do you think? 1, yeah. Anybody think it's class 2? OK. So how do we know it's class 1? Well, at the pixel level, it doesn't look anything like that. So this is, again, an example of some of the issues that Tommy was talking about-- a really severe kind of invariance. But it's not just translation or scale invariance, although it does have some of that. But it also has this kind of interesting within-class invariance. It's a rather different shape. It's been distorted somewhat. For a program, there's a powerful way to capture that, where you can say, well, if you would do something like the program for generating this, which is like one stroke like that and then these other two things shown with the red and green, and here's a program that you might induce to generate that. And then the question is, which of these two programs, simple hierarchical motor programs, is most likely to generate that character? Now, it turns out that it's incredibly unlikely to generate any character from one of these programs. These are the log scores, the log probabilities. So this one is like 2 to the negative 758. And this one is like 2 to the negative 1,880. I don't know if it's base e. It's maybe 2 or e, but whatever. So each of these is very small. But this one is like 1,000 orders of magnitude more likely than that one. (The ratio is 2 to the 1,122-- about a thousand orders of magnitude if you count in factors of 2, a few hundred in base 10.) And that makes sense, right? It just is easier to think intuitively about generating this shape from that distortion. So that's basically what the system does. And it's able to do this remarkable thing that you were able to do too-- this one-shot learning of a concept. Here's just another illustration of this. We show people one example of a new character in an alphabet they don't know and ask them to pick out the other one. Everybody see where it is here? It's not that easy, but it's doable. Down here, right. So people are better than 95% correct at this. This is the error rate. So the error rate is less than 5% for humans and also for this model. But for a range of more standard deep learning models, this one here is, basically, like an ImageNet or MNIST-type one. So this is the kind of model that is really a sort of massive convolutional classifier. The best deep learning one for this problem is actually what's called a Siamese ConvNet. And that can do somewhat better. But it's still more than twice as bad as people. So we think this is one place where, at least in a hard classification problem, you can see that deep learning still isn't quite there. Whereas this-- even the best thing-- this was a network that was, basically, specifically worked out by one of Ruslan's students for about a year to solve exactly this problem on this data set.
And it substantially improved over a standard deep learning classifier, which substantially improved over a different deep learning model that Ruslan and I both worked on. So there's definitely been some improvement here. And never bet against deep learning. I can't guarantee that somebody, if they spend their PhD on it, couldn't work out something that could do this well. But still, it's a case which still has some room to push, where, for example, just a pure pattern recognition approach might go. But maybe more interesting is, again, going back to all the things we use our knowledge for, that kids might use their knowledge for. We don't just classify the world. We understand it. We generate new things. We imagine new things. So here's a place where you can use your generative program in a way that none of these networks does, at least by nature. Maybe you could think of some way to get them to do it. And this is to say, not just classify, but produce, imagine new examples. So here's an illustration of this, where we gave people an example of one of these new concepts. And then we said, draw another example of the same concept. Don't just copy it. Make up another example of the concept. And what you can see here is a set of nine examples that nine different people did in response to that query. And then you can also see on the other side nine examples of our program doing that. Can anybody tell which is the people and which is the program? Let's try this out. So which is the machine for this character, the left or the right? How many people say the left? Raise your hand. How many people say the right? About 50-50, very good. How many people say this is the machine for this one? How many people say this is the machine? Maybe a slight preference there. How many people say this is the machine? How many people say this is the machine? How many people say this is the machine? Some people really like the left. How many people say that's the machine? Basically, it's 50/50 for all of them. Here's the right answer. I don't know, you could decide if you were right or not. I don't know. Here's another set. Again, I hope it's clear that this is not an easy task. And in fact, people are, basically, at chance. We've done a bunch of studies of this. And most people just can't tell. People on average are about 50% correct. You basically just can't tell. So it's an example of a kind of Turing test that a certain interesting program-learning program is passing. At a level that's confusable with humans, this system is able to learn simple programs for visual concepts. And not just classify, but use them to create new things. You can even create new things at this higher level that I mentioned. So here, the task, which, again, people and machines are roughly similar on, is to be given 10 examples each of a different concept in a higher level concept, like an alphabet, and then draw new characters in that alphabet. And we give people only a few seconds to do this. So they don't get too artistic. But again, you can see that the machine is able to do this. People are also kind of similar. So let me say, that was a success story, a place where the idea of learning as program induction kind of works. What about something more like what we're really most deeply interested in-- children's learning? Like the ability, for example, to, say, understand goal-directed action. These cases we've talked a lot about. Or intuitive physics-- again, cases we've talked about.
And it's part of our research program for this center, something we'd love all of you guys, if you're interested, to help work on. It's a very big problem: how do you characterize the knowledge that kids are learning over the first few years and the learning mechanisms that build it, which we'd like to think of in some similar way? Like, could we say there's some intuitive physics program and intuitive-physics-program-learning programs that are building out knowledge for these kinds of problems? And we don't know how to do it. But again, here are some of the steps we've been starting to take. So this is work that Tomer did as part of his PhD, and it's something that he's continuing to do with Liz and others as part of his post-doc. So we're showing people-- again, it's much like what you saw from me and from Laura. We're really interested in learning from sparse data. Because all the data is sparse in a sense. But in the lab, you push things to the limit. So you study really sparse things, like one-shot learning of a visual concept. Or here, we've been interested in what you can learn about the laws of physics from just watching that for five seconds. So we show people videos like this. Think of this as like you're watching hockey pucks on an air hockey table. So it's like an overhead view of some things bouncing around. And you can see that they're kind of Newtonian in some sense. They bounce off of each other. Looks like there's some inertia, inertial collisions. But you might notice that there's some other interesting things going on that are not just F equals m a, like other interesting kinds of forces. And I'll show you other ones. Tomer made a whole awesome set of these movies. Hopefully, you've got some idea of what's going on there. Like interesting forces of attraction and repulsion, different kinds of things. So here, each of those can be described as a program. And here's a program-generating program, if you like. So it's the same kind of idea as in the handwritten character model I showed you. It's not like it's learning in a blank slate way from scratch. It knows about objects, parts, and subparts. What it has to learn is, in this domain of handwritten characters, what the parts and relations are like. And then for the particular new thing you're learning, like this particular new concept, what its particular parts and relations are. So there are these several levels of learning, where the big picture of objects and parts is not learned. And then the specifics for this domain of handwritten characters, the idea of what strokes look like-- that's learned from sort of a background set. And then your ability to do one-shot learning, or learning from very sparse data, of a new concept takes all that prior knowledge, some of which is wired in, some of which is previously learned, and brings it to bear to generate a new program very sparsely. So you have the same kind of thing here. We were wiring in, in a sense, F equals m a, the most general laws of physics. And then we're also wiring in the possibility that there could be kinds of things and forces that they exert on each other, as some kinds of things exert other kinds of forces on others. And that there could be latent properties, things like mass and friction. And then what the model is trying to do is, basically, to learn about these particular properties. What's the mass of this kind of object? What's the friction of this kind of surface? Which objects exert which kind of forces on each other? The sketch below walks through a toy version of this kind of inference.
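As a toy version of the parameter estimation this model does, here is a minimal Python sketch: simulate a sliding puck under candidate friction values and score each candidate by how well its simulated trajectory matches a noisy observed one. The simulator, the numbers, and the Gaussian-style scoring are all assumptions for illustration, not the actual model.

```python
import numpy as np

def simulate(friction, v0=2.0, dt=0.1, n=50):
    # Forward-Euler rollout of a puck decelerated by sliding friction
    # (deceleration is mu * g, so mass conveniently drops out here).
    x, v, xs = 0.0, v0, []
    for _ in range(n):
        v = max(0.0, v - friction * 9.8 * dt)
        x += v * dt
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
observed = simulate(friction=0.3) + rng.normal(0, 0.02, 50)  # "five seconds" of data

# Score each candidate friction by squared error of its simulated trajectory
# against the observation -- inference as simulation plus comparison, over a
# fixed space of parameters rather than a space of new laws.
candidates = (0.1, 0.2, 0.3, 0.4, 0.5)
scores = {f: -np.sum((observed - simulate(f)) ** 2) for f in candidates}
print(max(scores, key=scores.get))  # recovers friction ~0.3
```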
And is there something like gravity pulling everything to the left, or the right, or down? What this is showing here is the same kind of plot we saw from me last time. It's a plot of people versus model, based on a whole bunch of different conditions of the sort you saw. People are judging these different physical properties. And they're making graded judgments of how likely it is, basically, to have one of these properties or another. And there's the model on the x-axis, people on the y-axis. And what you can see is a sort of OK, decent fit. We characterize this experiment as a kind of mixed success. I mean, it's sort of shocking people can learn anything at all. Like, how much could you learn about the laws of physics from five seconds of observation? Well, it's also kind of shocking that Newton could learn about the laws of physics by just looking at, you know, in the history of the universe, about five seconds or less worth of data that people had collected for the planets going around. So it is the nature of both science and intuitive theory building that you can get so much from so little. But people are not Newton here. They're just using intuition. They're making quick responses. And they're OK. There's a correlation, but it's not perfect by any means. One of the things that we're working on right now is looking at, say, what happens if you can, unlike, say, Newton, go in and actually intervene and push these planets around. Hopefully you'll do better. But stay tuned for that. The basic thing here, though, is that people can learn something from this. But the way our model works is not very satisfying for us as a view of program induction or program construction. Because we think it just knows too much, or has, basically, all the form of the program. And it's estimating some parameters. It's like one of the things you do as a hacker, as a coder: you have your code and you tune some parameters. Or you try to decide if this function is the right one to use or that one. And this is doing that. But nowhere is this actually writing new code, in a sense. And that's just the really hard problem that I wanted to mostly leave you with, to set up what we're going to do for the rest of the afternoon. Like, if you wanted to not just tune the parameters and figure out the strength or existence of different forces, but actually write the form of laws, how would you do this? What's the right hypothesis space? So you'd need programs that don't just generate programs but actually write the code of them in a sense. And what's an effective algorithm for searching the space of these theories? It's very, very difficult. I think-- Tomer, are you going to show this figure at all? Yes, so mostly I'll leave this to Tomer. But there's a very striking contrast between the nice optimization landscapes for, say, neural networks or most any standard scalable machine learning algorithm, whether it's trained by gradient descent or convex optimization, and the kinds of landscapes for optimization and search that you have if you're trying to generate a space of programs. If you want to see our early attempts to try to do something like learning the form of a program, look, for example, at stuff that Charles Kemp did. Part of his thesis was published in PNAS a few years ago, where he tried to generate, or, basically, have-- think of generative grammars for graphs. Think about the problem-- so Laura mentioned Darwin.
How did Darwin figure out something about evolution without understanding any of the mechanisms? Or the more basic problem of figuring out that species should be generated by some kind of branching tree process versus other kinds. Remember last time when I talked about various kinds of structured probabilistic models, tree structures, or spaces, or chains for threshold reasoning. So Charles did some really nice work, basically trying to use the idea of a program for generating graphical models, like there's a grammar that grows out graphs. And he showed how you could take data drawn from different domains, like, say, those data sets you saw before of animals and their properties. We spent an hour on that last time. So Charles showed how you could induce not only a tree structure but the higher level fact that there is a tree structure. Namely, a rule that generates trees being the right abstract principle to, say, give you the structure of species in biology, whereas other rules would generate other kinds of structures. So for example, he took similar data matrices for how Supreme Court judges voted and was able to infer a left-right, liberal-conservative spectrum. Or data on the proximities between cities, and figure out a sort of cylinder, like a latitude and longitude map of the world, just from the distances between cities. Or take faces and figure out a low dimensional space as the right way to think about faces. So in some sense, this was really cool. We were really excited. Oh, hey, we have a way to learn these simple programs which generate structures, which themselves generate the data. It's where that idea of hierarchical Bayes meets up with this idea of program induction or learning a program. And it even captured-- OK, this is really the last slide I'll show. It even captured something which captured all of our imaginations. We use this phrase "the blessing of abstraction" to tie back into one more theme of Laura's, which is this idea that when kids are building up abstract concepts, there's a sense in which, unlike, say, a lot of maybe traditional machine learning methods or a lot of traditional ideas in philosophy about the origins of abstract knowledge, it's not like you just get the concrete stuff first and layer on the more abstract stuff. There's a sense often, in children's learning as in science, in which the big picture comes in first. The abstract idea comes there, and then you fill in the details. So for example, Darwin figured out, in some sense, the big picture. He figured out the idea that there was some kind of branching process that generated species that was random. Not a nice perfect Linnaean seven-layer hierarchy but some kind of random branching process. And he didn't know what the mechanisms were that gave rise to it. And similarly, Newton figured out something about the law of gravitation and everything else in his laws, though he didn't know the mechanisms that gave rise to gravity. And he didn't even know g. He didn't even know the value of the gravitational constant. That couldn't be estimated until 100 years later. But somehow he was able to get the abstract form. And these nice things that Charles Kemp did were also able to do that. So for example, from very little data, to figure out that animals should be generated by some kind of a tree structure, as opposed to, say, the simpler model of just a bunch of flat clusters. That model was able to figure that out, over here on the right, from just a small fraction of the data.
And then with all the rest of the data, it was able to figure out the right tree in a sense. And we called this "the blessing of abstraction," this idea that often, in these hierarchical program learning programs, you could get the high level idea before you got the lower level idea and then fill in the details. And I still think there's something fundamentally right about this idea of children's learning, both representationally and mechanistically. And that this dynamics of sometimes getting the big picture first and using that as a constraint to fill in the details is fundamentally right. But actually, understanding how this-- either algorithmically-- how to search the space of programs for anything that looks like an intuitive causal theory of physics and relate that to the dynamics of how children actually learn. That's the big open question that I will now hand over to our other speakers. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_5_Tom_Mitchell_Neural_Representations_of_Language.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TOM MITCHELL: I want to talk about some work that we're doing to try to study language in the brain. Actually, to be honest, this is part of a grander plan. So here is what I'm really doing with my research life. I'm interested in language, and so I'm involved in two different research projects. One of them is to build a computer to learn to read. And we have a project which we call our Never Ending Language Learner, which is an attempt to build a computer program to learn to read the web. NELL, we call it, has been running nonstop, 24 hours a day since 2010. So it's now five years old. If you have very good eyesight, you can tell that everybody's t-shirt there in the group is wearing a NELL fifth birthday party t-shirt. But it's an effort to try to understand what it would be like to build a computer program that runs forever and gets better every day. In this case, its job is to learn to read the web. It is getting better. It currently has about 100 million beliefs that it has read from the web. It's learning to infer new beliefs from old beliefs. It's a better reader today than it was last year. It was better last year than it was the year before. It's still not anything like as competent as you and I, but it's one line of research that you can follow if you're interested in understanding language understanding. The other thread, which is what I'm going to talk about tonight, which is in the bottom half here, is to study how the brain processes language by putting people in brain imaging scanners of different types, and showing them language stimuli, and getting them to read. So I'm going to focus really on the bottom part. But I can't really talk about this honestly unless I fess up to the fact that my goal is for these two projects to collide in a monstrous collision. They haven't yet, although you'll see some signs, I hope, tonight, of some of the cross-fertilisation between the two areas. When it comes to the brain imaging work, we have a very great team of people. One of them, Nicole Rafidi, is sitting right here. Some of you have already met her this week. And so what I'm going to present is really the group work of quite a few people. And the idea is simple, but here's the brainteaser. Suppose you're interested in how the brain processes language, and you have access to some scanning machines, then what would you do? And so we started out by showing people in a scanner stimuli like these. Maybe single words, initially nouns like camera, and drill, and house, and saw. Sometimes pictures, sometimes pictures with words under them. But just showing people stimuli to get them to think about some concept. And then we collect a brain image, like this one, which we collected when a person was looking at this particular stimulus, a bottle. And this is posterior, this is the back of the head on top. This is the front of the head at the bottom here. And these four slices are four out of about 22 slices of the brain that make up the three dimensional image. 
And so you can see here what the brain activity looks like-- kind of blotchy-- when one particular person thinks about bottle. So you might ask, what does it look like if they think about something else? Well, I can show you what it looks like on average. If we average over 60 different words, then here's the brain activity. And you can see that it looks a lot like bottle, but maybe there are some differences. And in fact if I subtract out this mean activity from the brain image we get for bottle, then you can see the residue here. There are in fact some differences in the activity we see for bottle compared to the mean activity over many words. Whether that's signal or noise, I guess you can't tell by looking at this picture. But that's the kind of data that we have if we use fMRI to capture brain activity while people read words. So the first thing you might think of doing if you had this kind of data would be to train a machine learning program to decode from these brain images which word somebody is thinking about. And so we, in fact, began that way by training classifiers where we'd give them a brain image. And during training time we would tell them which word that brain image corresponds to. And then after training we could test the classifier to see whether indeed it had learned the right pattern of activity by showing it new brain images and having it tell us, for example, is this person reading the word hammer or bottle. And, in fact, that works quite well. And, in fact, if you try it over several different participants in our study, you can see we get classification accuracies for a Boolean classification problem-- are they reading a tool word like hammer, saw, chisel, or a building word like house, palace, hotel-- in the high 90s percent, or a little worse, depending on the individual person. In fact, if you ask why it's not the same for all people, it turns out the accuracy that we get correlates very well with a measure of head motion in the machine. So a lot of this is noise. But the bottom line here is good. fMRI actually has enough resolution to resolve the differences in neural activity between, say, thinking about house versus hammer. And machine learning methods can discover those distinctions. So that's a good basis. And so given that, you can start asking a number of interesting questions. Like we could ask, well, what about you and me? Do we have the same pattern of brain activity to encode hammer, and house, and all the other concepts? Or does each of us do something different? And we can convert that into a machine learning question, right? We could say, well, what if we train on people on that side of the room. We'll collect their brain data and train our program. Then we'll collect data from these people and try to decode which word they're reading based on the patterns that we learned from those people. If that works, then that's overwhelming evidence that we have very similar neural encodings of different word meanings. So we tried that and, in fact, it works. In fact, here you see in black the accuracies, just like on the first slide, of how well we can decode which word a person is reading, if we train on data from the same person we're testing on. But in white you see the accuracies we get if we train on no data at all from this person, but instead train on the data from all the other participants. And you see on average we do about as well with the white bars as we do with the black bars.
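Here is a minimal sketch of that leave-one-subject-out test, with synthetic data standing in for fMRI (a shared class pattern plus per-subject noise); real data would of course need anatomical alignment across brains, and the classifier choice here is just an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 9, 60, 500

# Synthetic stand-in: a pattern shared across subjects separates the two
# classes (say, tool words vs. building words), buried in per-subject noise.
shared = rng.normal(size=n_voxels)
X, y, subj = [], [], []
for s in range(n_subjects):
    labels = rng.integers(0, 2, n_trials)
    X.append(labels[:, None] * shared + rng.normal(scale=2.0, size=(n_trials, n_voxels)))
    y.append(labels)
    subj.append(np.full(n_trials, s))
X, y, subj = np.vstack(X), np.concatenate(y), np.concatenate(subj)

# Leave-one-subject-out: train on everyone else, test on the held-out person.
for s in range(n_subjects):
    train, test = subj != s, subj == s
    acc = LogisticRegression(max_iter=1000).fit(X[train], y[train]).score(X[test], y[test])
    print(f"held-out subject {s}: accuracy {acc:.2f}")
```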
In fact, in some cases we do better training on other people. That might be, for example, because we get to use more training examples. We get to use all the other participants' data instead of just one participant's data. But again, the important thing here is, this is very strong evidence that, even though we're all very different people, we have remarkably similar neural encodings when we think about common nouns. Which is something that, say in the year 2000, I don't think anybody understood. So I want to kind of wrap up this idea. So I want to go through basically four ideas in this talk. Idea number one is, gee, we could train classifiers to try to decode from the neural activity which word a person is reading. And if we do that, then we can actually ask some interesting scientific questions, like are the patterns similar across our brains? Does it depend whether it's a picture or a word? And, in fact, we can think of this technique of training a classifier as-- the way I think of it is it's a way of building a virtual sensor of information content in the neural signal. So I think that fMRI was truly a revolution in the study of the brain, because for the first time we could look inside and see the activity. But I think these classifiers give us a different thing. Now we can look inside and see not just the neural activity, but the information encoded in that neural activity. And so it's a different kind of sensor. And you can design your own and train it, and then use it to study information represented in the neural signal in the brain. So it kind of opens up a very large set of methods, and techniques, and experiments that we can now run with brain imaging. Where instead of looking just at the activity, we now can look at the information content. OK, so that's idea number one. We were quite pleased with ourselves doing this work. But in the back of our mind was kind of a gnawing question of, well, this is good, now maybe we've trained on a couple of hundred words, so we have a couple hundred different neural patterns of activity. We have kind of a list of the neural codes for a couple of hundred words, but that's not really a theory of neural encodings of meaning. It's a list. What would it mean to have a theory? Well, scientific theories are logical systems that can make predictions. And if they're interesting theories, they make experimentally testable predictions. So in our case, it would be nice, if we want to study representations of meaning, to have a theory where we could input an arbitrary noun and get it to predict for us what would be the neural representation for that noun. At least that would be better than a list. That would be a generative theory or model. And so we're interested in this. And we worked on this for a while, and our first version looked like this. It's a computational model that was trained. And once it's trained, it would make a prediction for any input word, like telephone, in two steps. Step one, if you gave it a word like telephone, for example, it would look up the word telephone in a trillion words of text collected from the web and represent that word by a set of statistics about how telephone is used. In our case, statistics about which verbs co-occurred with that noun. And then in the second step, it would use that vector, which approximates the meaning of the input noun, as the basis for predicting, in each of 20,000 locations in the brain, how much activity will there be there.
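Here is a minimal numpy sketch of that second step; the next paragraph spells out the same linear form in words. The prediction at voxel v is a sum over verbs of the noun's co-occurrence statistic times a learned coefficient; all the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nouns, n_verbs, n_voxels = 58, 25, 20000

F = rng.random((n_nouns, n_verbs))                   # verb statistics per training noun
C_true = rng.normal(size=(n_verbs, n_voxels))        # "true" coefficients (synthetic)
Y = F @ C_true + rng.normal(scale=0.1, size=(n_nouns, n_voxels))  # observed images

# Learn the 25 x 20,000 coefficient matrix by least squares: one linear
# regression per voxel, solved jointly. C_hat[i, v] says how much verb i
# drives voxel v.
C_hat, *_ = np.linalg.lstsq(F, Y, rcond=None)

# Predict the image for a held-out noun from its verb statistics alone.
f_new = rng.random(n_verbs)                          # e.g. celery's verb vector
predicted_image = f_new @ C_hat                      # one value per voxel
```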
So let me push on that a little bit. So I say in step one, we look up, for a word like celery, which verbs it occurs with. Well, here are the statistics that we get. This is normalized to be a vector of length 1. But you can see for celery the most common verb is eat. And taste is second most common. But celery doesn't occur very often with ride. On the other hand, airplane occurs a lot with ride, and not very much with manipulate and rub. So these are the verb statistics extracted from the web for two typical nouns. And step one of the model is just to collect statistics for whatever noun we give it to make the prediction. Step two is then to predict, at each location in the brain, what the neural activity will be there, the fMRI activity, as a function of those statistics we just collected. So for the word celery, now we know it occurs 0.84 with eat and 0.35 with the verb taste. We're now going to make a prediction at this voxel. In particular, the prediction at voxel v is the sum, over those 25 verbs that we're using, of how frequently verb i occurs with the input noun, celery in this case, times some coefficient that we have to learn from training. And this coefficient tells us how voxel v is influenced by co-occurring with verb i. And we have 25 verbs and 20,000 voxels, so we have 500,000 of these coefficients to learn. We learn them by taking nouns and collecting the brain data-- the same data we used to train those classifiers. So we have a collection of nouns and the corresponding brain images. For each of those nouns we can look up the verb statistics. And then we can train on that data to estimate all these half million coefficients. When you put those coefficients together, say, for eat-- this is actually a plot of the coefficient values. Here's one of those coefficients for the verb eat in a particular voxel right there. So you can think of the coefficients associated with each verb as forming a kind of activity map for that verb. And a weighted linear sum of those verb-associated activity maps gives us a prediction for celery. You could ask, how well do these predictions work? One way I could answer that is to show you here, when we trained on 58 other nouns, not including celery, not including airplane. And then we had the system predict these novel, to it, words. Celery, it predicted this image. Airplane, it predicted this image. Unbeknownst to it, here are the actual observed images for celery and airplane. So you can see it correctly predicts some of this structure-- this is, by the way, fusiform gyrus-- but not all the structure. So it captures some of what's going on. I can, in a more quantitative way, tell you how well it's working. We can test the program this way. We can say, here are two words you have not seen. Here are two images you have not seen. One of them is celery, one is airplane. You, the program, tell me which is which. If it was just working at chance, it would get an accuracy of 50%. If you just guess randomly, you'll get half of those right by chance. In its case, averaged over nine different subjects in the experiment, we get 79% accuracy. So what does this mean? What this means is, three times out of four, 79%, we could give this trained model two new nouns that it has never seen, two fMRI images for those nouns, and it could tell us three times out of four which was which. So this model is extrapolating beyond the words on which it was trained. And it's extrapolating, not perfectly, but somewhat successfully, to other nouns. Now, why?
What's the basis on which it's doing that extrapolation? What are the assumptions built into this model? Well, for one thing, it's assuming that you can predict the neural representation of any word based on corpus statistics summarizing how that word is used on the web. Furthermore, it's assuming that any noun you can think of has a neural representation which lives in a 25-dimensional vector space, where each dimension corresponds to one of those 25 verbs. And every image is some point in this 25-dimensional vector space. That's what that linear equation is doing when it's combining some weighted combination of these 25 axes to predict the image. So, I don't actually believe that everything you think lives in a 25-dimensional space where the dimensions are those verbs. But the interesting thing is that the model works. And so it does mean that there is some more primitive set of meaning components out of which these neural patterns are being constructed. It's not just a big hash code where every word gets its own pattern. If that were the case, we wouldn't be able to extrapolate and predict new ones by adding together these different 25 components. So patterns are being built up out of more primitive semantic components. And this model is crudely, only 79%, capturing some of that substructure that gets combined when you think about an entire word. And the substructure are the different meaning components. The point here, I think, is, here's a model that's different from training a classifier. This is actually a generative model. It can make predictions that extrapolate beyond the training words on which it was trained. It is assuming that there is a space of semantic primitives out of which the patterns of neural activity are built. And it is assuming that that space is at least spanned by the corpus statistics of the noun. And since then, we've extended this work, and we no longer use just that list of 25 verbs. We actually use a very high 100-million-dimensional vector, which is generally very sparse, but where every feature comes from a much more precise parse of text on the web. And for example, when I say parse, I mean if we have a simple sentence like, he booked a ticket, this would be a dependency parse. It's showing, for example, that booked is a verb whose subject is he and whose direct object is ticket. And now each of these edges in the parse becomes a feature in our new representation of the word. So instead of using verbs, we use dependency parse features. And this actually increases slightly the accuracy of our former model from 79 up a little bit. But importantly, it also lets us work with all parts of speech. So now we're not restricted to just using nouns. We can use these dependency parse vectors for adjectives and all parts of speech. So in terms of broadening the model to be able to handle different types of words, this is helpful. So at this point you could say, well, this is kind of interesting, because what have we seen? I think the main points so far are, gee, different people have very similar patterns of neural activity that their brains use to encode meaning. Furthermore, those patterns of neural activity decompose into more primitive semantic components. And we can train models that extrapolate to new words on which they weren't trained by learning those more primitive semantic components and how to combine them for novel words based on corpus statistics. So that's kind of interesting. 
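For the dependency-parse features just described, here is what the edges look like in practice. This sketch uses spaCy purely for illustration (and assumes the small English model is installed); the actual system used its own parsing pipeline over web-scale text.

```python
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("He booked a ticket.")
for token in doc:
    # Each (head, relation, child) edge becomes one sparse feature for the
    # word, e.g. ("booked", "nsubj", "He") and ("booked", "dobj", "ticket").
    print(token.head.text, token.dep_, token.text)
```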
But everything that I've said so far is really about the static spatial distribution of neural activity that encodes these things. Now, in truth, your neural activity is not just one little snapshot. When you understand a word-- do you know how long it takes you to understand a word? About 400 milliseconds. It takes about 400 milliseconds to understand a word. Well, it turns out there are interesting brain activity dynamics during those 400 milliseconds. And let me show you. So up till now, we were looking at fMRI data. But here's some magnetoencephalography data. And this data has a time resolution of one millisecond. So I'll show you this movie which begins 20 milliseconds before a word appears on the screen. In this case, the word is the word hand. And this brain is about to read the word hand. You'll see 550 milliseconds of brain activity. I'll read out the numbers so you can just watch the activity over here. So here we go. 20 milliseconds before the word appears on the screen. 0, 100, 200 milliseconds, 300, 400 milliseconds, 500. OK, so it wasn't a static snapshot of activity. Your brain is doing a lot of things. There's a lot of dynamism during that 400 milliseconds that you're reading the word. fMRI captures an image about once a second, but because of the blood oxygen level dependent mechanism that it uses to capture that, it's kind of smeared out over time. So we can't see these dynamics with fMRI, but with MEG we can. And so now we can ask all kinds of interesting questions, like, well, what was the information encoded in that movie that we just saw? I just showed you a movie of neural activity, but I want a movie of data flow in the brain. I want the movie showing me what information is encoded over time. Given this data, what could we do? Well, here's one thing we can do. In fact, Gus Sudre did this for his PhD thesis. He said, I want to know what information is flowing around the brain there, so I'm going to train roughly a million different classifiers. I'll train classifiers that look at just 100 milliseconds worth of that movie and look at just one of 70 or so anatomically defined brain regions. And I'll use a set of features-- he wasn't using our verbs anymore. He was using a set of 218 features that we had made up manually and that were inspired by the game 20 questions. These were features of the word-- not like, how often does it co-occur with the verb eat, but instead features like, would you eat it? Yes or no. Is it bigger than a bread box? Yes or no. And so forth. He had a set of 218 questions like that. And every word could be described by a set of 218 answers to those questions, analogous to the verbs. And so what Gus did is, for every one of those features, every one of those 218 features like, is it bigger than a breadbox, he trained a classifier to try to decode the value of that feature for the word that you're reading, from just 100 milliseconds worth of this movie, and looking at just one of 70 anatomically defined regions. And so when he did that, he ended up being able to make us a movie of what information is encoded, in which part of the brain, when. And he ran this-- every 50 milliseconds he'd move forward and use a 100 millisecond window starting there. So he found that during the first 50 milliseconds after the word appears on the screen, none of those classifiers could, in a cross-validated way, produce any reliable predictions. Meaning the neural signal seems to not encode any of those semantic features during the first 50 milliseconds.
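A minimal sketch of that window-by-window decoding, before picking the timeline back up: slide a 100 ms window in 50 ms steps over one region's sensors and cross-validate a classifier per window. The data here is synthetic, with a signal planted from 200 ms on to mimic a semantic feature's onset; the classifier and window sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 200, 30, 550      # one region, 1 ms resolution
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                 # e.g. the "is it hairy?" answer
X[y == 1, :, 200:] += 0.3                        # decodable only from 200 ms on

for start in range(0, n_times - 100, 50):        # 100 ms windows, 50 ms steps
    window = X[:, :, start:start + 100].reshape(n_trials, -1)
    acc = cross_val_score(LogisticRegression(max_iter=1000), window, y, cv=5).mean()
    print(f"{start:3d}-{start + 100:3d} ms: accuracy {acc:.2f}")
```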
By the time you get out to 100 milliseconds, there were still no semantic features, but you could decode things like the number of letters in the word, the word length. Then at 150 milliseconds, at 200 milliseconds, you got the first semantic feature. Is it hairy? I think this is actually a stand-in for, is it alive? But the feature he happened to uncover was, is it hairy? At 200 milliseconds. At 250, now we start to see more semantic features. 300, 350, 400, 450. So literally, these are the semantic features trickling in over time during this 500 milliseconds-- that's the movie-- that corresponds to the neural activity that I showed you in that first movie. So this is a kind of data flow picture of what information is flowing around in the brain in that neural activity during that 450 milliseconds so far. Here's the set. Out of those 218 questions, here are the 20 most decodable features. So the number one feature that's most decodable is, is it bigger than a loaf of bread? But actually, if you look at those questions, you see many of the most decodable ones are really about size. And many of the next are manipulability. And many others are animacy. And some are shelter. In fact, across a diverse set of experiments we keep seeing these kinds of features. Size, manipulability, animacy, shelter, edibility are recurring as features that have their own-- they seem to be kind of naturally some of the primitive components. And they have their corresponding neural signatures, out of which the encoding of the full word is built. So if you ask me right now, what's my best guess of what are the semantic primitives out of which the neural codes are built, I'd say, I don't really know. But these features, plus edibility, for example, keep recurring in what we're seeing. And they have their own spatial regions where the codes seem to live. OK, so I want to get to the final part, which is, so far we've talked about just single words. And there are plenty of interesting questions we can ask about single words. But really, language is about multiple words. And so I want to show you a couple of examples of some more recent work where we've been looking at semantic composition with adjective-noun phrases. This is the work of Alona Fyshe. And what she did is she presented people with just simple adjective-noun sequences. She would put an adjective on the screen like tasty, leave it there for half a second, then a noun like tomato. And she was interested in the question of, well, where and when is the neural encoding of these two words, and what does that encoding look like? So I'll show you a couple of things. One is, here is a picture of the classifier weights that were learned to decode the adjective. And you have to think of it this way. Here's time. And this is the time, the first 500 milliseconds, when the adjective is on the screen. Then there's 300 milliseconds of dead air. Then 500 milliseconds when the noun is on the screen. And then more dead air. The vertical axis is different locations in the sensor helmet of the MEG scanner. And there are about 306 of those. The intensity here is showing the weight of a trained classifier that was trained to decode the adjective. And, in fact, this is the pattern of activity associated with the adjective gentle. Like gentle bear. And so what you see here is that there is neural activity out here, when the noun is on the screen, long after the adjective has disappeared from the screen, that's quite relevant to decoding what the adjective was. And so this is just kind of a quick look.
You can see that if I say tasty tomato, even when you're reading the word tomato, there's neural activity here, when you're looking at that noun, that encodes what the adjective had been. And we can see that, in fact, it's a different pattern of neural activity than was here when the adjective was on. And in fact, one thing that Alona got interested in is, given that you can decode across time what that adjective was, is your brain using the same neural encoding across time? Or is it a different neural encoding, maybe for different purposes, across time? Let me explain what she did. She trained a classifier at one time in this time series of adjective-noun, and then she would test it at some other time point. And if you could train at this time-- let's say, right when the adjective comes on the screen-- and use it successfully to decode the adjective way down here when the noun is on the screen, then we know that it's the same neural encoding, because that's what the classifier is keying on. And then she made a plot, a two-dimensional plot, where you plot, let's say, the time at which you train the classifier on the vertical axis, and the time at which you test it on the horizontal axis. And then we could show, at each training and test time, whether you could train at this time and then decode at this time. And that'll tell us whether there's a stable neural encoding of the adjective meaning across time. When she did that, here's what it looks like. OK, so here we have on the vertical axis the time at which she trained. This is when the adjective is on the screen, the first 500 milliseconds, and when the noun's on the screen. Here's then using any of these trained classifiers for decoding the adjective. Here's a different time at which she tried to use it. And again, here's when the adjective's on the screen, then the noun. And so what you see-- all this intense stuff means high decoding accuracy-- shows that if you train when the adjective is on the screen, you can use that to decode at other times at which the adjective's on the screen. That's good. So we can decode adjectives. But if you try to use it to decode the adjective when the noun's on the screen, it fails. Blue means failure. No statistically significant decoding accuracy. On the other hand, if you train using the neural patterns when the noun's on the screen, then you can, in fact, decode what the adjective had been while the noun is on the screen. So it's like there are two different encodings of the adjective being used here. One when the adjective's on the screen, that lets you successfully decode it when the adjective's on the screen, but doesn't work when the noun's on the screen. And then a second one, another neural encoding, that you can use to decode what the adjective had been when the noun is on the screen. And then interestingly, there's also this other region here, which says if you train when the adjective was on the screen, you can't use that to successfully decode it when the noun's on the screen. But later on, when nothing is on the screen, the phrase is gone, your brain is still thinking about the adjective in a way that's using this neural encoding, the very first of those neural encodings. This is evidence that the neural encoding of the adjective that was present when you saw the adjective is re-emerging now, a couple seconds later, after that thing is off the screen. But the neural encoding of the adjective when the noun was on the screen doesn't seem to get used again.
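Here is a sketch of that train-at-one-time, test-at-another analysis (a temporal generalization matrix), on synthetic data with two deliberately different codes: one while the adjective is on screen, a different one during the noun. Off-diagonal accuracy means a code is reused across time; the blue blocks in the real plot correspond to near-chance cells here. Sizes and the classifier are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_train, n_test, n_sensors, n_times = 150, 50, 306, 26   # 50 ms time bins

def make_data(n):
    X = rng.normal(size=(n, n_sensors, n_times))
    y = rng.integers(0, 2, n)
    X[y == 1, :50, :10] += 0.4     # code A: while the adjective is on screen
    X[y == 1, 50:100, 16:] += 0.4  # code B: a different code during the noun
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

# G[t1, t2]: train a classifier on time bin t1, test it on time bin t2.
G = np.zeros((n_times, n_times))
for t1 in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(Xtr[:, :, t1], ytr)
    for t2 in range(n_times):
        G[t1, t2] = clf.score(Xte[:, :, t2], yte)
print(G.round(2))
```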
Most recently, we've also been looking at stories and passages. And much of this, not all of it, is the work of Leila Wehbe, another PhD student. And here's what she did. She put people in fMRI and in MEG scanners, and she showed them the following kind of stimulus. So this goes on for about 40 minutes: one chapter of a Harry Potter story. And word by word, every 500 milliseconds, we know exactly when you've seen every word. So she collected this data in fMRI and in MEG to try to study the jumble of activity that goes on in your brain when you're reading not an isolated word, but a whole story. And so for her, with the fMRI, we get an image every two seconds. So four words go by and we get an fMRI image. So here's the kind of data that she had. She trained a model that's very analogous to the very first generative model I talked about, where we would input a word, code it with verbs, and then use that to predict neural activity. In her case, she took an approach where, for every word, she would encode that word with a big feature vector. And that vector could summarize both the meaning of the individual word, but it also could have other features that capture the context or the various properties of the story at that point in time. But the general framework was to convert the time series of words into a time series of feature vectors that capture individual word meanings plus story content at that time, and then to use that to predict the fMRI and MEG activity. So when she did this, here are some of the kinds of features that we ended up using. So some of them were motions of the characters, like was there somebody flying-- this was the Harry Potter story-- somebody manipulating, or moving, or physically colliding. What were the emotions being experienced by the characters in the story that you're focused on at this point in time? What were the parts of speech of the different words and other syntactic features? What was the semantic content? We also used the dependency parse statistics that I mentioned that capture semantics of individual words. So altogether, she had a feature vector with about 200 features. Some manually annotated, some captured by corpus statistics. And for every word in the story we then had this feature vector. Then she trained this model that literally would take as input a sequence of words, convert that into the feature sequence, and then, using the trained regression, predict the time series of brain activity from those feature vectors. So this allowed her to then test, analogous to what we did with our single word noun generative model, whether the model learned well enough that we could give it two different passages, and then one real time series of observed data, and ask it to tell us which passage this person was reading. And these would be novel passages that were not part of the training data. And she found that it was, in fact, possible, imperfectly, but three times out of four, to take two passages which had never been seen in training, and a time series of neural activity never seen during training, and three times out of four, tell us which of those two passages it corresponded to. So it's capturing some of the structure here. Interestingly, as a side effect of that, you end up with a map of different cortical regions and which of these 200 features are encoded in different cortical regions.
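A sketch of that passage-identification test: fit a ridge regression from per-word feature vectors to brain images, then run the 2-vs-2 check of matching two held-out passages to two held-out image sequences. Everything here is synthetic, and the correlation-distance choice is an assumption about the scoring.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_imgs, n_feats, n_voxels = 600, 200, 5000

F = rng.normal(size=(n_imgs, n_feats))        # ~200 features per fMRI image
W = rng.normal(size=(n_feats, n_voxels))
Y = F @ W + rng.normal(size=(n_imgs, n_voxels))

model = Ridge(alpha=1.0).fit(F, Y)

def two_vs_two(f_a, f_b, y_a, y_b):
    # Match predictions to observations; the correct pairing should be closer.
    pred_a, pred_b = model.predict(f_a), model.predict(f_b)
    dist = lambda p, o: 1 - np.corrcoef(p.ravel(), o.ravel())[0, 1]
    return dist(pred_a, y_a) + dist(pred_b, y_b) < dist(pred_a, y_b) + dist(pred_b, y_a)

# Two held-out "passages" of 10 images each.
f_a, f_b = rng.normal(size=(10, n_feats)), rng.normal(size=(10, n_feats))
print(two_vs_two(f_a, f_b, f_a @ W, f_b @ W))  # True when the pairing is identified
```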
So from one analysis of people reading this very complicated, complex story, in this analysis, we end up-- you can go [AUDIO OUT] features and color code. Some of them have to do with syntax, like part of speech and sentence length. Some have to do with dialogue, some have to do with visual properties or characters in the stories. And you can see here is a map of where those different types of information were decodable from the neural activity. Interestingly, here is a slightly earlier piece of work, from Ev Fedorenko, showing where there is neural activity that's selectively associated with language processing. The difference here is that in Leila's work, she was also able to indicate not just where the activity was, but what information is encoded there. And then again, you can drill down on some of these. If you want to know more about syntax, we could actually look at the different syntax features and see, well, where's the part of speech encoded? What about the length of the sentence? What about the specific dependency role, in the parse, of the word that we're reading right now, and so forth. So this gives us a way then of starting to look simultaneously at very complex cognitive function, right? You're reading a story, you're perceiving the words, you're figuring out the parts of speech, you're parsing the sentence. You're thinking about the plot, you're fitting this into the plot. You're feeling sorry for the hero who just had their broom stolen, and all kinds of stuff going on in your head. Here's an analysis that attempts to simultaneously analyze a diverse range of these features, and I think with some success. There still remain problems of correlations between different features. And so it might be hard to know whether we're decoding the fact that somebody is being shouted at, versus the fact that their ears are hurting, so to speak. There could be two different properties we're thinking of that are highly correlated. And so it can still be hard to tease those apart. But I think that, to me, the interesting thing about Leila's analysis here is that it flips from a style that I would call reductionist. One way that people often study language in the brain is they pick one phenomenon, and then run a carefully controlled experiment to vary just that one dimension. Like, we'll use words, and we'll use pronounceable letter strings that are not words, and we'll just look at what's different in those two almost identical situations. Here, instead, we have people doing natural reading, doing a complex cognitive function, and we try to use a multivariate analysis to simultaneously model all of those different functions. And so I think this is an interesting position to take, methodologically. And it also gives us a chance to start looking at some of these phenomena in story reading. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_51_Vision_and_Language.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BORIS KATZ: When a scientist approaches a complex phenomenon, or when an engineer looks at a difficult problem, they usually break it up into pieces and try to understand and solve those pieces separately. Understanding intelligence is one such very, very complex, in fact extraordinarily complex, problem, and over the years this divide and conquer approach produced a number of very successful fields like computer vision, natural language processing, cognitive science, neuroscience, machine vision, and so on. But we need to remember that most cognitive tasks that humans perform actually go across modalities, which is to say they span these established fields. And the goal of our thrust is to bring together techniques from all these fields and create new models for solving intelligence tasks; we also would like to understand how these tasks operate in the brain. So I will start with one task, which is scene recognition. What does scene recognition involve? Well, in order to recognize a scene, a machine needs to do some type of verification. Is that a street lamp? It needs to do detection. Are there other people in the scene? It needs to do identification. Is this particular building Potala Palace, somewhere in Tibet? It needs to do object categorization. Look at this image and tell me where are the mountains, trees, buildings, street lamps, vendors, people, and so forth. It should also be able to recognize activity. What is this person doing? Or what are these two guys doing here? Well, currently, our machines are pretty bad at all of these tasks. I understand that there has been quite a lot of progress recently made in machine learning. And I've also seen some claims that machines perform better than humans in some visual tasks. However, I think we should take these claims with a grain of salt. First, there is nothing amazing about machines doing certain things better than humans. People did it over millennia. Humans needed tools to build pyramids. They built tools to carry heavy things, to lift them, to go faster, and so on. Well, you'll tell me, no, no, no, we are talking about intelligent tasks. Well, for $5 you can buy a calculator that multiplies numbers much better than you do. For $100, you can build a gadget that has a huge lookup table and plays chess much better than any of you. So when you hear that a computer can distinguish between 20 breeds of dogs, or something like that, better than you do, I don't think you should assume that the vision problem is solved. Well, understand, I'm not saying that because we have a dramatically better solution. Not at all. My point is that the problems of real visual understanding and real language understanding are extraordinarily hard, and we need to be patient and try to understand why, and eventually find better solutions. So back to the visual understanding problem. So as I said, machines are pretty bad at these things. But humans are absolutely awesome. You have absolutely no trouble doing verification, detection, identification, categorization. You can do much more than that. You can recognize spatial and temporal relationships between objects.
You can do event recognition. You can explain things. You can look at the image that I showed you and tell me what past events caused the scene to look like it does. You can look at that scene and say what future events might occur in it. You can fill gaps. You can hallucinate things. You could tell me what objects that are barely visible in the scene might actually be present there, and what events not visible in the scene could have occurred. So why are machines falling short? Well, in part, our visual system is tuned to process structures typically found in the world, but our machines have no idea. They don't know enough about the world, and they don't know what structures and events make sense and typically happen in the world. So I will show you a blurry video. And I wonder whether some of you who didn't see it could figure out what's going on. Who has not seen this video? Could you tell me what you saw? AUDIENCE: A person was talking on the phone then switched to working on his computer. BORIS KATZ: Right. Well, this is amazing. Even with almost no pixels there, you still recognize what's going on-- well, because you know what typically happens in the world. Well, of course it was sort of a joke, and people sometimes make mistakes, too. So here's the unblurred video. [AUDIENCE LAUGHTER] Well, but, all jokes aside. AUDIENCE: [INAUDIBLE] [AUDIENCE LAUGHTER] BORIS KATZ: Jokes aside, though, it would have been extraordinary if our machines could make mistakes like this. Not even close. Well, so we may want to ask ourselves some questions. How is this knowledge that you seem to have obtained? And how can we pass this knowledge to our computers? How can we determine whether this computer knowledge is correct? Our partial answer is: using language. And I bolded the word partial here, because clearly there are many other ways humans obtain knowledge about the world, but today I will be talking about language and show you what is needed to give knowledge to the machine. So we have a proposal. We would like to create a knowledge base that contains descriptions of objects, their properties, and the relations between them as they're typically found in the world, and we want to make this knowledge base available to a scene-recognition system. And to test the performance of the system, we will ask natural language questions. One of us here decided to see what questions people actually ask, and he set up an Amazon Mechanical Turk experiment, where he showed people hundreds of images and asked them to write down, to generate, questions about these images. And I will show you a couple. Here's a scene. And here are the questions that people ask, you know: How many men are in the picture? What's in the cart? What does the number on the sign say? Is there any luggage? What is the color of the shirt of some lady? Another example. Who is winning, yellow or red? Well, the answer is, of course, red, but how did you do that? Well, you need to know that this is a sporting event, which all of you do. That it involves, in this particular sporting event, winners and losers, which you do. You need to know that this sort of shorthand, the yellow and the red, means people wearing these colors, rather than the colors themselves. You need to know to pay no attention to people wearing red or maybe blue in the audience. And you also need to know that a participant on the floor is likely a loser, not a winner. That's a lot of knowledge. So, back to our proposal.
We want to try to give at least some of that knowledge using language, and for that, of course, we need tools. And over the years, we've built a system called START, which, in fact, contains some tools that could be helpful for this task. And I will be happy to share the API with you so that you could use the system and maybe try to see what to do with the knowledge that you give it. So there are only three tools on this slide. One is going from language to structure. So to provide machines with knowledge, we give the machine a bunch of sentences, texts, paragraphs, and those will be converted into some kind of semantic representation. And I will show some details of that. We also want to go the other direction. We want a machine to explain what it does or describe its knowledge using language. So we have a generator that does that, that goes from semantic representation to language. And we want to test the machine, because it's very important to know whether what you taught the machine is actually what it understood. We want to ask questions, give queries. Those will be converted into semantic representations that will be matched against what the machine knows. And the computer will either give you a language response or perform some actions, which will indicate that it understood what you asked. So we will go through these tools, and I will describe the START system in detail, but I just want you to remember that this is a disciplined engineering enterprise, and these are the tools that I want to give you and other people so that you could start thinking deeper about human abilities and modalities, like vision and language and others. Some of the building blocks of the START system: we need to parse language, we need to come up with a semantic representation, generate, match, reply, and so forth. So let's very quickly go through that. Most of you, somewhere in middle school, learned about parse trees. Linguists love them. They're beautiful. This is an example of a sentence from Tom Sawyer. Tom greeted his aunt, who was sitting by an open window in a pleasant rearward apartment. And linguists like to argue about the exact representations, but pretty much all of them will agree that something like that represents the syntactic structure of the sentence. Well, parse trees are beautiful and nice, but they're really horrible if you want to store them, if you want to match them, if you want to retrieve from them. And so we use the information found in parse trees, but we developed a different representation, which we call the Ternary expression representation, which is a more semantic representation of language. It is syntax-driven, but it highlights semantic relations which humans find important, and because we give this knowledge to computers, we made it very efficient for indexing, for matching, and for retrieval. It's also reversible, and I'll explain in a second what I mean by that. We implemented it as a nested set of subject, relation, object tuples. So here is a different sentence. Say you have an image. You may recognize one of the characters there in the back. It's Andrei Barbu. And say you want to describe, in language, what you see here. And you say something like, the person who picked up the yellow lemon placed it in the bowl on the table. Using this subject, relation, object structure, after first parsing the sentence, you could create this Ternary expression. And you could see, person picked up lemon.
That same person placed that lemon in the bowl on the table, and that lemon happens to be yellow. To make it a little bit easier for you to see what's going on, and convenient for humans and machines, we created a sort of topologically equivalent linearization of that graph, that knowledge graph, as a set of triples. They're a little bit misleading here just due to their simplicity, but all words here, of course, need to have an index, because if you have, say, a tall person and a short person, then you will have to distinguish between them. So you need indices, but for simplicity, I didn't show them here. And the verbs also have indices, so that when you use the same word place here, that is the relation in the triple that happened to be in the bowl-- that the person placed a lemon in the bowl. So this is all representation, and we distinguish at least three types of Ternary expressions. The first type you see here: the syntactic structure of a sentence. We also have syntactic features-- the fact that the sitting was past tense, and in fact a progressive tense, was sitting-- and also what kind of article things have. The window had an indefinite article. And also the lexical features that don't change from sentence to sentence, the fact that Tom is a proper noun and so forth. Well, I told you that our representation is reversible. We need to be able to teach machines to talk to us. And there are many reasons to do that. Some of them are shown on this slide. You want your robot or your computer to explain what it does, possibly remotely. You want your machine or your robot to answer questions which are complex, and the robot may want to ask you for clarification. You want to keep track of conversation history and state. Engage in mixed-initiative dialogue. Offer related information. So all these things need to happen in the dialogue, and, therefore, your computer must be able to speak to you in a language that you understand. In fact, I find that the biggest problem with the learning systems that we have today is that some of them can work quite robustly, and sometimes give you good results, but you have no idea why. You press a button, you say, aha, here's the number, and put it in the paper. More recently, people started looking at why a system does what it does. But, again, it's done by numbers. It would be really wonderful if our learning systems could tell us how they came up with their conclusions. So we need language, and so we built a START generator that goes from those same Ternary expressions and creates natural language. This is why we call this representation reversible. So given a set of Ternary expressions, the machine will create a sentence: the person who picked up the yellow lemon placed it in the bowl on the table. But, of course, this is a little bit silly, just parroting the same sentence back. You want the machine, for example, as I said, to ask you a question, or indicate a negative statement, or rephrase things from different pieces that it knows about. So, in fact, our generator is very flexible. Here is an example where, say, by observing the world, the robot adds more information to this representation, indicated here in blue. And now, from the original sentence about the person who picked up the yellow lemon and placed it in the bowl, by just adding a couple of new relations, the generator will be able to ask a question of the human; the generated question appears right after the sketch below.
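A hypothetical Python rendering of the Ternary-expression idea, to make the data structure concrete: indexed words as (subject, relation, object) triples, plus a trivial matcher. The indices, names, and wildcard convention are all made up for illustration; START's real machinery is far richer, including the one-way hyponym matching and structure-level matching described below.

```python
person, lemon, bowl, table = ("person", 1), ("lemon", 1), ("bowl", 1), ("table", 1)
pick_up, place = ("pick-up", 1), ("place", 1)

kb = {
    (person, pick_up, lemon),
    (person, place, lemon),
    (place, ("in", 1), bowl),              # the placing event happened in the bowl
    (bowl, ("on", 1), table),
    (lemon, ("has-property", 1), ("yellow", 1)),
}

VAR = ("?", 0)                             # wildcard marker used in queries

def match(query, kb):
    # A fact matches if every non-wildcard position is identical.
    return [fact for fact in kb
            if all(q == VAR or q == f for q, f in zip(query, fact))]

# "What did the person pick up?"
print(match((person, pick_up, VAR), kb))   # [(('person', 1), ('pick-up', 1), ('lemon', 1))]
```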
All right, so we talked about parse trees, about semantic representation, about generation. So what do we do with that? Let's-- Suppose that you gave some knowledge to your machine, here, that Tom Sawyer assertion, and somebody asked you, was anyone sitting by an open window? Well, what needs to happen is this question needs to be converted into the Ternary representation, as I indicated, and this knowledge base, we assume, had the knowledge from the original assertion plus a million other assertions, of course. So we would need to match the representation of the query against the knowledge base, and the machine will say, aha, here is the match. Well, this is very simple here: if you think about it, window needs to match window, open to match open, and sit to match sit, and then the word anyone needs to match the word aunt. But, in reality, of course, people ask questions which do not that closely follow what the machine knows, and so our matcher needs to be much more sophisticated. This is just the graphical, sort of knowledge graph, representation of that match. So START distinguishes things like term matching, which, as I showed, could be a lexical match. Of course, it knows synonymy. It knows hyponymy, which goes one way. It's like, a car is a vehicle. And as you can imagine, the match also needs to go one way. If I say I bought a car, it also means that I bought a vehicle. But if I say I bought a vehicle, it's not true that I bought a car, because I may have bought a truck. But this is an aside. So it's pretty easy to do matching on the level of terms, of words, but a much more complex problem is to match on the level of the structure. And I will show you some examples of this problem.
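A minimal sketch of the one-way term matching just described, assuming a tiny hand-written hyponym table (a real matcher would use a full lexicon):

    hyponyms = {"car": "vehicle", "aunt": "person"}   # "a car is a vehicle", and so on

    def term_match(query_term, kb_term):
        # Exact match, or the stored term is more specific than the query term:
        # a question about "anyone" (a person) should match a fact about the aunt,
        # but a question about a car must NOT match a stored fact about a vehicle.
        return query_term == kb_term or hyponyms.get(kb_term) == query_term

    print(term_match("vehicle", "car"))   # True: I bought a car => I bought a vehicle
    print(term_match("car", "vehicle"))   # False: the vehicle may have been a truck
    print(term_match("person", "aunt"))   # True: the aunt matches "anyone"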
But by now, you must have figured out I love to stare at English sentences. I hope you do a little bit. If not, please try. So here, let's consider a couple of verbs. So here is the verb surprise. Let's consider these two sentences. The patient surprised the doctor with his fast recovery. The patient's fast recovery surprised the doctor. Now, for you who are used to understanding language so quickly, it's even hard to hear what's different about the sentences. But if you actually do the parsing, you will see that the parse trees are dramatically different, and therefore, our Ternary representations will be very different. So we need to find a way to tell the machine, yeah, that's the same thing. And the linguists call these things syntactic alternations. With a different verb like load, here's a different alternation. The crane loaded the ship with containers, or the crane loaded containers onto the ship. Again, it means pretty much the same thing, but the surface representation is different. The next one is in terms of a question. Did Iran provide Syria with weapons, or did Iran provide weapons to Syria? Let's see if it's true for every verb in the universe. So let's try to take the, say, surprise alternation and use it with the load verb, or the other way around. So I hope you're bearing with me. Linguists put stars in front of bad sentences. And so here I tried to use the word surprise with the alternation that load allows you to do. Here, you do the same with onto, and it says, the patient surprised fast recovery onto the doctor. It makes absolutely no sense. Here I tried to do the surprise alternation for the word load. And it says, the crane's containers loaded the ship. Again, complete gibberish. And the same below. Did Iran's weapons provide Syria? So it looks like a really horrible story. Every English verb, it looks like, has a different way of expressing these alternations. But fortunately, this is not the case. Let's go back to the verb surprise. Well, let's look at verbs similar to the word surprise. So you could use the same alternation with the word confuse. You can say, the patient confused the doctor with slow recovery, which will convert into, the patient's slow recovery confused the doctor. You can say the same thing with anger, disappoint, embarrass, frighten, impress, please, threaten. And what is really amazing about it, and very interesting, is that this syntactic alternation works the same way for verbs of the same meaning, of the same semantic class. And this particular class is called the emotional reaction verbs. And it's a large semantic class of about 300 verbs, and they all behave identically from the point of view of these alternations. And it's true for all other alternations that I showed you. So that, of course, is good news, because it makes an interesting connection between syntax and semantics, but it also allows us to build lexicons that are more compact and easier to deal with. And one can imagine creating this verb class membership automatically by looking at a large corpus. And this is how, presumably, children learn these verb classes and these alternations. All right, so now that you know how to match -- and it's not just trivial matching, like I showed here, but a more sophisticated match on the level of structure as well -- let's see what we can do after the match happens. So here's the same sentence and the same question. Was anybody sitting by an open window? We retrieve the structure, and then we could tell our generator, go and generate the sentence, and it will do that. Tom's aunt was sitting by an open window in a pleasant rearward apartment. Well, it's not that interesting. It's sort of parroting: I tell it ABC, and it tells me back ABC if I ask about B and C, or something like that. If you want to build a question-answering system, we want it to be able to, in response to a question, understand it, go somewhere, find the right answer, and give it back to you. And we built that, and we do it in a general way, where our system can execute a procedure in response to a match to obtain the answer from the data source. So an example is here, and I can show you some screenshots, or, in fact, if you like, we can play with the system live and you'll see what it does. So it executes a procedure to obtain an answer from the data source. If you say, who directed Gone With the Wind, a match will happen between what you ask and what the system knows, some script will get executed, and the machine will go to some data source, find the answer, and give it back to you. So how is this done? Well, in order to explain that, I need two more ideas. One is the natural language annotation idea. Annotations are sentences and phrases that describe the content of retrievable information segments. This graphic, in a cute way, shows these sentence-level or phrase-level labels on some data, and they describe the retrievable information segments. Annotations are then matched against submitted queries, and a successful match results either in retrieval of that information or in some procedure to retrieve that information.
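A sketch of the annotation idea just described: descriptive patterns are matched against a query, and a successful match runs a procedure that fetches the answer (the patterns, the lambdas, and the lookup targets are all invented placeholders, not START's actual machinery):

    annotations = {
        ("person", "direct", "movie"):    lambda title: f"(run script: look up director of {title})",
        ("country", "border", "country"): lambda name:  f"(run script: look up neighbors of {name})",
    }

    def answer(query_pattern, argument):
        procedure = annotations.get(query_pattern)   # the "match" step
        if procedure is None:
            return "I don't know."
        return procedure(argument)                   # the "execute a procedure" step

    # "Who directed Gone With the Wind?" matches the first annotation.
    print(answer(("person", "direct", "movie"), "Gone With the Wind"))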
And the special case of this procedure is done using our object-property-value data model. This technique can connect language to arbitrary procedures, but, as I said, let's consider the lot of semi-structured information sources available on the web that can be modeled using this object-property-value model. Well, what kind of repositories have this property? If you think about it, almost anything that humans create on the web which is semi-structured is like that. If you have a site that has a bunch of countries with properties like populations, areas, capitals, birth rates, and so forth, the country is an object, the word population is a property, and the value is the actual value of that property. You can have people with their birth dates, you can have cities with maps and elevations, and so forth. So in a sense, this object-property-value model makes it possible to view and use large segments of the web as a database. And schematically, here's how START uses this model. A user asks the language part of the system a question. The system needs to understand the question, understand where the answer might be found, and what the object and property implicit in the question are. After START does that, it has a friend called Omnibase, and it says, go get it. Go to this site, go to that symbol called France, and go get the population. And it will go to some world fact book and get the population. So this is how the system works, and here's an example of such a question. Here, the question is -- it's a screenshot -- does Russia border on Moldova? The system says, aha, you want to find out what countries border Moldova, and find out whether Russia is among them. And then it actually checks that, and it tells you, no, Russia does not border Moldova, because it doesn't find Russia in this response. And just for comparison, if you ask the same question of a search engine, it will give you 24 million results -- today maybe 240 million results -- and none of them really answer the question. Well, I just want to tell you that the ability to understand something really helps. In this case, the ability to understand language gives you a lot of power. You can do a lot by searching with keywords, and you can retrieve a lot of documents, and this is how, pretty much, all modern systems work. But if you want to do something a little bit more complex, it would be nice to understand something. So here's an example of a complex question. Who is the president of the fourth largest country married to? Well, if you can analyze this question into pieces, then you can very quickly figure out that, right away, just throwing the pieces at your knowledge base, you cannot resolve it. But we've built a very nice syntax-based algorithm that allows us to decompose complex questions into sets of simpler questions and understand in which order to ask them. So the machine will say, oh, first I need to find out what's the fourth largest country, then who its president is, and then, with that, who he is married to. And, very quickly, this is how, schematically, it's done. This is sort of an under-the-hood Ternary expression representation of the question. The machine says, oh, too hard, let's first find out what the fourth largest country is. It's China. Then let's find out -- it's still hard -- so let's find out who the president of China is, find the name, and then the next one is just a lookup. Who is he married to? And it gives you an answer.
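A toy object-property-value store plus the decomposition idea just walked through: the complex question is split into simpler lookups executed in order (the store, the property names, and the placeholder values X and Y are invented for illustration):

    kb = {
        ("China", "rank-by-area"): 4,     # the talk's answer to the inner sub-question
        ("China", "president"):    "X",   # placeholder value
        ("X", "spouse"):           "Y",   # placeholder value
    }

    def lookup(obj, prop):
        return kb.get((obj, prop))

    # "Who is the president of the fourth largest country married to?"
    country = next(o for (o, p), v in kb.items() if p == "rank-by-area" and v == 4)
    president = lookup(country, "president")   # step 2: who is its president?
    print(lookup(president, "spouse"))         # step 3: who is he married to?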
Some other examples. In what city was the fifth president of the US born? And it finds James Monroe and gives you the city. What books did the author of War and Peace write? It finds Leo Tolstoy and finds his books from different sources. So the technology that I described -- the object-property-value data model, our Ternary expression representation, complex question answering, the natural language annotation idea -- over the years, they inspired a bunch of companies and a bunch of technologies, starting with Ask Jeeves, who, I guess, existed before you guys were even born, to Wolfram Alpha, who pretty much took [INAUDIBLE] wholesale, to, more recently, Google QA, which started doing really wonderful things using this idea. Everybody had the idea that you should go from the surface of a question -- if you have a question, you throw your question onto the web and you get some answer -- but it doesn't work with high precision. So the idea that you need to curate knowledge and build some huge repository of knowledge was picked up by these companies, and certainly now, all these companies do quite decent question answering. And the same is true for Watson and Siri, and I was involved in some of these things, so I will show you. Let's see. Right, so let's start with this. About 10 years ago, on top of START, we built a system that was connected to a cell phone. I don't know how many of you remember the world without smartphones, but that was when smartphones weren't there. There was no such thing as the iPhone. So there was a vanilla phone, and all it did was make phone calls. Of course, it also had a camera and unlimited text. But it really didn't do much more, and we decided it was time to connect it to language. So we convinced a company to fund us to do that, and we built a system called StartMobile. And this is an intelligent phone assistant, which could, at the time, retrieve general purpose information, provide access to computational services, perform actions on another phone, trigger apparatus, like the camera on a phone, and receive instructions. And, talking about YouTube, we have a video that shows the system in action. That video is quite old. It's from the beginning of 2006, and at the time, we did not connect to speech, but you could see what it does -- the user was typing in questions for the system in that particular video. So there's no narration, so if you read the captions, you'll figure out what is going on. So here's my former student, who actually went to Google to transition our technology eventually. And he's not sure whether he needs to take his coat or not. [JAZZ MUSIC] Again, this is very dated, of course, because now the temperature is almost uniform, but, again, that was 10 years ago. Is there any sound? AUDIENCE: Yes. BORIS KATZ: All right, those of you that know Cambridge know that station. Where am I? GPS had just come about and we were lucky to connect to it, and so now the guy gets the map and knows how to find where he needs to go. So this is our Stata Center, for those who haven't seen it. This is where CSAIL is. Again, it's dated, because right now this lawn is a huge building, but it wasn't there at the time. Oh, it says here, trying to reach my mother. I don't know why it shows you this stuff, but. AUDIENCE: [INAUDIBLE] BORIS KATZ: He's worried about his mother, and so he decides to tell the phone, remind my mother to take her medicine at 3:00 p.m. And we'll see what happens with that later. Take a higher resolution picture using flash in 10 seconds.
I don't think any phones can do it even today for some reason. I don't know why it's so hard. [AUDIENCE LAUGHS] AUDIENCE: For a selfie. BORIS KATZ: Right, for a selfie. Very good, yeah. All right, so his friend is busy and he wants to entertain himself, I guess, but he doesn't quite know how to do it. How do I use the radio on my phone? Well. All right, now he knows. [MUSIC PLAYING] All right, mother's health. [AUDIENCE LAUGHS] Right, so, exactly. So a delayed action happened on her phone -- we had inserted the thing on her phone, and then she got this warning. And that's my staff. They all turned out to be very good actors. All right, so this is the last thing. Traveling, she is going back. She now has a car. And this is the last thing that I'll show you. [JAZZ MUSIC] How do I get from here to Frederica's house? Well, if you think about it, this is a very hard question. You need to know that here is here, and go to the GPS and find the location. You need to know Frederica's house from your list of contacts, and you need to then send it -- in that case, we sent it to, I believe it was MapQuest, I don't even know if it exists now -- to actually give the directions. Well, anyway, so. Well, it was a little bit of a sad story actually. So we built the system. The company that I mentioned was Nokia. We showed them the demo. They were very excited about it. They said, well, can we put START on the phone? Because in that application, the signals were sent to MIT from the phone and the answers were sent back to the phone. I said, well, it doesn't seem right. START is large, and there was no internet that could take care of that. They said, no, no, no, how big is your system? Can you work with the company to port it to a LISP compiler, to put it on our chip, and so forth? We need it on the phone. Unfortunately, the word cloud hadn't been invented yet. Maybe I would have been more eloquent in explaining to them why they don't need to have the system on the phone. And so they didn't want to use it the way it was. We wrote a paper, showed it to them, saying, do something about this or it will be too late. And right at that time, Apple released its first iPhone. So I go to a senior vice president and say, look, these guys are ahead of you. You should decide about it, because they will do what I gave you. He asks, how many iPhones did Apple sell last month? I said I read somewhere like a couple of thousand. And he starts laughing hysterically. He said, we, my company, ships one million phones every day. Why do we care about Apple, he said. Well, so, we gave this talk. That was September 2007 by then. In December, somebody started a company called Siri, and then two years later, Siri was bought by Apple and the rest is history. And Nokia was sold pretty much at a yard sale and doesn't exist anymore. So be visionary. Don't think that you know what you are doing all the time. Yeah, people often ask me to say a few words about Jeopardy!. The question that the IBM team was hoping to answer was actually a very important question. Can we create a computer system to compete against the best humans in a task which is normally thought to require a high level of human intelligence? I was involved with them from the very beginning, for various reasons which I will not go into. They put together a wonderful team, some really good people, very devoted people. They spent four or five years of their life, pretty much, totally devoted to that.
And they built a system, and these are the kinds of, I guess-- I don't know if any of you know what Jeopardy! is, but pretty much, people ask a question, which for various reasons is formulated not as a question but as an assertion, with demonstrative pronouns like this and these, and you need to give an answer, which they call the question, for some reason. There's a gimmick. You have to say, what is an envelope, instead of envelope, but let's not pay attention to that. It doesn't matter. And this is very hard. To push one of these paper products is to stretch the established limits, and you need to figure out that to push the envelope means to stretch established limits. This is an idiom, for those of you who are not native speakers. And the answer is envelope. A simpler question is, the chapels of these colleges were designed by this architect. And you need to figure out that Christopher Wren is the answer. Of course, many questions involve question decomposition. So here's an example of a real question. Of the four countries in the world that the US does not have diplomatic relations with, the one that's farthest north? So it's pretty much asking several questions. One is the sort of inner sub-question -- the four countries in the world that the US doesn't have relations with -- and the outer sub-question is, now that you know these four countries, which is the farthest north? You do a little bit of arithmetic and you find the answer is North Korea. And of course, this is very similar to what START did years before: you pretty much decompose the question and solve the parts separately, as I showed you a few minutes ago. So Watson actually took a bunch of ideas from START -- the Ternary expression representation, the natural language annotations idea, the object-property-value data model, and the question decomposition model -- and applied them when it could really analyze the question syntactically, when the question was not too convoluted, and when there was a semi-structured resource, or several resources, to find an answer. But many questions, of course, were not like that, and stretching the envelope is one example. And so Watson used some statistical machine learning approaches, and they did quite a good job of looking at a lot of data to resolve and answer these questions. Their pipeline is, really, miles long, because each of these bullets expands into a bunch of bullets, with a bunch of bullets under those, for the tasks they were doing, but on a very high level, they needed to do content acquisition. Pretty much, all right, there was a problem. The company, Jeopardy!, told them the web cannot be part of that, because otherwise, well, Google knows everything, so what is it that you guys are doing? So what would you do if somebody tells you you cannot use the web? AUDIENCE: [INAUDIBLE] BORIS KATZ: What's that? AUDIENCE: [INAUDIBLE] BORIS KATZ: Well, you pretty much bring the web and put it in a box. And this is what IBM did. They took every interesting repository, every database, every encyclopedia, every newspaper collection, I forget whether blogs existed at the time, and just had a lot of clusters and a lot of memory, and everything was there. So now they could tell the company, no web. We are smart without the web. So that was the first thing. Then there were some wonderful natural language processing people there, so they did question analysis, they searched the documents.
So what they really did, they took the clue, as they call the question, threw it not on the web but on their web, found tens of thousands of documents that even loosely match these keywords, and then the real work just started. They had this kind of filtering, that kind of filtering, this kind of answer generation, that kind. They would score it, they would weigh new evidence, they would do it again, they would do ranking, and they would decide how confident they are about that, and then they would decide how much to wager. They spent an incredible amount of time figuring out how much money to bet. I don't actually know much about Jeopardy!, but apparently you have to tell them how good your answer is. Well, you need to come up with a number for how confident you are, how much you will make if you win, and how much you lose if you lose. And so they did it all, and they built a wonderful system. In the beginning, when I started going there, it was very, very slow. It ran on a single processor and really took two hours to do this pipeline. But, if you think about it, it's a very parallelizable problem. You could send it all out -- in that case, I think by the end it was like several thousand cores -- and they easily reduced the time to three seconds, which was passable and doable for the competition. And so they won, as you all know. It's a great system that builds on the state of the art in natural language, in QA, in information retrieval, in machine learning. It's a great piece of engineering. It reignited, no doubt about it, public interest in AI. It brought new talented people into our field. So this is all great news. But let's look at some of the sort of blunders that occurred, well, both before the competition and after. I had a whole bunch, a collection of those. I'll just show you a couple. This actually happened before the competition, so they fixed the problem. So the question was -- again, it's called a clue -- in a category of letters, in the late 40s, a mother wrote to this artist that his picture, number nine, looked like her son's finger paintings. Well, for those who are quick at that, I'm sure you know that it's Jackson Pollock, but Watson said Rembrandt, for some stupid reasons. It failed to recognize that the late 40s referred to the 1940s -- rather, it thought it was previous centuries -- and apparently, number nine had something to do with a bunch of documents related to Rembrandt, and so it said Rembrandt. Another, more famous blunder, because it happened at the competition: the category was US cities, and the question was, its -- meaning this city's -- largest airport is named for a World War II hero, and its second largest for a World War II battle. And again, those of you quick at that will know that in Chicago there is this O'Hare Airport, and O'Hare happened to be a famous hero from the war. And the second airport is called Midway, which is a famous battle from the Second World War. And Watson presses the button and says Toronto. And there's a sort of gasp in the audience, and also on like tens of millions or hundreds of millions of television sets around the world. And again, there are some stupid reasons. Watson did machine learning, as I said, and it had statistically figured out that the category part of the clue -- which in this case was US city -- might not be that important, so it should pay less attention to it. It also knew that a Toronto team plays baseball -- is that true? Yes. In the US baseball league. And that, in fact, one of Toronto's airports is named for a hero.
Although, it's a World War I hero. So it put it all together and said Toronto. In any case, it won anyway, because it did an amazing job of answering many more questions, but the question for us is whether this is what we should all be striving for. I'm certainly all in favor of building awesome systems, and they did, and I explained to you why I think it's good, but IBM has not created a machine that thinks like us. And Watson's success didn't bring us even an inch closer to understanding human intelligence. And the positive news, of course, is that those blunders should remind us that the problem is waiting to be solved, and you guys are in good positions to try to do that. And that should be our next big challenge. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_86_iCub_Team_Overview_of_Research_on_the_iCub_Robot.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. CARLO CILIBERTO: Good morning. So today, there is a bit of an overview of the iCub robot, and it will be about one hour, one hour and a half. And we organized the schedule for this time into a series of four small talks and then a demo from the iCub, a live demo. So I will give you an overview of the kinds of skills and capabilities that the iCub has developed so far, while Alessandro, Raffaello, and Giulia will show you what's going on right now on the iCub, part of what's going on with the robot. So they are going to talk about their recent work. So let's start with the presentation. This is the iCub, which is a child humanoid robot. This size. And the project of the iCub began in 2004, and the iCub -- actually, the iCubs, because there are many of them -- they were built in Genoa at the Italian Institute of Technology. And the main motivation behind the creation and the design of this platform was to have a platform in order to study how intelligence, how cognition, emerges in artificial embodied systems. So Giulio Sandini and Giorgio Metta, whom you can see there, are the original founders of the iCub world, and they are both directors at the IIT. And this is a bit of a timeline that I drew. Actually, there were many other things going on during all these 11 years. This is a video celebrating the first 10 years of the project. And actually, you can see many more things that the iCub is able to do. But I selected this part because I think it can be useful, also, if you are interested in doing projects with the robot, to have an idea of the kind of skills and the kind of feedback that the robot can provide you, to do an experiment. So as I told you, the iCub was built with the idea of replicating an artificial embodied system that could explore the environment and learn from it. So it has many different sensors. These are some of them. So in the head, we have one accelerometer and a gyroscope in order to provide inertial feedback to the system, and two Dragonfly cameras that provide medium resolution images. And as you can see there, it's about one meter, one meter and something, tall, and it's pretty light -- 55 kilograms -- and it has a lot of degrees of freedom. So 53 degrees of freedom, and they allow the robot to perform many complicated actions. It's provided with torque and force sensors, and I will go over these in a minute. Its whole body, or at least the covered part of the robot -- the black part that you can see there -- is covered in artificial skin, which provides feedback about contact with the external world. And it also has microphones mounted on the head, but for sound and speech recognition it's probably better to use direct microphone feedback at the moment, because of, of course, noise-canceling problems and so on. So if you're interested in speech and sound feedback, we are going to use another kind of microphone. So during these 11 years, the iCub has been involved in many, many projects, and indeed, part of what I'm going to show you is the result of the joint effort of many labs, mainly in Europe. These are mostly European projects.
But the iCub is also an international partner of the CBMM project. So regarding the force/torque sensors, they are these sensors that you can see there. So they provide [INAUDIBLE] and also [INAUDIBLE]. And they're mounted in each of the limbs of the robot and in the torso. And they allow the robot to [INAUDIBLE] interaction with the world. And indeed, with this kind of sensing, it can do many different things. For instance, in this video, I'm showing an example of how the feedback provided by the force/torque sensors can be used to guide the robot and teach it different kinds of actions -- like, in this case, a pouring action -- and then repeat it and maybe try to generalize it. So force/torque sensors provide the robot feedback about the intensity of the interaction with the external world. But they don't allow the robot to have an idea of where this kind of interaction is occurring. So, for this, we have artificial skin covering the robot, as I told you. And the technology used for the kind of thing that you can see here, for the palm of the hand of the robot, is capacitive, and it's similar to the technology used for smartphones. If you can see these yellow dots, they are all electrodes that, together with another layer, form a capacitor. The way the plates of this capacitor deform when there is an interaction with the environment allows the skin to provide feedback about the intensity [INAUDIBLE] the location itself. It's providing information about where this interaction is occurring. And the artificial skin is actually really useful for an embodied agent, for reasons like we see in this video. Without using this feedback, if you have a very light object, the robot is not able to detect that it is interacting with something. It's just closing the hand, and it doesn't have any feedback. It's crushing the object. By using the sensors on the fingertips of the hand, the robot is able to detect that it's actually touching something, and therefore, it stops the action without crushing the object. So, other useful things that can be done with artificial skin. This is an example of combining the information from the force/torque sensors and the artificial skin. So the artificial skin allows the robot to detect where the force is applied and the direction of the force that is applied to it. And in that case, the robot is counterbalancing. It's negating the effect of gravity and of internal forces. So it basically has the arm floating around as if it were in space, and there is no friction. So by touching the arm of the robot, we have the arm drifting in the direction opposite to where the force, or the torque, is applied. You can see that the arm is turning. But as you can see, it is like it was in space without any friction, because it's actually negating both gravity and internal forces. And finally, again about the artificial skin, there's some work by Alessandro -- he's going to talk about something else, but I find it particularly interesting to show. This is an example of the robot self-calibrating the model of its own body with respect to its own hand. The idea is to have the robot use the tactile feedback from the fingertip and the skin of its forearm, for instance, to learn the relative position between the fingertip and the arm. And therefore, it's able, first by touching itself, to learn the correspondence, and then to actually show that it has learned this kind of correlation by reaching for the point when someone else touches the same point. And it tries to reach it.
And therefore, this can be seen as a way of self-calibrating without the need of a model of the kinematics of the robot. The robot would just be able to explore itself and learn how different parts of its body relate one to the other. And again, related to self-calibration, but this is more of a calibration between vision and motor activity. This is a work that appeared in 2014 in which, basically, the correlation between the kinds of actions that the robot is able to perform is calibrated with respect to its ability to perceive the world. So, in this video I'm going to show, the robot is trying to reach for an object and failing in its action, because the internal model of the world that it uses to perform the reaching is not aligned with the 3-D model provided by vision. And this can happen due to small errors in the kinematics or in the vision, and therefore even just small errors cause a complete failure of the system. And therefore, the robot tries to correlate that information, the one from the kinematics, with vision. In this case, the robot is looking at the fingertip to see where it actually is in the image, and where the kinematic model predicts the hand should be. So the green dot is the point, predicted by kinematics, where the system expects the fingertip to be, versus where the fingertip actually is. Of course, by learning this relation, the robot is able to cope with this kind of misalignment. And therefore, after a first calibration phase, it's able to perform the reaching action successfully, as you will see in a moment. Also, this kind of calibration ability would be pretty useful in situations in which the robot is damaged, and therefore its actual model changes completely. As you can see, now it's reaching and it's performing the grasp correctly. Finally, before going on with the actual talks, I'm going to show a final video about balancing. Some of you have asked if the robot walks, and the robot is currently not able to walk, but this is a video from the people from the group that is actually in charge of making the robot walk. The first step, of course, will be balancing. And this is an example of it. It's actually one-foot balancing, where multiple components of what I've shown you about the iCub so far -- torque sensing, inertial sensing -- are combined together to have the robot stand on one foot, and also be able to cope with external forces that could make it fall. They are applying forces to the robot, and it's able to detect the force, to cope a bit with the forces, and to stay stable. OK, so this was just a brief overview of some things that can be done with the iCub. And actually, the next talks will be a bit more about what's going on with it. ALESSANDRO RONCONE: I want to talk to you about part of my PhD project that was about tackling the perception problem -- tackling the perception problem through the use of multisensory integration. And specifically, I narrowed down this big problem by implementing a model of PeriPersonal Space on the iCub, which is a biologically inspired approach. PeriPersonal Space is a concept that has been known in neuroscience and psychology for years. And so, let me start with what PeriPersonal Space is, and why it is so important for humans and animals. It is defined as the space around us, within which objects can be grasped and manipulated. It's an interface, basically, between our body and the external world.
And for this reason, it benefits from a multimodal, integrated representation that merges information between different modalities. And historically, these have been the visual system, the tactile system, proprioception, the auditory system, and even the motor system. Historically, it has been studied by two different fields: neurophysiology on one side, and all that could be related to psychology and developmental psychology on the other. They basically followed two different approaches, the former being bottom up, the latter being top down. And they came out with different outcomes. The former emphasizes the role of perception and its interplay with the motor system in the control of movement, whereas the latter was focusing mainly on the multisensory aspect, that is, how different modalities are combined together in order to form a coherent view of the body and the nearby space. Luckily, in recent years, they decided to converge to a common ground and a shared interpretation, and for the purposes of my work I would like to highlight the main aspects. Firstly -- and this one might be of interest from an engineering perspective -- PeriPersonal Space is made of different reference frames that are located in different regions of the brain. And there might be a way for the brain to switch from one to another, according to different contexts and goals. And secondly, as I was saying, PeriPersonal Space benefits from multisensory integration in order to form a coherent view of the body and the surrounding space. In this experiment made by Fogassi in 1996, they basically found a number of so-called visuo-tactile neurons, that is, neurons that fire both if stimulated on a specific skin part and if an object is presented in the surrounding space. So this means that these neurons code both the visual information and the tactile information. But they also have some proprioceptive information, because they are basically attached to the body part that they belong to. Lastly, one of the main properties of this representation is its plasticity. For example, in this experiment made by Iriki ten years ago, the extension of this receptive field in the visual space, in the surrounding space, after training with a rake, has been shown to grow to enclose the tool, as if the tool becomes part of the body. So through experience and through tool use, the monkey was able to grow this receptive field. Those are properties that are very nice, and we would like them to be available for the robot. And, in general, in robotics, the work related to PeriPersonal Space can be divided into two groups. On one side, the models and the simulations, basically. The closest one to my work was the one from Fuke, a colleague from [INAUDIBLE] lab, in which they used a simulated robot in order to model the mechanisms that lead to this visuo-tactile representation. On the other side, there are the engineering approaches, which are few. The closest one is this one by Mittendorfer from Gordon Cheng's lab, in which they first developed a multimodal skin -- so they developed the hardware to be able to do that -- and then they used it to trigger local avoidance responses, reflexes to incoming objects. We are trying to position ourselves in the middle. Let's say, we are not trying to create a perfect model of PeriPersonal Space from a biological perspective, but on the other side, we would like to have something that is also working and useful for our purposes.
So from now on, I will divide the presentation into two parts. The first will be about the model, so what we think will be useful for tackling the problem; on the other side, I will show you an application of this model, that is, basically using the learned representation in order to trigger avoidance responses or reaching responses distributed throughout the body. So let me start with the proposed model of PeriPersonal Space. Loosely inspired by the neurophysiological findings we discussed before, we developed this PeriPersonal Space representation by means of a set of spatial receptive fields that go out from the robot's skin. So basically, they extend the tactile domain into nearby space. Each taxel -- that is, each tactile element the iCub skin is composed of -- will experience a set of multisensory events. So basically, you are letting the robot learn these visual-tactile associations by taking an object and making contact on the skin part. By tactile experience, the robot learns a sort of probability of being touched, prior to contact, when a new incoming object is presented. And we basically created this cone-shaped receptive field going out from each of the taxels. And for any object that enters this receptive field, we keep what we called a buffer of its path, so basically, the idea is that the robot has some information about what was going on before the touch, the actual contact. And if the object eventually ends up touching the taxel, it will be labeled as a positive event that will reinforce the probability of an object like this ending up touching the taxel. If not -- for example, it might be that the object enters this receptive field and, in the end, ends up touching another taxel -- this will be labeled as a negative. So at the end, we will have a set of positive and negative events a taxel can learn from. This is a three-dimensional space, because the distance is three dimensional. And we narrowed it down to a one-dimensional domain by basically taking the norm of the distance from the relative position of the object and the taxel, in order for us to be able to cope with the calibration errors, which were amounting to a couple of centimeters, which is significant. The one-dimensional variable has been discretized into a set of bins. And for each bin, we computed the probability of an event belonging to that bin ending in a touch. So the idea is that, at 20 centimeters, the probability of being touched would be lower than at zero centimeters. This is the intuitive idea. Over this one-dimensional discretization, we used a Parzen window interpolation technique in order to provide us with a smooth function that, at the end, gives us an activation value that depends on the distance of the object. So as soon as a new object enters the receptive field, the taxel will fire before being contacted. We did, basically, two experiments. Initially, we did a simulation in MATLAB in order to assess the convergence of the long-term learning and the one-shot learning behavior, and to assess if our model was able to cope with noise and with the calibration errors. And then we went on the real robot. We presented it with different objects. And we were basically touching the robot 100 times in order to make it learn these representations. So, trust me, I don't want to bother you with these kinds of technicalities, but we did a lot of work. This is, basically, the math of the result.
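A minimal sketch of the per-taxel learning just described: positive and negative approach events, binned by distance, give an empirical probability of touch, and a Parzen-window (Gaussian kernel) smoothing turns the histogram into a smooth activation curve (all the numbers here are invented):

    import numpy as np

    bins = np.linspace(0.0, 0.20, 11)                          # 0 to 20 cm in 2 cm bins
    touched   = np.array([30, 24, 15, 9, 5, 3, 2, 1, 1, 0])    # positive events per bin
    untouched = np.array([2, 4, 10, 16, 22, 27, 30, 33, 34, 36])

    p_touch = touched / (touched + untouched)                  # empirical P(touch | distance)

    def activation(d, bandwidth=0.03):
        # Parzen-window smoothing over the bin centers.
        centers = 0.5 * (bins[:-1] + bins[1:])
        w = np.exp(-0.5 * ((d - centers) / bandwidth) ** 2)
        return float(np.sum(w * p_touch) / np.sum(w))

    print(activation(0.00))   # high activation near the skin
    print(activation(0.15))   # low activation far away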
So, let me go on to the second part, in which the main problem was for the robot to detect the object visually. In order for us to do that, we developed a 3D tracking algorithm that was able to track a [INAUDIBLE] object, basically. To design it, we used some software that was already available in the iCub software repository. The engine provides you with some basic algorithms that you can play with. Namely, we used a two-dimensional optical flow made by Carlo, a 2D particle filter, and a 3D stereo vision algorithm, which is basically the same as I was showing before during the recognition game. And this was basically feeding a Kalman filter to provide the robot with an estimation of the position of the object. So, the idea is that the motion detector from the optical flow acts as a trigger for the subsequent pipeline, in which, basically, after a consistent enough motion in this optical flow module, a template is tracked in the 2D visual field by the particle filter. Then this information is sent to the 3D depth map, and this feeds the Kalman filter in order to provide us with a stable representation, because, obviously, the stereo system doesn't work that well in our context. And this, if it works-- no. OK. On my laptop, it works here. OK. Now it works. OK. This is the idea. So I was basically waving, moving the object in the beginning. OK. Then, when it is detected, the tracker starts. And you can see here the tracking. This is the stereo vision. This is the final outcome. This was used for the learning. We did a lot of iterations of these objects approaching the skin on different body parts. This is the graph. I don't want to talk about that. So let me start with the video. This is basically the skin. And this is the part that was trained before. When there is a contact, there is activation here. You can see the activation here. And soon after -- this thing worked also with one example -- the taxel starts firing before the contact. And obviously, this improves over time. And it depends on the body part that is touched. For example, if I touch here, I'm coming from the top. So the representation starts firing mainly here. And this, obviously, depends on the specific body part. Now, I think that I'm going to touch the hand. And so after a while, you will have an activation on the hand. Obviously, I will have also some activation in the forearm, because I was getting closer to the forearm. And as an application of this -- this alone is simply a representation, so it's not that usable -- we basically exploited it in order to develop an avoidance behavior, a margin of safety around the body. Let's say, if the taxel is firing, I would like the arm to go away from the object, assuming that this can be a potentially harmful object. And the other way around, I would like it to be able to reach the object under consideration with any body part. So to this end, we developed an avoidance and catching controller that was able to leverage this distributed information and perform sensor-based guidance of the motor actions by means of these visual-tactile associations. And this is basically how it works. So this is the testing stage. So it has already learned the representation. As soon as I get closer, the taxel starts firing, because of the probabilities it was learning. And the arm goes away. Obviously, the movement depends on the specific skin part that has been touched. If I'm touching here, the arm will go away from here.
If I'm coming from the top -- I think this one was from the top, yes -- the arm will go away from the back. The arm will be going away in another direction. And the idea here is not to, basically, tackle the problem with a classical robotics approach. The basic idea is that this behavior emerges from the learning. And the idea was very simple. We were basically looking at the taxels that were firing. If they were firing enough, then we were recording their positions. And we were doing, basically, a population coding, that is, a weighted average according to the activations and the predictions. We did that both for the positions of the taxels and for the normals. So at the end, if you have a bunch of taxels here, we will end up with one point to go away from. And on the other side, the catching, the reaching, was basically the same, but in the opposite direction. So if I want to avoid, I do this. If I want to catch, I do this. Obviously, if you do it with the hand, this would be standard robotic reaching. But this can actually be triggered also on different body parts. As you can see here, I get a virtual activation, and then the physical contact. And yes, basically, our design was to use the same controller for both of the behaviors. OK. These are also some technicalities that I don't want to show you. So in conclusion, the work presented here is, to our knowledge, the first attempt at creating a decentralized, multisensory, visual-tactile representation of a robot's body and its nearby space by means of the distributed skin and interaction with the environment. One of the assets of our representation is that learning is fast. As you were seeing, it can learn also from one single example. It's in parallel for the whole body, in the sense that every taxel learns its own representation independently. It's incremental, in the sense that it converges toward a stable representation over time. And importantly, it adapts from experience. So basically, it can automatically compensate for errors in the model, which, for humanoid robots, is one of the main problems when merging different modalities. OK. Thank you. If you have any questions, feel free to ask.
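A sketch of the population coding described in this talk: the positions and normals of the firing taxels are averaged, weighted by their activations, to obtain one point and one direction to move away from, or, with the sign flipped, to reach with (all the numbers are invented):

    import numpy as np

    positions = np.array([[0.10, 0.02, 0.30],    # 3-D positions of firing taxels
                          [0.11, 0.03, 0.31],
                          [0.09, 0.02, 0.29]])
    normals   = np.array([[0.0, 0.0, 1.00],      # their surface normals
                          [0.1, 0.0, 0.99],
                          [-0.1, 0.0, 0.99]])
    acts      = np.array([0.9, 0.6, 0.3])        # per-taxel activations

    w = acts / acts.sum()
    point = w @ positions                        # one weighted contact point
    direction = w @ normals
    direction /= np.linalg.norm(direction)       # one weighted mean normal

    avoid_target = point + 0.05 * direction      # avoidance: move along the normal
    reach_target = point - 0.05 * direction      # catching: same controller, opposite sign
    print(avoid_target, reach_target)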
RAFFAELLO CAMORIANO: I am Raffaello. And today, I'll talk to you about a little bit of my work on machine learning and robotics, in particular some subfields of machine learning, which are large-scale learning and incremental learning. But what do we expect from a modern robot? And how can machine learning help out with this? Well, we expect modern robots to work in unstructured environments which they have never seen before, and to learn new tasks on the fly, depending on the particular needs throughout the operation of the robot itself, and across different modalities -- for instance, vision, of course, but also tactile sensing, which is available on the iCub, and also proprioceptive sensing, including force sensing, [INAUDIBLE] and so on and so forth. And we want to do all of this throughout a very long time span, potentially, because we expect robots to be companions of humans in the real world, operating for maybe years or more. And this poses a lot of challenges, especially from the computational point of view. And machine learning can actually help with tackling these challenges. For instance, there are large-scale learning methods, which are algorithms which can work with very large-scale datasets. For instance, if we have millions of points gathered by the robot cameras throughout 10 days and we want to process them, well, if we use standard machine learning methods, that will be a very difficult problem to solve if we don't use, for instance, randomized methods and so on and so forth. Machine learning also has incremental algorithms, which allow the learned model to be updated as new, previously unseen examples are presented to the agent. And also, there is the subfield of transfer learning, which allows knowledge learned for a particular task to be used for serving another related task, without the need for seeing many new examples for the new task. So my main research focuses are in machine learning. I work especially on large-scale learning methods, incremental learning, and the design of algorithms which allow for computational and accuracy trade-offs. I will explain this a bit more later. And as concerns robotic applications, I work with Giulia, Carlo, and others on incremental object recognition, so in a setting in which the robot is presented new objects throughout a long time span, and it has to learn them on the fly. And also, I'm working in a system identification setting, which I will explain later, related to the motion of the robot. So this is one of the works which has occupied my last year. And it is related to large-scale learning. So if we consider that we may have a very large n, which is the number of examples we have access to, in the setting of kernel methods we may have to store a huge matrix, the kernel matrix K, which is n by n, and which could be simply impossible to store. So there are randomized methods, like the Nystrom method, which enable us to compute a low-rank approximation of the kernel matrix simply by drawing a few points, m samples, at random, and building the matrix K_nm, which is just much smaller, because m is much smaller than n. And this is a well-known method in machine learning. But we tried to see it from a different point of view than usual. Usually, this is seen just from a computational point of view, in order to fit a difficult problem inside computers with limited capabilities, while we proposed to see the Nystrom approximation as a regularization operation itself. So, as you can see here, in the usual way in which the Nystrom method is applied, for instance with kernel regularized least squares, the parameter m, the number of examples we are taking at random, is usually taken as large as possible, in order just to fit in the memory of the available machines. While, actually, after choosing a large m, it is often necessary to regularize again, with Tikhonov regularization, for instance. And this sounds a bit like a waste of time and memory, because, actually, what regularization, roughly speaking, does is to discard the irrelevant eigencomponents of the kernel matrix. So we observed that we can do this by just taking fewer random examples, so having a smaller model which can be computed more efficiently, and without having to regularize again later. So m, the number of examples which are used, controls both the regularization and the computational complexity of our algorithm. This is very useful in a robotic setting, in which we have to deal with lots of data.
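A minimal sketch of Nystrom-subsampled kernel regularized least squares, where the number m of random centers plays the role of the regularizer; this is a generic textbook construction on invented data, not the authors' exact algorithm:

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    n, m = 1000, 50                                # m << n controls cost and regularization
    X = rng.uniform(-3, 3, (n, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

    centers = X[rng.choice(n, m, replace=False)]   # m points drawn at random
    K_nm = gaussian_kernel(X, centers)             # n x m instead of n x n
    K_mm = gaussian_kernel(centers, centers)

    # Least squares on the subsampled columns; the tiny K_mm term is only
    # numerical jitter, since m itself does the regularizing.
    alpha = np.linalg.solve(K_nm.T @ K_nm + 1e-8 * K_mm, K_nm.T @ y)

    x_test = np.array([[0.5]])
    print(gaussian_kernel(x_test, centers) @ alpha)   # close to sin(0.5) ~ 0.479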
As regards the incremental object recognition task, this is another project I'm working on. Imagine that the robot has to work in an unknown environment, and it is presented novel objects on the fly. And it has to update its object recognition model in an efficient way, without retraining from scratch every time a new object arrives. So this can be done easily by a slight modification of the regularized least squares algorithm and proper reweighting. An open question is how to change the regularization as n grows, because we haven't yet found a way to efficiently update the regularization parameter in this case. So we are still working on this. The last project I'll talk about is more related to, let's say, physics and motion. So we have an arbitrary limb of the robot, for instance, the arm. And our task is to learn a model which can provide an inverse dynamics model, so it can predict the internal forces of the arm during motion. This is useful, for instance, in a contact detection setting: when the sensor readings are different from the predicted ones, that means that there may be a contact. Or for external force estimation, or, for example, for the identification of the mass of a manipulated object. So we had some challenges for this project. We had to devise a model which would be interpretable, so in which the rigid body dynamics parameters would be understandable and intelligible for control purposes. And we wanted this model to be more accurate than a standard multibody, rigid body dynamics model. And also, we wanted it to adapt to changing conditions through time. For instance, during the operation of the robot, after one hour, the changes in temperature determine a change also in the mechanical properties of the arm. And we want to accommodate for this in an incremental way. So this is what we did. We implemented a semi-parametric model, in which the first part, which acts as a prior, is a simple incremental parametric model. And then we used random features for building a non-parametric incremental model which can be updated in an efficient way. And we showed with this real experiment that the semi-parametric model works as well as the non-parametric one, but it's faster to converge, because it has initial knowledge about the physics of the arm. And it is also better than the fully parametric one, because it also models, for example, dynamical effects due to the flexibility of the body, and these effects are usually not modeled by rigid body dynamics models. OK. Another thing I'm doing is maintaining the Grand Unified Regularized Least Squares library, which is a library for regularized least squares, of course. It supports large-scale datasets. This was developed in a joint exchange between MIT and IIT some years ago, by others, not by me. And it has MATLAB and C++ interfaces. If you want to have a look at how these methods work, I suggest you try out the tutorials, which are available on GitHub.
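A sketch of the kind of incremental regularized least squares update mentioned in this talk: the inverse covariance is updated with each new example via the Sherman-Morrison identity, so no retraining from scratch is needed (a generic recursive-least-squares construction with invented data, not the GURLS implementation):

    import numpy as np

    d, lam = 5, 1.0
    P = np.eye(d) / lam               # running inverse of (X^T X + lam * I)
    w = np.zeros(d)                   # running weight vector

    def update(x, y):
        global P, w
        Px = P @ x
        P -= np.outer(Px, Px) / (1.0 + x @ Px)   # Sherman-Morrison rank-1 update
        w += P @ x * (y - x @ w)                 # recursive least-squares step

    rng = np.random.default_rng(0)
    w_true = rng.standard_normal(d)
    for _ in range(500):
        x = rng.standard_normal(d)
        update(x, x @ w_true + 0.01 * rng.standard_normal())
    print(np.round(w - w_true, 3))               # close to zero after 500 updates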
GIULIA PASQUALE: I'm Giulia. And I work on the iCub robot with my colleagues, especially on vision and, in particular, on visual recognition. I work under the supervision of Lorenzo Natale and Lorenzo Rosasco. Both will be here for a few days in the following weeks. And the work that I'm going to present has been done in collaboration with Carlo and also Francesca Odone from the University of Genoa. So in the last couple of years, computer vision methods based on deep convolutional neural networks have achieved remarkable performance in tasks such as large-scale image classification and retrieval. And the extreme success of these methods is mainly due to the increasing availability of ever larger datasets. And in particular, I'm referring to the ImageNet one, which is composed of millions of examples labeled into thousands of categories through crowdsourcing methods such as Amazon Mechanical Turk. And in particular, the increased data availability, together with the increased computational power, has made it possible to train deep networks characterized by millions of parameters in a supervised way, from the image up to the final label, through the backpropagation algorithm. And this marked a breakthrough -- in particular, in 2012, when Alex Krizhevsky proposed for the first time a network of this kind trained on the ImageNet dataset and decisively won the ImageNet Large Scale Visual Recognition Challenge in this way. And the trend has been confirmed in the following years, so that nowadays, problems such as large-scale image classification or detection are usually tackled following this deep learning approach. And not only that. It has also been demonstrated, at least empirically -- oh, I'm sorry, maybe this is not particularly clear, but this is the Krizhevsky network -- that models of networks of this kind, trained on large datasets such as the ImageNet one, also provide very good, general, and powerful image descriptors to be applied on other tasks and datasets. In particular, it is possible to use a convolutional neural network trained on the ImageNet dataset, feed it with images, and use it as a black box, extracting a vectorial representation of the incoming images as the output of one of the intermediate layers. Or, even better, it is possible to start from a network model trained on the ImageNet dataset and fine-tune its parameters on a new dataset for a new task, achieving and surpassing the state of the art -- for example, also on the Pascal dataset and other tasks -- following this approach. So it is natural to ask at this point why, instead, in robotics, providing robots with robust and accurate visual recognition capabilities in the real world is still one of the greatest challenges, one that prevents the use of autonomous agents for concrete applications. And actually, this is a problem that is not only related to the iCub platform, but is also a factor limiting the performance of the latest robotics platforms, such as the ones that have been participating, for example, in the DARPA Robotics Challenge. Indeed, as you can see here, robots are still either highly tele-operated, or complex methods to, for example, map the 3D structure of the environment and label it a priori must be implemented in order to enable autonomous agents to act in very controlled environments. So we decided to focus on very simple settings where, in principle, computer vision methods like the ones that I've been describing to you should provide very good performance, because here the setting is pretty simple. And we tried to evaluate the performance of these deep learning methods in these settings. Here you can see the robot, that one, standing in front of a table. There is a human who gives verbal instructions to the robot and also, for example in this case, the label of the object to be either learned or recognized. And the robot can focus its attention on potential objects through bottom-up segmentation techniques -- for example, in this case, color or other saliency-based segmentation methods. I'm not going into the details of this setting, because you will see a demo of it after my talk. Another setting that we are considering is similar to the previous one.
But this time, there is a human standing in front of the robot. And there is no table. And the human is holding the objects in his hands and showing one object after the other to the robot, providing the verbal annotation for that object. In this way the robot, for example here, can exploit motion detection techniques in order to localize the object in the visual field and focus on it. The robot tracks the object continuously, acquiring in this way cropped frames around the object, which are the training examples that will be used to learn the object's appearance. So in general, this is the recognition pipeline that is implemented to perform both of the behaviors that I've been showing you. As you can see, the input is the image, the stream of images from one of the two cameras. Then there is the verbal supervision of the teacher. Then there are segmentation techniques in order to crop regions of interest from the incoming frames and feed these crops to a convolutional neural network. In this case, we are using the famous Krizhevsky model. Then we encode each incoming crop as a vector, the output of one of the last layers of the network. And we feed all these vectors to a linear classifier, which is linear because, in principle, the representation that we are extracting is good enough for the discrimination that we want to perform. And so the classifier uses these incoming vectors either as examples for the training set, or assigns to each vector a predicted label. And the output is a histogram with the probabilities of all the classes. And the final outcome is the one with the highest probability. And the histogram is updated in real time. So this pipeline can be used for either of the two settings that I have described to you. So in particular, we started from trying to list some requirements that according to us are fundamental in order to implement a sort of ideal robotic visual recognition system. And these requirements are usually not considered by typical computer vision methods like the ones that I have described to you, but they are just as fundamental if we want to achieve human-level performance in the settings that I've been showing you. For example, first of all, the system should be, as you have seen, as much as possible self-supervised, meaning that there must be techniques in order to focus the robot's attention on the objects of interest and isolate them from the visual field. Then hopefully, we would like to come out with a system that is reliable and robust to the variations in the environment and also in the objects' appearance. Then also, as we are in the real world, we would like a system able to exploit the contextual information that is available-- for example, the fact that we are actually dealing with videos, so the frames are temporally correlated, and we are not dealing with images in the wild, as in the ImageNet case. And finally, as Raffaello was mentioning, we would like to have a system that is able to learn incrementally, to build ever richer models of the objects through time. So we decided to evaluate this recognition pipeline according to the criteria that I have described to you. And in order to provide reproducibility to our study, we decided to acquire a dataset on which to perform our analysis. However, we would like also to be confident enough that the results that we obtain on our benchmark will hold also in the real usage of our system.
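Here is a hedged sketch of the classifier end of that pipeline: a one-vs-all regularized least squares classifier over the CNN descriptors, with a softmax producing the per-class probability histogram. The class name and the least-squares choice are illustrative assumptions-- the talk only specifies "a linear classifier"-- though they are consistent with the GURLS-style tools mentioned earlier.

```python
import numpy as np

class LinearRecognizer:
    """One-vs-all regularized least squares over CNN descriptors,
    with a softmax turning scores into the on-screen histogram."""

    def __init__(self, n_classes, dim, lam=1e-3):
        self.W = np.zeros((dim, n_classes))
        self.lam = lam

    def train(self, X, labels):
        # X: (n, dim) descriptors; labels: integer class ids
        Y = -np.ones((X.shape[0], self.W.shape[1]))
        Y[np.arange(len(labels)), labels] = 1.0       # +1/-1 targets
        A = X.T @ X + self.lam * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)

    def histogram(self, x):
        s = x @ self.W                                # per-class scores
        p = np.exp(s - s.max())
        return p / p.sum()                            # probability histogram

    def predict(self, x):
        return int(np.argmax(self.histogram(x)))     # highest-probability class
```

In use, train would be called on the descriptors collected while the teacher names an object, and histogram on each new frame's descriptor to drive the real-time display.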
And this is the reason why we decided to acquire our dataset in the same application setting where the robot usually operates. So this is the iCubWorld28 dataset that I acquired last year. As you can see, it's composed of 28 objects divided into seven categories, with four instances per category. And I acquired it on four different days in order to also test incremental learning capabilities. The dataset is available on the IIT website. And you can also use it, for example, for the projects of Thrust 5 if you are interested. And this is an example of the kind of videos that I acquired, considering one of the 28 objects. There are four videos for training, four for testing, acquired in four different conditions. The object is undergoing random transformations, mainly limited to 3D rotations. And as you can see, the difference between the days is mainly limited to the fact that we are just changing the conditions in the environment-- for example, the background or the lighting conditions. And we acquired eight videos for each of the 28 objects that I showed you. So first of all, we tried to find a measure, as I was saying before, to quantify the confidence with which we can expect that the results and the performance that we observe on these benchmarks will also hold in the real usage of the system. And to do this, first of all, we focused only on object identification for the moment. So the task is to discriminate the specific instances of objects among the pool of 28. And we decided to estimate, for an increasing number of objects to be discriminated, from 2 to 28, the empirical probability distribution of the identification accuracy that we can observe statistically for a fixed number of objects. That is depicted here in the form of box plots. And also, we estimated for each fixed number of objects to be discriminated the minimum accuracy that we can expect to achieve with increasing confidence levels. And this is a sort of data sheet. The idea is to give a hypothetical user of the robot an idea of the identification accuracy that can be expected given a certain pool of objects to be discriminated. So the second point that I'll briefly describe to you is that we investigated the effect of having a more or less precise segmentation of the image. So we evaluated the task of identifying the 28 objects with different levels of segmentation, starting from the whole image up to a very precise segmentation of the objects. It can be seen that, indeed, even if in principle these convolutional networks are trained to classify objects in the whole image, as in the ImageNet dataset, it is still true that in our case we observed a large benefit from having a fine-grained segmentation. So probably the network is not able to completely discard the irrelevant information that is in the background. So this is a possible interesting direction of research. And finally, the last point that I decided to tell you about-- I will skip the incremental part, because it's ongoing work that I'm doing with Raffaello-- is about the exploitation of the temporal contextual information. Here, you can see the same kind of plot that I showed you before. So the task is object identification, with an increasing number of objects. And the dotted black line represents the accuracy that you obtain if you consider, as you were asking before, the classification of each frame independently.
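As a sketch of that "data sheet" analysis: given a full confusion matrix over all 28 objects, one can approximate the k-object task by restricting truth and predictions to random subsets, building the empirical accuracy distribution and a minimum accuracy at a chosen confidence level. This is a simplification-- the actual study may retrain the classifier per subset rather than subsetting one confusion matrix-- and the function name is hypothetical.

```python
import numpy as np

def accuracy_datasheet(conf_matrix, k, n_draws=1000, confidence=0.95, seed=0):
    """Empirical distribution of k-way identification accuracy.

    conf_matrix[i, j] counts test frames of object i predicted as j over
    the full pool; each draw restricts truth and prediction to a random
    subset of k objects. Returns all sampled accuracies (the box-plot
    data) and the minimum accuracy expected at the given confidence.
    """
    rng = np.random.default_rng(seed)
    n = conf_matrix.shape[0]
    accs = np.empty(n_draws)
    for t in range(n_draws):
        subset = rng.choice(n, size=k, replace=False)
        sub = conf_matrix[np.ix_(subset, subset)]
        accs[t] = np.trace(sub) / sub.sum()
    return accs, np.quantile(accs, 1.0 - confidence)
```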
So you can see that in this case the accuracy that you get is pretty low, considering that we have to discriminate between only 28 objects. However, it is also true that as soon as you start considering, instead of the prediction given by looking only at the current frame, the most frequent prediction that occurred in a temporal window-- so in the previous, let's say, 50 frames-- you can boost your recognition accuracy a lot. As you can see here, from green to red, increasing the length of the temporal window increases the recognition accuracy that you get. This is a very simple approach, but it shows that the fact that you are actually dealing with videos instead of images in the wild is actually relevant. And it is another direction of research. So finally, in the last part of my talk, I would like to tell you about the work that I'm actually doing now, which concerns object categorization tasks instead of identification. And this is the reason why we decided to acquire a new dataset, which is larger than the previous one, because it is composed not only of more categories, but, in particular, of more instances per category, in order to be able to perform categorization experiments, as I told you. Here, you can see the categories with which we are starting: 28, divided into seven macro categories, let's say. But the idea of this dataset is to have a dataset that is continuously expandable in time. So there is an application that we use to acquire these datasets. And the idea is to perform periodic acquisitions in order to incrementally enrich the knowledge of the robot about the objects in the scene. Also, another important factor regarding this dataset is that, differently from the previous one, it is divided and tagged by nuisance factors. And in particular, for each object, we are acquiring different videos where we isolate the different transformations that the object is undergoing. So we have a video where the object is just shown at different scales. Then it is rotating in the plane, then rotating outside the plane, then it is translating. And then there is a final video where all of these transformations occur simultaneously. And finally, to acquire this dataset we decided to use depth information, so in the end we acquired from both the left and the right cameras. And in principle, this information could be used to obtain the 3D structure of the objects. And this is the idea that we used in order to make the robot focus on the object of interest using disparity. Disparity is very useful in this case, because it allows the robot to detect unknown objects, given just the fact that we want the robot to focus on the closest object in the scene. So it is a very powerful method to have the robot track an unknown object under all different lighting conditions and so on. Yeah. And here, you can see this is the left camera. This is the disparity map. This is its segmentation, which provides an approximate region of interest around the object. And this is the final output. So I started acquiring the first-- well, it should be in red, but it's not very clear, I mean-- I started acquiring the first categories among these 21 listed here, which are the squeezer, the sprayer, the cream, the oven glove, and the bottle. For each row, you see the instances that I collected. And the idea is to continue acquiring them when I go back to Genoa. And so here, you can see an example of the five videos. Actually, I acquired 10 videos per object, five for the training set and five for the test set.
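Picking up the temporal-window point from the start of this passage: here is a minimal sketch of that smoothing, replacing each per-frame prediction with the most frequent label over the last N frames. The class name is hypothetical; the 50-frame default mirrors the example in the talk.

```python
from collections import Counter, deque

class TemporalVote:
    """Replace each per-frame prediction with the most frequent label
    seen over the last `window` frames."""

    def __init__(self, window=50):
        self.buffer = deque(maxlen=window)

    def __call__(self, frame_label):
        self.buffer.append(frame_label)
        return Counter(self.buffer).most_common(1)[0][0]
```

Sweeping the window length from 1 upward reproduces the qualitative green-to-red trend described above: longer windows trade responsiveness for accuracy.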
And you can see that in the different videos the object is undergoing different transformations. And this is the final one, where these transformations are mixed. Oh, the images here are not segmented yet, so you can see the whole image. But in the end, the information about the segmentation and disparity and so on will also be available. And this dataset with the 50 objects that I acquired, together with the application that I'm using to acquire it, is available if you are willing to use it for the projects of Thrust 5, for example, in order to investigate the invariance properties of the different representations. And so that's it. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_13_James_DiCarlo_Neural_Mechanisms_of_Recognition_Part_1.txt | JAMES DICARLO: So let me start by first-- I already alluded to this, but let's talk about the problem of vision. This is just one computational challenge that our brains solve, but it's one that many of us are very fascinated by. As you'll hear in the rest of the course, there are other problems that are equally fascinating. But I'm going to talk about problems of vision. I'm going to talk about a specific problem of vision, and that's the problem of object recognition. So I will try to operationalize that for you. And one thing you'll see when I talk is that our field, even though we can be motivated by words like vision and object recognition, we're going to only make progress if we start to operationally define things and then decide in what domain models are going to apply. And I think that's an important lesson that I hope will come across in my talk. So this is the way computer vision operationally defines part of the problem of object recognition and vision. It's as if you take a scene like this and you want to do things like come up with an answer space that looks like this, where you have noun labels, say a car. And you have what are called bounding boxes around the cars, similarly for people, or buildings, or trees, or whatever nouns that you or DARPA or whoever wants to actually label. Right, so this is just one way of operationalizing vision. But I think it gets at the crux of what we're after, which is, there is what's called latent content in this image that all of us instantly bring to our memories, that we can say, aha, that's a car, that's a building. There are nouns that pop into our heads. We also know other latent information about these things, like the pose of this car, the position of the car, the size of the car. The key point that I'm going to tell you today about this problem is that that information feels to us that it's obvious, but it's quite latent in the image-- that's implicit in the pixel representation. Those of you who have worked on this problem will understand this and those of you who haven't, I hopefully will give you some flavor for what that problem feels like. So I want to back up a bit. This is more from a cognitive science perspective, or a human brain perspective, to ask, why would we even bother worrying about this problem of object recognition? And maybe this is obvious that those of you-- and I don't need to say this, but I like to point out that we think of the representations of the tokens of what's out there in the world as being the substrates of what you might do, what's called higher level cognition, things like memory, value judgments, decisions and actions in the world. Imagine building a robot and having it try to act in the world and it doesn't even really know what's out there. So these are the sort of substrate of these kind of cognitive processes. Again, from an engineering perspective, these are processes or behaviors. This is just a short list of them that might depend on your good abilities to recognize and discriminate among different objects.
I think if you look through this list, you could imagine things that would go terribly wrong if you didn't actually do a good job at identifying what's out there in the world. So that's just to think about, again, as an engineer building a robot. This is a slide I stuck in that I want to connect to this course, the idea that I know many of you are from maybe these backgrounds, or from this background. And when I think about the brain, I have this coin here to say, really these are kind of two sides-- we're studying the same coin from two directions here. And really the question that we have to all be excited about, and I hope many of you are excited about it, is: how does the brain work? And you could do computer science and not care at all about this question. I think it's a little harder to do these and not care about this question. But it's possible, I guess. So these are all trying to answer this question. And this is maybe pretty obvious, but when you have biological brains that are performing tasks better than current computer systems, machines that humans have built, then the flow tends to want to go this way. You discover phenomena or constraints over here. These lead to ideas that can be built into computer code that can say, hey, can I build a better machine based on what we discover over here? And many of us came into the field excited to do this and are still excited about this kind of direction. But an equally important direction is that when you have systems that are matched with our abilities, or that can compute some of the things that we think the brain has to compute, then the flow goes more this way, where there are many possible ways to implement an idea and these become falsifiable. That is, they can be tested against experimental data to ask which of these many ways of implementing a computation are the ones that are actually occurring in the brain. And that's important if you say you want to build brain-machine interfaces, or fix diseases, or do something that's on the level of interacting with the brain directly. I hope that you guys keep this picture in mind because I think it's sort of the spirit of the course that both of these directions are important. And it's not as if we work on this for 20 years and then work on this for 20 years. It's really the flow across them that I think is the most exciting to us. So just to connect to that, a little bit of history of where the field was on this problem of visual recognition. I don't know if many of you have heard this, but here you are at summer school, so there was a Summer Vision Project-- it was called, at MIT. I used to think this story was apocryphal. In 1966, there was a project whose final goal was object identification, which will actually name objects by matching them with a vocabulary of known objects. So this was essentially a summer project to say, we're going to get a couple of undergraduate students together and we're going to build a recognition system in 1966. And this was the excitement of AI, we can build anything that we want. And of course, those of you who know this, this problem turned out to be much, much harder than anticipated. So sometimes problems that seem easy for us are actually quite difficult. If any of you wants this, I would be happy to share this document with you. It's interesting, the space of objects that they describe, things like recognizing-- of course, I would say like coffee cups on your desk. But they also say packs of cigarettes on your desk. So this sort of dates the time of this here.
So it's a little bit like Mad Men or something. So now, here we are today. And I guess I just can't help but sort of get excited about, here's this really cool machine that's just amazing, that does these computations. The thing's got-- I can't tell you all of this-- 100 billion computing elements, solves problems not solvable by any previous machine. And the thing, it looks crazy, but it only requires 20 watts of power. Those of you who have seen this slide, I'm not talking about this thing. I'm talking about that thing right there. So this is the scale of what we're after. And we often talk about power, but this is something engineers are especially interested in as they build these systems: how do our brains solve these problems at such a low wattage, so to speak. This is, again, the spirit of many of the things that I hope that you guys are excited about in the future of this field. Here's another slide that I pulled out that I often like to show: from an engineer's point of view, we often try to say, well, we want to build machines that are as good or better than our brain. So machines today, you guys know this, beat us at many things, straight calculation, they beat us at chess. When I was a grad student, they recently won at Jeopardy. In memory, they've always beaten us. Machines are way better at memory than us, in the simple form of memory. Seeing, in pattern matching-- go to the grocery store: hey, how's that bar code done? I don't know what that was, but it just scans in and somehow it does pattern matching, right? So there are forms of vision where machines are way better than us. But some forms of vision that are more complicated, that require generalization, like object recognition, or more broadly, scene understanding, we like to think that we are still the winners at. And even things that we take for granted, like walking, this is quite a challenging problem. So engineers really want to move this over here. So our goal is to discover how the brain solves object recognition. And the reason I put this up is, from an engineering point of view, that just doesn't mean write a bunch of papers and a textbook that says, this part of the brain does it, but actually help to implement a system that is, at least, matched with us and, I assume, someday will be better than us. And this is also a gateway problem. That is, even if it's just this domain, we think that the systems we're studying might generalize to other, for instance, sensory domains. Gabriel told me you were going to do an auditory-visual comparison session later in the week. That's an engineer's point of view: how do I just build better systems? Let's step back and talk from a scientist's point of view. So this is really now to introduce the talk that I'm going to give you today. So when you're a scientist, what's our job? We say we want to understand. We all write that, understand. What does that mean? Well, what it really means if you boil it down, and I would love to discuss this if you like, is that you have some measurements in some domain. So you can think of this as a state space here. This is like the position of the planets today. And this is like the position of the planets tomorrow. Or you could say, this is the DNA sequence inside a cell. And this is some protein that's going to get made. So you're searching for mappings that are predictive from one domain to another. And we can give lots of examples of what we call successful science, where that's true.
This is the core of science: to predict, given some measurements or observations, what's going to happen, either in the future or in some other set of measurements. So predictive power is the core of all science and the core of understanding. And I think it would be fun if you want to debate that, if you think there's another way. But this is what I come to in thinking about this problem. And the reason I'm bringing this up is because the accuracy of this predictive mapping is a measure of the strength of any scientific field. And some fields are further along than others. And I would say ours is still not very far along. Our job is to bring it from a nonpredictive state to a very predictive state. And so that means building models that can be falsified and that can predict things. And you'll hear that through my talk. As Gabriel mentioned, what we try to do is build models that can predict either behavior or neural activity. And that's what we think is what progress looks like. So now let's translate this to the problem I gave you, which is the problem of vision or, more generally, object recognition. You could imagine there's a domain of images. So just to slow down here, just so everybody's on the same page: each dot here might be all the pixels in this image. And this dot, all the pixels in this image. So there's a set of possible pixel-images that you could see. And we imagine that they give rise to, in the brain, some state space. Think of this as the whole brain for now, just to fix ideas: you could imagine that this image, the one you're looking at, gives rise to some pattern of activity across your whole brain. And this image gives rise to a different pattern of activity across your whole brain. And loosely, we call this the neural representation of this thing. But then what we do is, somehow, when we ask you for behavioral reports, there's a mapping between that neural state space and what we measure as the output. Whether you say it or write it, you might say, that's a face, these are both faces, if I asked you for nouns among them. OK, so this is another domain of measurement. So now you can see I'm setting up the notion of predictivity. And what we want to do is, we have this complex thing over here of images that somehow map internally into neural activity and then somehow map to the thing we call perceptual reports. And notice I've already put things that we call nouns that we usually associate with objects: cars, faces, dogs, cats, clocks, and so forth. OK, so understanding this mapping in a predictive sense is really a summary of what our part of the field is about. And again, accurate predictivity is the core product of the science that underlies our ability to build a system like this-- many of you are interested-- to fix a system like this, or to perhaps even augment our own systems. If we want to inject signals here and have them give rise to percepts, we have to know how this works. A big part of the field of vision has spent a lot of the last three decades working on the mapping between images and neural activity. That's usually called encoding, predictive encoding mechanisms. And it's driven by Hubel and Wiesel's work. People saw this as a great way forward. It's like, let's go study the neurons and try to understand what in the image is driving them. That is, what's an image-computable model in the world that would go from images to neural responses? The other part is that there's some linkage, we think, between the neural activity and these reports.
And notice, this is actually why most of us get into neuroscience, because you notice this arrow is two-way. This is actually quite deep here. From an engineer's point of view, you go, well, there's got to be some mapping between the neural activity and the button presses on my fingers or my saying the word, the noun. There's some causal linkage between this and the things that we observe objectively in a subject. But this is where philosophers debate about, like, well, you know, in some sense these are sort of two sides of the same coin. We say our own perception-- there are some aspects of the internal activity that are the thing that we call awareness or perception. Now I'm not going to get into all that, but I just want to point out that if you're just building models, you can't approach that. It's this sort of strange thing between neurons and these reported states that many of us are fascinated by. So this is called predictive decoding mechanisms. For me, it's all going to be operationalized in terms of reports from humans or animals. And I'll not do that philosophical part, but I thought I'd mention that for those of you who like to think about those things. So for visual object perception, I want to point out that, again, the history of the field has been mostly here. This link has been neglected or dominated by weakly predictive word models. That doesn't mean they're not useful starting points, but they're weakly predictive. And so a weakly predictive word model would be: inferotemporal cortex, a part of the brain I'm going to tell you about today, does object recognition. That model has been around for a long time. It is somewhat predictive because it says, you take that out and all object recognition will get destroyed-- that would be a prediction. Turns out that doesn't actually happen. We can discuss that. But it doesn't tell you how it does it, how to inject signals, which tasks are more or less affected. So that's what I mean by weakly predictive. It's a word model. Face neurons do face tasks-- that's probably true to some extent. But again, it doesn't tell us-- it's more tight. It sort of says, oh, I'll take out these smaller regions and there'll be some set of tasks that involve faces. I don't know, I won't say anything about other tasks. So that's a somewhat more strongly predictive model, but still pretty weakly predictive. And my personal favorite that comes in from reviewers a lot is, attention solves that. So this is just a statement-- just to be on the lookout for word models that don't actually have content in terms of prediction. I don't know what that means. I read this as, the hand of God reaches in and solves the problem. So there's got to be an actual predictive model that can be falsified. OK, so I don't mean to doubt the importance of these. Before people start giving me a hard time: there are attentional phenomena, there are face neurons, there is an IT, that's what we study. I'm just trying to emphasize for you that we need to go beyond word models into actual testable models that make predictions-- models that would stand even if the person claiming them is no longer around; they would make a prediction. Let me try to define a domain. I said we're going to try to define stuff. It's hard to define stuff. It's big, vision, it's a big area. Object recognition, I sort of said it vaguely. And when I say this, I include faces as an object, a socially important one. You'll hear this from Winrich, I think. But I want to say, to try to limit it even further, that's still a big domain.
And so we tried early on to reduce the problem even further to something that is, again, more naturalistic, that we think can give us more traction in this predictive sense. So we started by saying, when you take a scene like this and you analyze it, you may not notice it, but your ventral stream-- really, your retina-- has high acuity in, say, the central 10 degrees. There's anatomy that I'll show you later that the ventral stream is especially interested in processing the central 10 degrees of information. So that's about two hands at arm's length, for those of you in the room. So you may have the sense that you know what's out there, but you don't really. You kind of stitch that together. And lots of people have shown this: the way you stitch this together is by making rapid eye movements around, called saccades, followed by fixations, which are 200 to 500 milliseconds in duration. You don't really see during this time here. It's not as if your brain shuts down, it's just that the movement is too fast for your retina to really keep up with. So you make these rapid eye movements, you fixate, fixate, fixate. And what you do is, that brings this sort of sampled scene to the central 10 degrees, and that might look something like this. So those are 200 millisecond snapshots across that scan path. And I'll play it for you one more time. Now, you should notice that there's one or more objects in each and every image that you probably said, oh, there's a sign. There's a person. There's a car. You might have gotten two out of each one. But you were sort of extracting, at least intuitively to me, at least one or more foreground or central objects when I showed you those images. And that ability to do what I just showed you there, we think, is the core of how you analyze or build up a scene like this, at least how the ventral stream contributes. And therefore, we call that core recognition, which I define as: the central 10 degrees of visual field, 100 to 200 millisecond viewing duration. And again, it's not all of object recognition, but we think it's a good starting point. And the way we probably got into this is because of rapid serial visual presentation movies from the '70s. Molly Potter showed this really nicely. This is a movie that I've been showing for 15 years now. Notice that this is just a sequence of images where there is typically one or more foreground objects. And you should be quickly mapping those to memory, even though I'm not telling you what to expect. Like the Leaning Tower of Pisa, right? I'm not going to tell you that you're going to see Star Wars characters-- well, I just did. But you quickly are able to map those things to some noun or even a more precise subordinate noun. I know this is Yoda. So our ability to do that-- we're very, very good at that. Notice you didn't need a lot of pre-cueing, yet you're still able to do that. And that is really what fascinates us about vision and object recognition in particular. Even without featural attention or pre-cueing, you're able to do a remarkable amount of processing. And I think that's a great demonstration of that. And just to quantify this for you, because sometimes people say, well, you're showing it too short, your vision system doesn't do much: here's an eight-way categorization task I'll show you later, under a range of transformations. These are just the example images of eight different categories of objects. It doesn't really matter much what I do here, you get a very similar curve.
And that is, you get most of the performance gain in about the first 100 milliseconds. This is accuracy-- you're about 85% correct. This is a challenging task, as I'll show you later. It looks easy here, but it's quite challenging. 85% correct; if I let you look at the image longer, up to two seconds, you can bump up to around the 90s. So there is some gain with longer viewing duration, but-- chance is 50-- you get this huge ability. And we're not the first to show this. This is just to show you, in our own kind of task, the data I'm going to tell you about, where we show the image for 100 or 200 milliseconds-- this is the typical primate viewing duration that I pin this on. We use this for reasons of efficiency. But you see, the performance is similar across that time. You get a lot done. Your visual system does a lot of work in that first glimpse. And that's the core recognition that we are trying to study here. And I know it's not all of object recognition or all of vision, but it's now, we think, a much more defined domain that we can make progress on. And that's what we've been working on. And that's essentially what I'm going to talk about today. So think of vision, object recognition, within that core recognition. This is David Marr. David and Tommy Poggio, I studied with a long time ago. And Tommy wrote the introduction to David's-- if you guys haven't read this book, Vision-- has anybody, do you guys know this book? It's really a classic book in our field. It's the first couple of chapters that are the part you should really read. That's the best part of the book. And one of the things that you take from this book, that I think David and Tommy helped to lay out a long time ago, is that there is this challenge of level. I think one of the things I take from this is, they tried to define three clean levels. It turns out not to be this clean in practice. But there's one level called computational theory: what's the goal, what's appropriate, what's the logic, and by what strategy can it be carried out. There's another level which is, OK, now once you decide that, how should you represent the data? How can you implement an algorithm to do it? And then there's the actually, how do you run it, how do you build it in hardware? And neuroscientists often come in, they're like, I'm going to study neurons, and it's sort of like jumping into your iPhone and saying, I'm going to study transistors. They often tend to start at the hardware level. And I think that's the biggest lesson you take from this: like, oh wait, there's something going on here, these transistors are flying. And you'd make some story about it if you were recording from the brain or measuring transistors in my iPhone. But I think the important point to take from this is it helps to start thinking about what's the point of the system. What might it be doing? How might you solve that problem? And that leads you then to algorithm. And then you think about representations. So it's sort of a top down approach, rather than just digging into the brain and hoping that the answers will emerge. So I'm going to try to give you that top down approach in this problem that I'm talking about. I've already given you a bit of it by introducing you to the problem. I'll say a little bit more about that and step down a little bit this way. And so this kind of thinking, I think, is important to making progress in how the brain computes things.
So here's a related slide that I made a long time ago that, again, I pulled out for you guys, that I think helps bridge between what I just said about the Marr levels of analysis and whether you're a neuroscientist or cognitive scientist, or a computer vision or machine learning person. So the first is, what is the problem we're trying to solve? So that's Marr computational level one. So computational vision-- now operationally, you'll hear folks in machine learning, they might say, well, there are some benchmarks, that's good. There's an ImageNet Challenge or whatever challenge they want to solve. Sometimes they'll say, well, the brain solves it. That's not good, because they didn't really define the problem. Neuroscientists will say, well, it's something like perception or behavior, or there's some sort of behavior that they imagined, although characterizing that behavior is not usually their primary goal. But I think there is at least some progress in that regard. Now, what does a solution look like? This is really just to talk about language. So: useful image representations for machine learning, like what we might call features-- but neuroscientists will talk about explicit neuronal spiking populations. You heard this in Haim's talk. He was using these words interchangeably. Again, this may be obvious to you guys, but I thought it's worth going through. So this is like Marr level two, representation. How do we instantiate these solutions? So this is still level two: algorithms, or mechanisms that actually build useful feature representations. Neuroscientists will think about neuronal wiring and weighting patterns that are actually executing those algorithms. This is what we think is a bridging language there. And then there's this deeper level that came up in the questions, which is, how would you construct it from the beginning? Learning rules, initial conditions, training images are words that are used here. There is a learning machine. Here, neuroscientists talk about plasticity, architecture, and experience. But again, those are similar questions, just with different language. And I'm doing this because I think the spirit of this course is to try to build these links at all these different levels here. OK, so hopefully that kind of helps orient you to how we think about it. Let me just go and say, I want to talk about number one. What is the problem we're trying to solve, and why is it hard? I said object recognition is hard, and I showed you that MIT challenge and it was difficult. Maybe it's hard because there are lots of objects. Who thinks that's why it's hard? Who thinks that's not why it's hard? You think computers can list a bunch of objects? It's easy, right? This is what I said about memory: it's a big long list of stuff. Computers are good at that. There are going to be thousands of objects. A list of objects is not a hard thing for a machine to do. What's hard is that each object can produce an essentially infinite number of images. And so you somehow have to be able to take some samples of certain views or poses of an object-- this is a car under different poses-- and be able to generalize or to predict what the car might look like in another view. This is what's called the invariance problem. And it's due to the fact that, again, there's identity-preserving image variation. This is why the bar code reader in your supermarket works fine, because the code is always laid out very simply.
But when you have to be able to generalize across a bunch of conditions, potentially things like background clutter, even more severely occlusion, things you heard from Gabriel, or you may even want to generalize across the class of cars, where the cars have slightly different geometry but they're still cars-- these kinds of generalizations are what make the problem hard. So I'm lumping them all together in what we call the invariance problem. Many of you in the room know this is the hard problem. And I think that hopefully it fixes ideas of, that's what you should think about. It's not the number of objects, but it's the fact that it has to deal with that invariance problem. Haim was talking about manifolds, and this is my version of that. So this is to introduce you to the problem of, why that invariance problem-- what it looks like or feels like. I'm not going to give you math on how to solve it. It's just a geometric feel for the problem. So imagine you're a camera-- or your retina-- which is capturing an image of an object. Let's call this a person; I think I called him Joe. So when you see this image of Joe-- and this is the retina, so now this is a state space of what's going on in your retina. So it's a million retinal ganglion cells. Think of them as having an analog value out of each, so this is a million dimensional state space. So when you see this image of Joe, he activates every retinal ganglion cell, some a lot, some a little, but he's some point in that million dimensional space. OK, everybody with me? If everybody's heard all this before and wants me to go on, everybody wave your hand and I'll move on. AUDIENCE: No, it's good. JAMES DICARLO: Keep going, OK. So the basic idea is that if Joe undergoes a transformation, like a change in pose, what that does is-- it's only 1 degree of freedom; I'm turning, under the hood, one of those latent variables. If I had a graphics engine, I'm changing the pose latent variable. It's only one knob that I'm turning, so to speak. And that means there's one line through here as Joe projects across these different images here. And I'm ignoring noise and things. This is just the deterministic mapping onto the retinal ganglion cells. So Joe goes-- [MOVING NOISE] --and he goes over here. And if I turn the other knob, he goes over here. And so I could imagine, if I turned those two knobs along the two axes of pose, all ways possible, and plotted this in the million dimensional state space, there'd be this curved up sheet of points, which you could think of as Joe's identity manifold over those two degrees of view change. It's only two dimensions; it's hard to start showing more than this. But it's this curved up sheet of points. Everybody with me so far? You don't actually get to see all those. You could imagine a machine actually running them all, but you don't really get to see them. You've got to get samples of them. But there's some underlying manifold structure here. Now, what's interesting and what's important to point out is that this thing, even though I've drawn it as a little curve, is highly complicated in this native pixel space. It's all curved up and bending all over the place. And the reason that matters, and this is what Haim introduced you to, is that if you want to be able to separate Joe from another object-- say not Joe, another person, say-- then you need a representation. I showed you retinal ganglion cells. This is another imaginary state space where you can take simple tools to extract the information.
And the simple tools that we like to use are linear classifiers. But you can use other simple tools. Haim used the exact same description to you guys in his talk: that you have some linear decoder on the state space that can say, oh, it can separate cleanly Joe from not Joe. So these manifolds are nicely separated by a separating hyperplane. That's what these tools tend to do-- they like to cut planes. This is one thing they like to do; or they want to find locations or regions, like compact regions in this space, depending on what kind of tool you use. But you don't want the tool having to do all kinds of complicated tracing through this space. That's basically the original problem itself. So what you need is-- you have a simple tool box, which we think of as downstream neurons. So a linear classifier, as an approximation, is like a dot product. It's a weighted sum, which is what we neuroscientists think of downstream neurons as doing. So it's a weighted sum. And if we want an explicit representation in some neural state space, then we need to be able to take weighted sums of some population representation to be able to separate Joe from not Joe, and Sam from Jill, and everything from everything else that we want to separate. If we had such a space of neural population, we'd call that a good set of features or an explicit representation of object shape. And for any aficionados here, it's not just clean linear separation, it's actually being able to find this with a low number of training examples. So that turns out to be important. But it helps to fix ideas to think about linear separation, ideally with a low number of training examples. So that's a good representation. And notice, I'm starting to mix up terms here. I am assuming, when I talk about shape, that that will map cleanly to identity, or what you might call, broadly, category. That's another topic I won't talk about; just think about the shape of Joe, or separating one geometry from another. Now, here's a simulation that my first graduate student, Dave Cox, who's now at Harvard, did. This is a number of years old. This takes these two face objects, renders them under changes in view, and then he actually simulated the manifolds in a 14,000 dimensional space. And then he wanted to visualize it. And because we wanted to try to make the point that these manifolds of these two objects are highly curved and highly tangled, this is a three dimensional view. Remember, it's sitting in a 14,000 dimensional simulation space. You can't view that space. This is a three dimensional view of it. And the point is that it's like two sheets of paper being all crumpled up together, and they're not fused. They look fused here because it's in three dimensions. But they're not actually fused. But they're complicated-- you can't easily find a separating hyperplane to separate these two objects. We call these tangled object manifolds. And really, they're tangled due to image variation. Remember, if I didn't change those knobs of view or position or scale, there would just be two points in the space and it would be easy. That's the easy problem of listing objects. But if they have to undergo all this transformation, they become these complicated structures that need to be untangled from each other.
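As an aside, here is a toy numpy sketch of this picture: one latent pose knob swept through a fixed random nonlinearity gives each identity a curved manifold in a high-dimensional "pixel" space, and a downstream "neuron"-- a regularized weighted sum-- is fit to separate the two. All names and the particular nonlinearity are illustrative assumptions; unlike real pixel-level manifolds under full variation, this toy pair usually does end up linearly separable, so it illustrates the setup rather than the difficulty.

```python
import numpy as np

def manifold(identity_seed, n_views=200, dim=1000):
    """Toy identity manifold: one latent pose knob swept end to end,
    mapped through a fixed random nonlinearity into `dim` 'pixels'."""
    pose = np.linspace(-np.pi, np.pi, n_views)[:, None]
    g = np.random.default_rng(identity_seed)
    W1, W2 = g.normal(size=(1, 64)), g.normal(size=(64, dim))
    return np.tanh(np.tanh(pose @ W1) @ W2)        # (n_views, dim)

joe, sam = manifold(1), manifold(2)
X = np.vstack([joe, sam])
y = np.hstack([np.ones(len(joe)), -np.ones(len(sam))])

# A downstream 'neuron': a weighted sum, fit by regularized least
# squares on half of the views and tested on the held-out views.
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
tr, te = idx[:len(X) // 2], idx[len(X) // 2:]
w = np.linalg.solve(X[tr].T @ X[tr] + 1e-3 * np.eye(X.shape[1]),
                    X[tr].T @ y[tr])
print("held-out accuracy:", np.mean(np.sign(X[te] @ w) == y[te]))
```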
And they somehow are transformed, as Haim mentioned-- they're transformed by some non-linear transformation to some other neural population state space, shown here, where things look more like this. The latent variable structure is more explicit, so that you can easily take things like separating hyperplanes to identify things like shape-- which, again, roughly corresponds to identity-- or other latent parameters, like position and scale. You maybe haven't thrown away all these other latent parameters. And if I have time, I'll say something about that, so you don't just get identity. But if you can untangle this, you would have a very nice representation with regard to those originally latent parameters. That's the dream of what you'd like to do. It's like reverse graphics, if you will. So this is what we call untangled explicit object information. And we think it lives somewhere in the brain, at least to some degree. And I'll show you the evidence for that later on. So what you have then is a poor encoding basis, the pixel space. And somewhere in the brain is a powerful encoding basis, a good set of features. And as Haim mentioned, as I already said, this must be a non-linear transformation, because linear transformations are just rotations of that original space. So now let's go down to-- actually, this would be Marr level three. Let's go to instantiation. Let's get into the hardware here. We're supposed to be talking about brains. So I'm going to give you a tour of the ventral stream. So we would love to know how this brain solves it. This is the human brain. This is a non-human primate. This is not shown to scale. This is blown up to show you it's a similar structure: temporal lobe, frontal lobes, occipital lobe. There is a non-human primate. We like this model for a number of reasons. One reason that we like it is that they are very visual creatures; their acuity is very well matched to ours. In fact, even their object recognition abilities are actually quite similar to our own. This may be surprising to you, but let me just show you some data for that. This is actually data from Rishi Rajalingham, in my lab. It says in press, but this just came out. These are the confusion matrix patterns of humans trying to discriminate different objects under those transformations that I showed you earlier, where they're not just seeing images, but they have to deal with these invariances. And this is rhesus monkey data trying to do the same thing. And the task goes: I'll give you a test image and then you get choice images. Was it a car or a dog? I'll show you an image-- which choice was it, a dog or a tree? And you're trying to entertain many objects all at once, and you get an image under some unpredictable view and unpredictable background, and then you have to make a choice. So this is the confusion difficulty. And when you look at this, it's intuitive that these are sort of geometrically similar. Camel is confused with dog, and tank is confused with truck, and that's true of both monkeys and humans. And at some level, this shouldn't be surprising to you. The same tasks that are difficult for humans are difficult for monkeys, because probably they share very similar processing structures. They don't have to bring in a bunch of knowledge about tanks being driven by people or whatever; they just have to say, was there a tank or a truck. And under those conditions, they make very similar patterns of confusion.
And these patterns are very different from those that you get when you run classifiers on pixels or low level visual simulations. But they're very similar to each other-- in fact, statistically indistinguishable, monkeys and humans, on these kinds of patterns of confusion. OK, so that's one reason we like this subject, the monkey model: the behavior is very well matched to the humans'. The other reason is that we know from a lot of previous work that I alluded to, that some studies have shown that lesions in these parts of the brain can lead to deficits in recognition tasks. So again, we think the ventral stream solves recognition. So we know a weak word model of where to look, we just don't know exactly what's going on there. Just to orient you, these ventral areas: V1, V2, V4, and inferotemporal cortex, or IT cortex. IT projects anatomically to the frontal lobe, to regions involved in decision and action, and around the bend to the medial temporal lobe, to regions involved in formation of long-term memory. Because these are monkeys and not humans-- and Gabriel mentioned this in his talk-- we can go in and we can record from their brains, and we can perturb neural activity in their brains directly. And we can do that in a systematic way. This is the advantage of an animal model as opposed to a human model. OK, as neuroscientists now, we've taken a problem, translated it to behavior, taken that behavior into a species we can study, we know roughly where to look, and now we want to try to understand what's going on. So as engineers, we take these curled up sheets of cortex and think of them, as I've already been showing you, as populations of neurons. So there are millions of neurons on each of these sheets. I'll give you numbers on a slide coming up. There's some sort of processing that may be common here-- I put these T's in; there might be some common cortical algorithm processing forward this way. There's also inter-cortical processing. And there's also some feedback processing going on in here. So all that's schematically illustrated in this slide that I'll keep bringing up when we talk about these different levels of the ventral stream. Now I'm mostly going to be talking about IT cortex here at the end. Why do we call these different areas? One reason is that there's a complete retinotopic map, a map of the whole visual space, in each of these different levels. In the retina, there's one. In the LGN-- in the thalamus-- there's another. In V1, there's another map. In V2, there's another map. In V4, there's another map. In IT, it's less clear that it's retinotopic; we're not even sure that IT is one area. Maybe we'll have time, I'll say more about that detail. So it's not that retinotopic in IT, except the most posterior parts of IT. But that's why neuroscientists divide these into different areas. So a key concept, though, for you computationally is, think of each of these as a population representation that's retransforming the data from that complicated space to some nicer space. And it's doing this probably in a stepwise, gradual manner. So IT is believed to be that powerful encoding basis that I alluded to earlier, where you have these nice flattened object manifolds. And I'll show you the evidence for that. This is recently from a review I did that gives more numbers on these things. And I've sized the areas according to their relative cortical area in the monkey. Here's V1, V2, V4, IT. IT is a complex of areas. And I'm showing you these latencies.
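Circling back to the behavioral comparison at the top of this passage: below is a minimal sketch of how one might correlate two confusion patterns (monkey vs. human, or either vs. a pixel-based classifier). The actual study uses noise-corrected consistency measures; this simplified stand-in, with an assumed function name, just correlates row-normalized off-diagonal error rates.

```python
import numpy as np

def confusion_similarity(cm_a, cm_b):
    """Correlate the off-diagonal confusion patterns of two systems.
    Rows are true objects, columns reported objects; rows are
    normalized to rates before the diagonal is masked out."""
    def offdiag(cm):
        cm = np.asarray(cm, float)
        cm = cm / cm.sum(axis=1, keepdims=True)
        return cm[~np.eye(cm.shape[0], dtype=bool)]
    return np.corrcoef(offdiag(cm_a), offdiag(cm_b))[0, 1]
```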
These are the average latencies in these different visual areas. You can see, it's about 50 milliseconds from when an image hits the retina until you get activity in V1. 60 in V2, 70 in V4-- there's about a 10 millisecond step across these different areas. So it's about a 100 millisecond lag between when an image is here and when you start to see changes in activity at this level up here that I'm referring to. When I say IT, I'm referring to AIT and CIT together. That's my usage of the word IT, for the aficionados in the room. And that's about 10 million output neurons in IT, just to fix numbers. In V1 here, you have like 37 million output neurons. There are about 200 million neurons in V1, similar in V2. And many of you probably heard about other parts of the visual system. Here's MT-- many of you have probably heard about MT. So you can see it's tiny compared to some of these areas that I'm talking about here. I'm going to show you some neural data-- I'm just going to give you a brief tour of these different areas, so brief it's almost cartoonish. But at least those of you who haven't seen this should at least be exposed. So in the retina-- you guys know in the retina there are a bunch of cell layers. The retina is a complicated device. I think of it as a beautiful camera. So you're down in the retina. To me, the key thing in the retina is, in the end you've got some cells that are going to project back along the optic nerve. So these are the retinal ganglion cells; they actually live on the surface. The light comes through, photoreceptors are here, there is processing in these intermediate layers, and then there are a bunch of retinal ganglion cell types. There are thought to be about 20 types or so. In the original physiology, there are two functional center types: they have on centers or off centers. Let's take an on center cell: you shine light in the middle of a spot-- now this is a tiny little spot on the retina; the size depends on where you are in the visual field. But you shine a little bit of light in the center, the response goes up. See the spike rate going up here. Put light in the surround, the response rate goes down. So it has an on center, off surround profile. And then there's a flipped type here. So that's the basic functional type. When you think about the retina, it is tiled with all of these point detectors that have some nice center surround effects. There's some nice gain control for overall illumination conditions. But in my toy model of the retina, it's basically a really nice pixel map coming back down the optic tract to the LGN. OK, I'm going to skip the LGN and go straight to V1. People have known for a long time that, functionally, V1 cells have sensitivity especially to edges. They have what's called orientation selectivity. Hopefully this isn't new to you guys. Here's a simple cell in V1. If you shine a bar of light on it inside its receptive field-- does everyone know what a receptive field is? I don't want to go-- OK. It's OK if you ask, because I want to make sure you guys are OK. So the receptive field: you shine a bar of light in it, turn it on in the right orientation, you get a good response out of the cell. Move it off this position, now not much response-- there's a little bit of an off response here. Change the orientation, nothing happens. Full field illumination, nothing happens. OK, so this is called selectivity. That is, there's some portion of the image space that it cares about.
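A quick sketch of that center-surround story: the standard difference-of-Gaussians model of a retinal ganglion cell, in numpy. The parameter values here are arbitrary illustrative choices.

```python
import numpy as np

def dog_receptive_field(size=21, sigma_c=1.0, sigma_s=3.0, on_center=True):
    """Difference-of-Gaussians retinal ganglion cell: a narrow excitatory
    center minus a broader inhibitory surround (sign flipped for the
    off-center type)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    rf = center - surround
    return rf if on_center else -rf

def firing_rate(rf, patch):
    """Rectified linear drive: light in the center raises the rate of an
    on-center cell, light in the surround lowers it."""
    return max(0.0, float(np.sum(rf * patch)))
```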
It doesn't just respond to any light at that spot like the pixel-wise retinal ganglion cell would. So now there's this complex cell, also in V1, which maintains this orientation selectivity across a change in position, as shown here, and also across some changes in scale. So it maintains it, meaning that you have this tolerance-- that's called position tolerance, for position. You can move the bar around, it still likes that oriented bar. But you change its angle and the response goes down. So it still maintains the same selectivity here, but it has some tolerance. So you get this build up of some orientation sensitivity followed by some tolerance. And there are models from Hubel and Wiesel-- they thought that you could build these first and then build these out of those; that's the simple version. And here they are. These are the Hubel and Wiesel models of how you build these: AND-like operators to build selectivity from pixel-wise cells. With an AND-like operator lining these up correctly, you can imagine orientation-tuned cells built this way. There's evidence for this in physiology, that this is how these are constructed. The tolerance of these complex cells is thought to be built by a combination of simple cells. And there's some evidence for this. And this is, again, all the way from Hubel and Wiesel, who won a Nobel Prize for this and related work in the 1960s. And then there were a bunch of computational models that were really inspired by this and I think are still the core models of how the system works. And some of the original ones that were written down are Fukushima's in the '80s, and then Tommy Poggio and others built what's called the HMAX model, which you guys have probably heard about, that's built off of these same ideas, much more refined and much more matched to the neural data. But I'm just trying to point out that these kinds of physiological observations are what inspired this class of largely feedforward models that you've heard about today. So that's a brief tour of V1. Now, what's going on in V2? For a long time, people thought it was hard to tell the difference between V1 and V2. And I just thought I'd show you guys-- this is a slide I stuck in, this is from Eero Simoncelli and Tony Movshon. And I think you guys have Eero teaching in the course a bit later, so he may say some of this. But V2 cells have some sensitivity to natural image statistics that V1 cells don't. And maybe I'll see if I can take you through this. So the way that they did this-- this is all driven off of work that Eero and Tony have done, especially Eero, on texture synthesis. So you have these original images, and you run them through a bunch of V1-like filter banks, and then you take a new image, a random seed, which is like white noise, and you try to make sure that it would activate populations of V1 cells in a similar way. There's a large set of images that would do that, because you're just matching summary statistics, but these are some examples of them. For this image, this is what one might look like. So you can see, to you, it doesn't look the same as this. But to V1, these are metamers-- they're very similar in the summary statistics in V1. And then you start taking cross products of these V1 summary statistics and then you try to match those. And what's interesting is you start to get something that looks, texture-wise, much more like this original image. And this is a big part of what Eero and others did in that work.
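To make the simple-cell/complex-cell story concrete, here is a minimal numpy sketch in the spirit of those Hubel-Wiesel (and HMAX-style) models: a Gabor filter as the simple cell, and a complex cell built by taking a max over simple cells at nearby positions. Parameter values and function names are illustrative assumptions, not the HMAX implementation.

```python
import numpy as np

def gabor(theta, size=21, sigma=3.0, wavelength=6.0, phase=0.0):
    """Oriented Gabor patch: the standard model of a V1 simple cell."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength + phase)

def simple_cell(patch, theta):
    """Rectified dot product: selective for orientation AND position."""
    return max(0.0, float(np.sum(gabor(theta) * patch)))

def complex_cell(image, theta, size=21, stride=4):
    """Max over simple cells at nearby positions: same orientation
    selectivity, but now tolerant to where the bar falls."""
    h, w = image.shape
    return max(simple_cell(image[i:i + size, j:j + size], theta)
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride))
```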
And the reason I'm showing you this is that Tony's lab has gone and recorded in V1 and V2 with these kinds of stimuli, and the main observation they have is that V1 doesn't care whether you show it this or this. To V1, these are both the same, which says we have the summary statistics for V1 right in terms of the average V1 response. That's all I'm showing you here. The paper, if you want it, is much more detailed. But you go to V2 and there's a big difference between this, which V2 cells respond to more, and this, which they respond to less. And really one inference you can take from this is that V2 neurons apply a repeated-- another AND-like operator on V1. That's a simple inference that these kinds of data seem to support. And they also tell you that these AND-like operators, these conjunctions of V1 statistics, tend to be in the direction of the statistics of the natural world, that is, naturalistic statistics. Now lots of controls haven't been done here to narrow in on exactly what kinds of ANDs, but that's the spirit of where the field is in trying to understand V2. Everybody thinks it has something to do with corners or a more complicated structure. But this is a way that's current in the field to try to move these image-computable models forward in V1 and V2. And Tony likes to point out that this is one of the strongest differences that you see between V1 and V2, other than the receptive field sizes. So I think that's quite exciting work on V2, if you don't know about it. OK, then you get up into V4 and things get much murkier. So what's going on in V4? Well, let me just briefly say that one of my post-docs-- this is more recent work, and I mention it here just because it builds on that earlier work. This is Nicole Rust, who, when she was a post-doc in the lab, compared V4. She actually compared it to IT. I'll skip that. But she was using these Simoncelli scrambled images. These are actually the texture images from-- these are the original images and these are the texture versions. So this should look like a textured version of that. You can see that these algorithms don't actually capture the object content of these images. And what Nicole actually showed is that, similar to what you just saw in the earlier work in V1, V4 doesn't care about the differences between these. It responds similarly, as a population, to this and this, and this and this, and this and this. But IT cares a lot about this versus this. So this is just repeating the same theme, the general idea that you have AND-like operators aligned along the ventral stream that are tuned to the kind of statistics that you tend to encounter in the world. And this is some of the evidence for it in V2, and then later in V4 and IT in Nicole's work, if you piece that all together. When you go to a place like V4, remember V4 is now like three levels up. And what does V4 do? Look, this is Jack's work in 1996. This is from Jack Gallant when he was working with David Van Essen. And people had some ideas that maybe there are these certain functions that V4 neurons like, and they would show these-- the same thing people have done in V2, they would show a bunch of images like this and figure out, well, does it like these Cartesian gratings or these curved ones. And what you get out of this is-- you could tell some story about it, but you get a bunch of responses out of it. The color indicates the response. And you kind of look at it, and people would tell some stories, but it really was just kind of like reading tea leaves.
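One way to make "cross products of V1 summary statistics" concrete-- this is a schematic in the spirit of that texture work, not Eero's actual model-- is to treat the mean responses of an oriented filter bank as the V1-like statistics, and the correlations between filter channels as the V2-like statistics:

```python
import numpy as np

def filter_bank(size=9, n_orient=4):
    """A tiny bank of oriented filters standing in for V1 simple cells."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    bank = []
    for k in range(n_orient):
        th = k * np.pi / n_orient
        xr = xx * np.cos(th) + yy * np.sin(th)
        bank.append(np.exp(-(xx**2 + yy**2) / 8.0) * np.cos(2 * np.pi * xr / 4.0))
    return bank

def channel_responses(image, bank):
    """Rectified responses of every filter at every valid position."""
    s = bank[0].shape[0]
    h, w = image.shape
    resp = np.zeros((len(bank), h - s + 1, w - s + 1))
    for i, f in enumerate(bank):
        for y in range(h - s + 1):
            for x in range(w - s + 1):
                resp[i, y, x] = abs((image[y:y+s, x:x+s] * f).sum())
    return resp

def v1_stats(resp):
    """V1-like summary: just the mean response per channel."""
    return resp.reshape(len(resp), -1).mean(axis=1)

def v2_stats(resp):
    """V2-like summary: correlations BETWEEN channels -- the 'cross
    products' of V1 statistics that the texture model matches."""
    flat = resp.reshape(len(resp), -1)
    return np.corrcoef(flat)[np.triu_indices(len(resp), k=1)]

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
resp = channel_responses(image, filter_bank())
print("V1-like stats:", v1_stats(resp).round(3))
print("V2-like cross-channel stats:", v2_stats(resp).round(3))
```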
Here's a bunch of data, and we don't really know what these V4 neurons were doing. This was a Science paper, so you could go back and read it. And then Ed Connor and Anitha Pasupathy worked together a few years after that to try to figure out more about what V4 neurons do. And they did things like take images like this, which were isolated, and try to cut them into parts, like curved parts, pointy parts, concave, convex. And this was motivated off of some psychology literature. And they would define these based on the center of the object. So this wasn't an image-computable model, it was just a basis set that they built around these silhouette objects. And so they made this basis set for any kind of silhouetted object they liked here. They hypothesized that they could fit the responses of V4 neurons in this basis set. And this was their attempt to do it. They could actually fit quite well. And that's kind of what's being shown here. Here's the response of a V4 neuron. The color indicates the depth of the response. You can see, this is sort of like that previous slide, you're looking at tea leaves. It looks complicated, but under this model they were able to, in the shape space, explain about half of the response variance of V4 neurons. The upshot is that V4 cares about some combination of curves. And then later, Scott Brincat, with Ed, went on into posterior IT and showed that maybe some combinations of these V4 cells could fit posterior IT responses quite well. So if you read the literature in V4 and IT, you'll come across these studies. And they are important ones to look at. Unfortunately, they don't give you an image-computable model of what these neurons are doing. But it's some of the work that you should know about if you want to look in V4 or early IT, so I'm telling it to you. So let me go on to IT, which is what I want to talk about for the rest of today. Again, I'm talking about AIT and CIT. And I'll just quickly say that the anatomy, again, suggests that IT covers the central 10 degrees. And even though V1, V2, and V4 cover the whole visual field, if you make injections in V4, as shown here, in the more peripheral parts of the V4 representation, which is up here, you don't get much projection into IT, which is here. You don't see much green color, whereas if you make injections in the center part of V4, these red sites here, you see much more coverage into IT, which is shown here. So when I say 10 degrees, that's rough. Everything in biology is messy. But this is some of the evidence, beyond recordings-- there's anatomical evidence that as you go down into IT, you are more and more focused on the central 10 degrees. OK, let me talk about a little bit of the history of IT recordings. This is when people got excited about IT, in the '70s. This is work by Charlie Gross, who's one of the first people to record in IT cortex. And I'll show you what they did here. This was in an era where, remember, Hubel and Wiesel had just done their work in the '60s. And they recorded from the cat visual cortex. And they had found these edge cells, and they ended up winning the Nobel Prize for that. So it was the heyday of, let's record and figure out what makes cells go. So they were brave enough to put an electrode down in IT cortex in 1970 and ask, what makes this neuron go. Remember, that's an encoding question: what's the image content that will drive this neuron. And it's fun to just look back on this and what they were doing.
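The fitting step described here is, at bottom, regression of measured firing rates onto a hand-built shape basis. The sketch below uses entirely made-up data, with a generic random design matrix standing in for the curvature-by-angular-position features, just to show the mechanics of explaining a fraction of the response variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 100 silhouette stimuli, each described by 12 hand-built
# shape-basis features. These numbers are made up for illustration; this is
# not the actual Pasupathy-Connor dataset or basis.
n_stimuli, n_features = 100, 12
X = rng.standard_normal((n_stimuli, n_features))

# Fake a neuron whose rate depends on a couple of basis dimensions plus noise,
# so the fit can only explain part of the variance (roughly half, as reported).
true_w = np.zeros(n_features); true_w[[2, 7]] = [1.0, -0.8]
rates = X @ true_w + rng.standard_normal(n_stimuli)

# Least-squares fit of the neuron's responses in the shape basis.
w, *_ = np.linalg.lstsq(X, rates, rcond=None)
pred = X @ w
r2 = 1 - np.sum((rates - pred) ** 2) / np.sum((rates - rates.mean()) ** 2)
print(f"fraction of response variance explained: {r2:.2f}")
```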
So they didn't have computer monitors. They were actually waving around stimuli in front of the animals. This is an anesthetized animal on a table. This is a monkey. Actually, they started with a cat and then they later went to monkey. The use of these stimuli was begun one day when, having failed to drive a unit with any light stimulus-- that probably means spots of light, edges, things that Hubel and Wiesel had been using-- we waved a hand at the stimulus screen-- they waved in front of the monkey-- and elicited a very vigorous response from the previously unresponsive neuron. And then we spent the next 12 hours-- so the animal's anesthetized on the table, and they're recording from this neuron. It's 12 hours because nothing's moving, so you can record for a long period of time. So it's a single neuron they're recording, listening to the spikes. We spent the next 12 hours testing various paper cutouts in an attempt to find the trigger feature. You can see, that's a Hubel and Wiesel idea, what makes this neuron go. What's the best thing-- that's become a lot of what the field spent time doing. To find the trigger feature for this unit, the entire stimulus set was used, and the stimuli were ranked according to the strength of the response that they produced. We could not find a simple physical dimension that correlated with this rank order. However, the rank order of adequate stimuli did correlate with similarity for us-- that means psychophysically judged-- to the shadow of a monkey hand. So these are their rank-ordered stimuli. And they say, look, it looks like it's some sort of hand neuron. That's all I know how to describe it. I can't find some simple thing here. So this kind of study then launched a whole domain where people started to go in to record these neurons and they found interesting different types. Bob Desimone, who worked with Charlie Gross, later showed much more nicely, under more controlled conditions, that yes, there are indeed neurons that respond more to these hands-- this is the post stimulus time histogram, lots of spikes, lots of spikes, lots of spikes-- respond more to these hands than to these other kinds of stimuli here. So you could say these neurons are tuned to specific combinations-- high selectivity. You'll hear from Winrich that others had shown you could find some neurons that really like faces, and not so much hands. So you could find neurons that seem to have some interesting selectivity in IT cortex. And then others later went on to show in a number of studies-- this is from Nico Logothetis' work from a number of years later, it's just one example-- that this selectivity had some tolerance to, say, the position of the stimulus, that's what's shown here. The fact that these bars are high just means that it tolerates movement in where the-- sorry, this is size, degrees of visual angle. This is position, moving the stimulus around. So this was known for a number of years: there's some tolerance to position and size changes at least. OK, so I'm putting these up and you say, there's some selectivity and there's some tolerance. And that should remind you of what we already said in V1: there's some selectivity, simple cells. There's some tolerance, complex cells. So you have the same themes here, just different types of stimuli being used. Then people really went on, in the '80s especially, and said, let's go after this trigger feature. And Tanaka's group went after this really hard.
Tanaka's group would dangle a bunch of objects in front of a recorded neuron, find the best stimulus out of a whole set of objects, and then they would try to do a reduction. They'd try to figure out, how can I reduce this. This is their attempt to reduce the stimulus to its features without lowering the neural response. So high response, high response, high response, high response, high response, suddenly I do this, the response drops. I do this, the response drops. And they have lots of examples of this. And they wanted to try to get to the simplest thing that could capture the response. And when they did this, they would take stimuli like this, and end up with stimuli that looked like that. Now, many of you should probably start to wonder here: there's lots of paths through stimulus space. It's not clear that these are elemental in any way. There's lots of ways that you can show with modeling that you can easily get lost in this space, navigating around here. This is just, again, the history of the work. These are the kinds of things that people were doing. And then from that, they presented what we think of as the ice cube model of IT, which I think is actually still a very reasonable approximation. They not only showed that neurons tended to like certain relatively reduced stimulus features, not full objects, but that they are gathered together. So these are millimeter-scale regions of IT where nearby neurons, within a millimeter or so, have similar preferences. They're not just scattered willy-nilly throughout the tissue. When you go record nearby neurons, they're similar. So there's some mapping within IT cortex. This is schematic here. This is optical imaging data of IT cortex, also from Tanaka's group, that shows you that these different blobs of tissue get activated by different images shown here. And I'm just showing you the scale of this, it's a little less than a millimeter. And our lab has evidence of this too. So there's some sort of spatial organization in IT, but we don't really yet understand these elemental features, or at least, not at this time. Then later, there's lots of beautiful work in IT. Again, I'm probably not telling you all of it. Some of the most exciting work recently-- and you'll hear about this from Winrich-- is that people started to use fMRI. So Doris Tsao and Winrich Freiwald and Marge Livingstone all together started to use fMRI data to compare faces versus objects. This was motivated by human work, like that of Nancy Kanwisher's lab and others. What they found was that in monkeys, you could find different parts that would show up, what are called face patches, where you have a relative preference for faces over objects. Again, I don't want to take all of Winrich's talk here, but you have these different patches here. And then what's really cool is, you go in and record from these patches and you find very enriched locations for face neurons. And these enriched locations were known from a number of other studies. But this is a nice correlation between functional imaging and this enrichment of these face cells. And that's what's shown here, that these neurons respond mostly to faces and not so much to other objects. Although, you see they still sort of respond to these. So this kind of says fMRI and physiology are telling you similar things. It also tells you there's some spatial clumping, at least for face-like objects, at a scale of a few millimeters or so, the size of these patches.
OK, so that's larger scale organization. This is data from our own lab that shows the same thing. Maybe I'll just skip through this in the interest of time-- that we can map and record the neurons very precisely, map them spatially and compare that with fMRI. So this is just a larger field of view, maps of the same idea. So what we have then, just to wrap up this whirlwind tour of the ventral stream, is that we had some untangled, explicit information. And what I want to try to convince you of now is that-- I've told you about the ventral stream, but I'm going to try to tell you that, in IT cortex, this is a powerful representation for encoding object information. And then we'll take a break because we've already probably been going a while. Yeah, about 10 more minutes and then we'll take a break. So what I've told you is, I've led you up the ventral stream, I've given you a bit of the history, so now let's talk about IT more precisely. So now this is work from my own lab. You go in and record IT. You record extracellularly. You travel down into IT cortex, which is down here. And you record from this. And similar to what you saw-- another version of what you saw from Charlie Gross or Bob Desimone-- you show a bunch of images. And they could be arbitrary images. You take an IT recording site, and see these little dots, those are action potential spikes out of a particular IT site. And these are repeatable. You have some Poisson variability here. But you see that there's more spikes here, there's a little more here, less here, less there. These images are all randomly interleaved when you collect the data, as I'll show you in a minute. And you go to different sites and they like different images. So there is certainly some image selectivity. This should not be surprising because I already showed you this from previous work. This is just data from our own lab. You can also see, now that you are looking closely at the time lag-- remember, I said around 100 milliseconds-- stimulus on, stimulus off: the stimulus is actually off before the spikes actually start to occur out here in IT because, again, there's a long time lag, 100 milliseconds. OK, so that's what the neural responses look like. I don't know if you guys can hear this, maybe I should have hooked up audio. Maybe you might be able to hear-- this is actually a recording that Chou Hung did when he collected his data in my lab for the early studies we did in the lab. I don't know if you guys can hear. [STATIC] [BEEP] [BEEP] [BEEP] Those high beeps are the animal getting reward for fixating on that dot. You're not even going to be able to parse that. I mean, you hear the spikes clicking by, those-- [STATIC] Those are action potentials. And I don't expect you to look at anything like, oh, it's a face neuron, or whatever. I just want you to get a feel for how those data were originally collected. This is a pretty grainy video. But you get the idea. You collect data like that. And again, you can find selectivity in those population patterns, as I just showed you. But then, Gabriel and Tommy and I-- so the three of us, I think all in this room-- way back in 2005 said, well look, the population of IT might have good, useful information for solving this difficult object manifold tangling problem. It might be a good explicit representation. So we did what I call an early test of this idea. We took this simple image set from eight different categories that we had chosen.
And there's good stories of why we chose those objects, if you'd like to hear them. But let me just say, simple objects, we moved them across position and scale, and we collected the responses of a bunch of IT sites to all these different visual images. And we showed them as I just showed you. We just showed them for 100 milliseconds. This is this core recognition regime, we're just showing them for 100 milliseconds. And then we show another one, and they're just randomly interleaved. And from this, what you do is you get a population set of data where we recorded 350 IT sites. Here's a sample of 63 sites. These are 78 images; the mean neural response here is the mean response to an image. These are 78 of the images we showed. There's nothing for you to read into here, other than that you have this rich population data. And now our question is, well, what lives in this population data that we've collected. Is it explicit with regard to categories? So we come back to what I showed you earlier about those tangled manifolds and said, we need simple decoding tools. Can a simple decoding tool look at that population and tell me what's out there? And again, we were using linear classifiers at the time, because we took that, as you heard from Haim, as our operational definition of what a simple tool is. And if it could decode information about the object identity, then we'd say, well, that means, by that operational definition, this is explicit, available, accessible information, or just generally good. So if you imagine that the activity-- this is schematic. Each dot, this is neuron one, neuron two, and you could have a bunch of IT neurons. But if you can separate any object from all the other objects-- these points represent the population response to each image of an object. Remember, there's many images of each object. But if you could linearly separate that, that would mean it was explicit. And if you had a hard time separating it, this would be implicit. These are like tangled object manifolds. This is inaccessible, or bad, information. So we just-- and when I say we, I mean Chou Hung, who led the study; Gabriel, Tommy, and I did this. We took the response to an image, like this one. It produced a population vector. Again, we recorded a bunch of neurons. We recorded them sequentially and then pieced together this population vector. So these are the spikes simulated off a population of IT. We could do various things. In fact, I think Gabriel did everything possible, as I remember at the time. And one of the things we did was just count spikes. One of the simple things, that turns out to work quite well, is count the spikes over 100 milliseconds. So this neuron counts spikes. That gives you a number, one number here; count spikes, get one number. So if you have n neurons, you get n numbers. So it's a point in an n-dimensional state space where n is the number of neurons. And then we had already pre-divided the images into different categories, as shown here. These are the categories. And again, we just asked how well you could do faces versus non-faces, toys versus non-toys, so on and so forth. These are old slides. But you get the idea, which is that basically, you don't need that many sites to already get to very high levels of performance on both categorization and identification. The interesting thing about this was that you could solve simple forms of this invariance problem in this representation quite easily.
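A minimal sketch of this decoding analysis, with synthetic Poisson spike counts standing in for the recorded sites (all rates and sizes below are made up): each image gives an n-dimensional vector of spike counts in a 100 millisecond window, and a linear classifier is trained on half the data and tested on held-out data.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the recorded data: n_sites IT sites, each image
# evokes Poisson spike counts in a ~100 ms window. The rates are invented;
# "face" images get a slightly different mean rate pattern than "non-face".
n_sites, n_per_class = 64, 200
face_rates = rng.uniform(2, 12, n_sites)
nonface_rates = np.clip(face_rates + rng.normal(0, 2.0, n_sites), 0.1, None)

X_face = rng.poisson(face_rates, (n_per_class, n_sites))
X_other = rng.poisson(nonface_rates, (n_per_class, n_sites))
X = np.vstack([X_face, X_other]).astype(float)
y = np.array([1] * n_per_class + [0] * n_per_class)

# Shuffle, then train the linear classifier on half and test on the held-out
# half, mirroring the "simple decoding tool" logic in the study described.
idx = rng.permutation(len(y))
train, test = idx[:len(y) // 2], idx[len(y) // 2:]
clf = LinearSVC(C=0.01, dual=False).fit(X[train], y[train])
print("held-out accuracy:", clf.score(X[test], y[test]))
```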
If you just trained on the central objects-- the simple three-degree size, center position-- and tested on the same thing, just held-out repeats of this data, you did quite well. That's a baseline. But what's interesting is you test at a different position and scale. And then you also do nearly as well. So you naturally generalize to these other conditions by training on these simple conditions. So this is evidence that the population is a good basis set for solving these kinds of problems. A small number of training examples on this population then generalizes well across the conditions that make the problem hard. So again, we published that a long time ago. This was an early step to say, look, the phenomenology looks right for the story that I've been telling you so far. You can't do this easily in earlier visual areas like V1, or simulated V1 or V4. And we later showed that a number of ways. This is consistent with the work I was showing you from Logothetis: position tolerance, size tolerance, the selectivity. It's really just an explicit test of the idea of population encoding. So the take home here is that there's this explicit object representation in IT. I didn't prove to you yet that this is the link, this predictive model to decoding. We're going to talk about that next. But this was some of the important population phenomenology that we did. What I've tried to tell you today-- hopefully I've introduced you to the problem of visual object recognition and the way we restricted it to core object recognition. We talked a lot about predictive models as being the goal, although I haven't presented much to you yet. Hopefully, that's the second part of the talk. I've given you a tour of the ventral stream. But it was a poor tour. I'm sure everybody I work with would say that I've neglected all this work, because there's no way I can do it all in even a whole week. I just tried to hit some of the highlights for you. And I told you that the IT population seems to have solved a key problem, this sort of invariance problem that I set up. One way to step back and say it: over the last 40 years or so, from those early studies of Charlie Gross or even Hubel and Wiesel, we, the field of ventral stream physiology, have largely described important phenomenology. Even that last study is population phenomenology. And so now we need these more advanced models. So the next phase of the field is developing and testing these predictive models that I motivated at the beginning, but haven't given you much of yet. So this was hopefully a bit of history and set the context for where we are. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_52_Andrei_Barbu_From_Language_to_Vision_and_Back_Again.txt | ANDREI BARBU: All right. To start off with, perception is a very difficult problem. And there's a good reason why perception should be difficult, right? We get a very impoverished stimulus. We get, like, a 2D array of values from a 3D scene. Given this impoverished stimulus, we have to understand a huge amount of stuff about the world. We have to understand the 3D structure of the world. If you look at any one pixel, you have to understand the properties of the surface that produced the illumination that gave you that pixel. You have to understand the color or the texture. You have to see the color of the light that hit that surface, the roughness of the surface, et cetera. So you have a very small channel, a very small window onto the world, and you have to extract a tremendous amount of information so that you can survive and not get killed by cars regularly. All right. That's exactly the problem we're going to talk about here, which is, how do we use our knowledge of the world to structure our perception? To actually modify what we see in order to be able to solve this problem? How do we take a small impoverished stimulus and extract a huge amount of information about the world around us? So let's start with a few examples where knowledge about the world really, practically changes what we see. So I'm Canadian. So I'm required to show you a Canadian flag in every talk. Here we are. You can take this flag and you can give it to any of you. You can give it to any kid and you can ask them, here's a marker. Put big red marks around the regions on this flag, or the regions in this image, that are red. And it's pretty clear to all of us that there is a distinction between the red that's in the flag, in the bars and in the Maple Leaf, versus the red that's actually in the background. And we can all tell those two apart. Except if you actually look at the pixel values-- you open them in Photoshop, or GIMP, or whatever other program you want-- you're going to notice that those pixel values are actually not particularly different. There's no threshold that you can choose that will separate the red on the flag from the red in the background. So you're doing a huge amount of inference just to solve this trivial, little problem of, what color is where? You're using knowledge about regions, knowledge about flags, knowledge about transparency, in order to figure out that the red in the flag is different from the red in the background. So this is a really practical way that your knowledge about the world really changes your perception. You're not seeing the colors that are really there. Another nice example comes from a paper by Antonio Torralba. So if you look at this scene, it's pretty blurry. And it has to be blurry because your visual system is so incredibly good, we have to degrade the input for you to see how poor it actually is if we take away some information. So this looks like a scene. And the background looks like a building. In the foreground, you can see there's maybe a street. And the thing on the street kind of looks like a car. Does it look like a car to everybody? Awesome.
We can look at a slightly different image. We can look at a very similar scene. Again, same building in the background, same street in the foreground. Now, there's kind of a blob in the foreground. And it looks as if it's a person. It looks like a person to everyone, right? Awesome. Well, the only problem is the blob on the left is exactly the same as the blob on the right. It's difficult to believe me, but you can find these two images online and in his paper. You can open them up in your favorite image viewer. You can zoom in and you'll see they are pixel-wise completely identical. So you're using a tremendous amount of information to put together the fact that, with buildings and streets, when you see these horizontal streaks, it means a car. And when you see these vertical streaks, it means people. And this really changes how you see the world. And it changes it to the point where you actually probably don't believe me that these two blobs are the same. And I couldn't believe it either until I really zoomed in and checked. So you can see lots of interesting effects where your high-level knowledge of the world is structuring your low-level perception, and it is actually overriding it. You've seen this example with the hammer, where you were unable to recognize what's going on in a small region. But when I give you the rest of the context, you can tell that it's a hammer. And when you see the whole video, you actually don't see the hammer disappear. You're filling in information from context in single images and you're filling in information from context in whole videos. But if we dig into what's going on here just a little bit more, what's going on is somewhere inside your head, there's something resembling a hammer detector, right? So you ran a hammer detector. And you ran that hammer detector over that little region. And it said, I'm not so sure. I'm not very confident about what I see in this little region. And somewhere inside your head, there's some detector or something that can recognize someone hammering something. So if we look at sort of a more traditional computer vision pipeline, what you would do is you would run your hammer detector. You would take your hammer detector. You would use that knowledge in order to recognize hammering in the scene. And at the end, you would say, I'm really confused because my hammer detector didn't work very well. The reason why you can actually do this is because you have a feedback. You were able to recognize the hammering event as a whole. And that lets you upgrade the scores of your hammer detector, which is very unreliable in this case. So feedback was really critical in being able to understand this scene. Unfortunately, pretty much all of computer vision is feed forward, even though most of your visual system has, for the most part, feedback connections. More feedbacks than feed forwards. So in this talk, we're going to talk about that feedback. And we're going to see a way that we're going to build in this feedback in a principled way. That if we choose the right algorithms and the right representations for our low-level perception, we're going to be able to combine it with our high-level perception of the world. So we've seen that perception is very unreliable. Top-down knowledge really affects your perception. And, as you're going to see in a moment, one integrated representation can be used for many tasks. The advantage of these feedbacks goes beyond just better vision.
It lets you solve a lot of different problems that look very, very distinct. But actually, they turn out to be very, very similar. So one problem is recognition. I can give you a picture of a chair, and I can ask you, what is this? And you can tell me it's a chair. Or I can give you a picture and I can give you a sentence, this is a chair. And you can tell me, I believe you. This is true. There's also a completely different problem, which is retrieval. Related to recognition, right? How about I give you a library of videos and ask you to find me the video where the person was sitting on the chair. And you can solve that problem. You can also solve a problem like generation. I can give you a video and I can tell you, I don't know what's here. Please describe it to me. So if you see this scene, you can say, what's on the screen? Well, there's a whole bunch of text on the screen. You can also do question answering. You can take an image like this. I can ask you a question. What's the color of the font? And you can say, the font is white. So you were able to take some very high-level idea that's in my head that got transmitted to your head. You were able to understand the purpose of this transmission, connect it to your perception, figure out the knowledge that I wanted extracted from your perception, and give it back to me in a way that's meaningful to me. Even more than this, you can disambiguate. You can take a sentence that's extremely ambiguous about the world and figure out what I'm referring to. And we do this all the time. That's really what makes human communication possible, right? The fact that most of what I say is extremely ambiguous. That's why programming computers is a real pain, but talking to people is generally easier, depending on the person. You can also acquire knowledge, right? You can look at a whole bunch of videos. If you're a child, you sort of perceive the world around you. Occasionally, an adult comes, drops a sentence here or there for you. But what's important is that no adult ever really points out what the sentence is referring to. You don't know that approach refers to this particular vector when someone was doing some action. You don't know that apple refers to this particular object class. Who knows what it could mean? But you get enough data, and you're able to disentangle this problem of seeing weakly-supervised videos paired with sentences. And we'll see how you can do that. Pretty much everything I'll talk about is going to be about videos. And I'll tell you a story about how I think we can do images as well. There are a bunch of other problems that you can solve with this approach. I'm sorry? AUDIENCE: So those images go with the video? ANDREI BARBU: Yes. So rather than doing videos, we're going to do images. So one thing that you can do is you can try to do translation. We haven't done this. We're going to be doing this in the fall. We have two students. And I'll tell you at the end what the story is for how you're going to do a task that sounds as if it's from language to language, but you're going to do it in a grounded way that involves vision. Even more than that, you can do planning. And I'll tell you about that at the end a little bit. And finally, you can also incorporate some theory of mind. And that's actually the project that the students are doing as part of summer school. And I'll say a few words about that. What's important about this is the parts at the top, we understand better. We've published papers about them.
The parts at the bottom are sort of more future work, and I'll say less about them. Well, one important part about this is I've shown you all these tasks. But you really have to believe me that humans perform these tasks all the time. Every time you're sitting at a table and you ask someone, give me a cup, that's a really hard vision-language task. There may be 10 different cups in front of you on the table if you're sitting at one of the big, round tables. And you have to figure out, what object am I talking about? What kind of cup am I talking about? Which cup would I be interested in? If I drank out of a cup, I would expect that you give me my cup, not your cup. Otherwise, let me know. I will sit at different tables from now on. If I ask you, which chair should I sit in? Again, you have to solve a pretty difficult problem where you look at chairs. You figure out what I mean by which chair should I sit in. Is it that there's a chair that's reserved for someone? Is it that a chair is for a child and I'm an adult? That kind of thing. You can say something like, this is an apple. And when you say that to a child, you're saying it for a particular reason. To convey some idea. You have to coordinate your gaze with the other person's gaze to make sure you're drawing their attention to the real object. Even more than that, you can say very abstract things, like, to win this game, you have to make a straight line out of these pieces. That means that we both agree on what a piece is. That I've drawn your attention to the right idea of a piece. That we agree on what a straight line means on this particular board. There's a lot of knowledge that goes into each of these. But the important part is that they're each grounded in perception. We have to agree on what we're seeing in front of each other in order to be able to exchange this information. And pretty much everything that we do in daily communication is a language-vision problem on some level. All right. So if we believe these problems are important, we can make one other observation, which is none of you got training in most of these problems. No adult ever sat you down and said, OK. Now, you're four. Now I'm going to teach you how to ask questions about the real world. Or no one sat you down and said, OK. Now, let's talk about language acquisition. You're supposed to do gradient descent. So what's important is you have some core ability that's shared across all of these tasks. And you're able to acquire knowledge, maybe in one of these tasks or across all of these tasks. You're able to put it together. And as soon as you have this knowledge, you can use it for all these other tasks without having to learn anything else. And that's what we're going to see. And the core of this that we're going to focus on is recognition. So we're going to build one component. This is the engineering portion of the talk. We're going to build one scoring function that takes a sentence and a video and gives you a score. How well does this sentence match this video? If the score is 1, it means the system really believes the sentence is depicted by the video. If the score is 0, it means the system really believes the sentence does not occur anywhere in this video. And this is the basic thing that's going to allow us to connect our top-down knowledge about what's going on in the world with our low-level perception.
And after we have this, we're going to see how we reformulate everything in terms of this one function, so we don't have to learn anything else about the world. All right. So we said we need this one function, a scoring function between sentences and videos. So let's look at what we would need to have inside this function in the first place. If I give you a video like this, it's just a person riding a skateboard. And I give you a sentence. The person rode the skateboard leftward. Well, I can ask you, is the sentence true of this video? Indeed, it is true. But let's think about what you had to do in order to be able to answer this question. Well, you had to, at some level, decide there's a person there. I'm not saying that you're doing this in this order in your brain. I'm not saying that they have to be individual stages. I'm not saying you have to have object detectors. But at some point, you had to decide there really is a person there somehow. You also had to decide there is a skateboard there. You had to look at these objects over time, or at least in one or two frames, and decide that they have a particular relationship, so that the person isn't flying in the air while the skateboard continues onwards. And you had to look at this relationship and decide, yeah. OK, this is riding. And it's happening leftward. So you have to have these components on some level. You've got to see the objects. You've got to see the relationships, the static and the changing relationships between the objects. And you have to have some way of combining those together to form some kind of sentence, so you can represent that knowledge. And that's what we're going to do. Everything I described to you is this feed-forward system, right? We have objects. We have tracks. We take tracks and we build events. Events like ride. And we take those events together and we form sentences out of them. And there's this hard separation, right? It's easy to understand a system where what you do is you have objects, tracks, events, and sentences. And you use tracks in order to see if your events happened and your events in order to see if a particular sentence occurred. So that's what we're going to describe first, and then we're going to see how, because we're going to choose the right representations for each of these, these feedbacks become completely trivial and very natural to implement. All right. We need to start with some object detections. Otherwise, we're just going to hallucinate objects all the time. Any off-the-shelf object detector that you choose will sometimes work. Here, we ran a person detector in red and a bag detector in blue. It will sometimes give you false positives. Trees are often confused for people. I guess we're both two vertical-- two long, vertical lines. And sometimes, you get false negatives. Sometimes, a bag is so deformable that you think the person's knee is the bag. Lest you think that object detection is solved, it actually isn't. So if you look at something like the ImageNet challenge, mostly people talk about the image classification, the stuff in light blue. And they're saying that there's 10% error. These days, there's 5% error on this. But that's really not what you're doing in the real world. You're not classifying whole images. When you see an image in the real world, what you're doing is you're trying to figure out what objects are where. And that's the red part. That's the part where you have an average precision of 50%.
In other words, the object detector really, really, really sucks. Most of the time, it's going to be pretty wrong. It's very, very, very far away from how accurate you are. If your object detector was that bad, you would die every time you crossed the street. All right. So we believe that object detection doesn't work well. In order to fix this, because somehow we have to be able to extract some knowledge about the video that's pretty robust for us to be able to track these objects and recognize these sentences, we need to modify object detectors a little bit. We're going to go into our object detector. And normally, they have a threshold. At some point, they learn that if the score of this detection is above this level, I should have confidence in it. And if the score of this detection is below this level, I shouldn't have confidence in it. And what we're going to do is we're going to remove that threshold. We're going to tell the object detector, give me thousands or millions of detections in every frame. We're going to take those detections and we're going to figure out how to filter them later on. All right. The way we're going to do this-- and this is the only slide that's going to have any equations, and it's just going to be a linear combination-- is we're going to take every detection in every frame of this video. We're going to arrange them in a lattice. In every column of this lattice, we're going to have the detections for one particular frame. And in essence, what we want is one detection for every object for every frame. In other words, we want a path through this lattice. We want to select one detection in every column. But we want tracks that have a particular property, right? If I'm approaching this microphone, you know that you expect to see me kind of far away. Then, getting closer. Then, eventually I'm close to the microphone. You don't expect me to be over there, and then to appear over here as if I've teleported. So we want to build in this intuition that objects move smoothly. And that objects move according to how they previously moved, right? It's not like someone moves 10 pixels over in one frame. Then, the next frame, they move 10 pixels to the left. And they keep oscillating between the two. And that's what we're going to do. I'm not going to talk about how we compute this. It's really trivial. If you know about optical flow, you can do it. But basically, what we want is a track where we don't hallucinate the objects. So every node in our resulting detections should be strong. If we ignore the strength of the object detector, we're just going to pretend that there are a whole bunch of people in front of us. And every edge should also be strong. In other words, when we look at two detections from adjacent frames, if I have a person over here in one frame and a person over here in another frame, I shouldn't really think that's a very good person track. But if I have a person over here that kind of moved to the right the previous frame and I have a new detection that's just slightly to the right of that one, I should expect that it's a much better track. So that's all we do. And encoding this intuition is very, very straightforward. It's just a linear combination. So the score of one path, the score of the track of an object, is just the sum of your confidence in the detections-- in every detection in every frame-- along with the confidence that the object track was actually coherent. All right.
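A minimal sketch of that linear combination and the dynamic program that maximizes it, i.e. score(path) = sum over frames of detection confidence, plus sum over adjacent frames of a coherence term. The motion term below is just negative squared distance between the chosen detections-- a stand-in for the optical-flow-based term mentioned-- and all the numbers are illustrative.

```python
import numpy as np

def track(det_scores, det_boxes, motion_weight=1.0):
    """Viterbi over the detection lattice.

    det_scores[t][i]: detector confidence of detection i in frame t.
    det_boxes[t][i]:  (x, y) center of that detection (toy stand-in for a box).
    Maximizes: sum of detection scores + sum of pairwise coherence terms.
    """
    T = len(det_scores)
    best = [np.asarray(det_scores[0], dtype=float)]
    back = []
    for t in range(1, T):
        prev_b = np.asarray(det_boxes[t - 1], dtype=float)
        cur_b = np.asarray(det_boxes[t], dtype=float)
        # coherence between every previous detection and every current one
        d2 = ((prev_b[:, None, :] - cur_b[None, :, :]) ** 2).sum(-1)
        trans = best[-1][:, None] - motion_weight * d2
        back.append(trans.argmax(axis=0))
        best.append(trans.max(axis=0) + np.asarray(det_scores[t], dtype=float))
    # backtrace the best path: one detection chosen per frame
    path = [int(best[-1].argmax())]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Three frames, two detections each; detection 0 drifts smoothly rightward,
# detection 1 is a stronger false positive that never moves coherently.
scores = [[1.0, 1.5], [1.0, 1.5], [1.0, 1.5]]
boxes = [[(0, 0), (50, 0)], [(2, 0), (10, 40)], [(4, 0), (80, 80)]]
print(track(scores, boxes))  # -> [0, 0, 0]: coherence beats raw detector scores
```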
So this is the only equation we're going to see in this talk. And it's just a linear combination. But it will come back to haunt us several times before the end. All right. So we use dynamic programming. We find the path through this lattice. And this is a tracker. And actually, Viterbi did this in 1967 for radar. This is not a new idea. Here, we ran it for just a computer vision task where we just wanted to track objects. We ran a person detector and a motorcycle detector, but we don't have a person-standing-up detector and a person-sitting-down detector. So the tracker is good enough that it can keep the two people separate from each other, despite the fact that they're actually pretty close in the video. So you see we do a pretty decent job of tracking all the objects until they get pretty small in the field of view and the object detector doesn't work well anymore. All right. So now what we have are the tracks of objects. We can see object motion over time in a video. And somehow, we have to look at these tracks and determine what happened to them. Was someone riding? Was someone running? Was someone bouncing up and down? In order to do this, we're going to get some features from our tracks. You can look at the track in every frame and you can extract out a lot of information. You can extract out the average color. You can extract out the position, the velocity, acceleration, aspect ratio. Anything that you want to get out of this frame knowing that this bounding box is there, you can compute, and the algorithm doesn't care. All right. There's one small problem, though. Most of the time, we need more complicated feature vectors. So for example, for ride, it's not enough to have a feature vector that only includes the person. You need to look at the relative position between the person and the skateboard to determine that they're actually going together, and that one isn't going right while the other one is going left. So for that, what we're going to do is we're going to build a feature vector for the agent of the action-- in the case of ride-- and a feature vector for the instrument-- the skateboard. We're going to concatenate the two together, so we get a bigger feature vector. And then we're going to have some extra features that tell us about the relationships between these two. So we can include things like the distance, the relative velocity, the angle, overlap. Anything that you want to compute between these two bounding boxes in this frame, you're welcome to compute. All right. And if you build this feature vector between this person and the skateboard, you could recognize the person rode the skateboard in this video. If you build a different feature vector, for example between these two people, you could recognize the person was approaching the other person, or the person was leaving the other person. If you build a feature vector between the skateboard and the other person, you could recognize the skateboard is approaching the person, et cetera. So depending on which feature vector you build, you can recognize different kinds of actions. So when we have our tracks, we know how the objects moved in these videos. We get out some feature vectors from our tracks. And what we need to do is decide what these feature vectors are actually doing. Is the person riding that skateboard? The way we're going to do this is using hidden Markov models. Hidden Markov models are really simple. All they assume is that there is a model of the world that follows a particular kind of dynamics.
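Before going on with the hidden Markov models, here is a sketch of the per-frame pair feature vector just described. The particular features are illustrative picks from the list above (position, size, velocity, distance, angle, overlap), not the system's exact feature set.

```python
import numpy as np

def box_features(box, prev_box):
    """Per-object features for one frame: position, size, aspect, velocity.
    box = (x, y, w, h); all feature choices here are illustrative."""
    x, y, w, h = box
    px, py, _, _ = prev_box
    return [x, y, w, h, w / h, x - px, y - py]

def pair_features(agent, instrument, prev_agent, prev_instrument):
    """Concatenate agent and instrument features, then append relational
    features between the two boxes: distance, angle, and overlap.  Order
    matters: swapping agent and instrument encodes the opposite sentence."""
    ax, ay, aw, ah = agent
    ix, iy, iw, ih = instrument
    dx, dy = ix - ax, iy - ay
    dist = float(np.hypot(dx, dy))
    angle = float(np.arctan2(dy, dx))
    # crude overlap: intersection area of the two boxes (centers + sizes)
    ox = max(0.0, min(ax + aw/2, ix + iw/2) - max(ax - aw/2, ix - iw/2))
    oy = max(0.0, min(ay + ah/2, iy + ih/2) - max(ay - ah/2, iy - ih/2))
    return (box_features(agent, prev_agent)
            + box_features(instrument, prev_instrument)
            + [dist, angle, ox * oy])

# Person riding a skateboard: person box above the board, both moving left.
person_t0, person_t1 = (100, 50, 30, 80), (90, 50, 30, 80)
board_t0, board_t1 = (100, 95, 40, 10), (90, 95, 40, 10)
print(pair_features(person_t1, board_t1, person_t0, board_t0))
```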
In this case, imagine that we have an action like approach. I'm far away from the object. I get closer to the object. Eventually, I'm next to the object. So this action, for example, has three states. One where I was far, one as I was getting nearer, one when I was very close. And we have particular transitions between these states, right? We already said that I don't teleport. So I shouldn't be able to jump from being far away straight to being next to the object. You should expect me to go from the first state to the second state and to the third state, without going from the first to the last. In each state, you have something that you want to observe about me, right? You want to really see that I'm far away in the first state, that I'm getting closer in the second, and that I'm actually there in the third. So we have some model for what we expect to see in every state and we can connect this with our feature vectors. So the idea is there's some hidden information behind the motion of these objects. And we're going to assume that hidden information is represented within an HMM. And what we need to recover is the real state of these objects. So if you see a video of me moving towards this microphone, you have to recover some hidden information: which frames was I far away in? Which frames was I getting nearer in? And which frames was I actually next to the object in? For now, what we're going to do is we're going to assume that we have one of these hidden Markov models for every different word. So for every verb, we have a different hidden Markov model. There's one for approach. There's one for pickup. There's one for ride, et cetera. And if you want to tell me what's going on in this video, you just have a big library of hidden Markov models. You apply every one to every video. You have some threshold. And anything above that threshold, you say happened. And you produce a sentence for it. OK. If we look at how you actually figure out what this hidden information is-- what state am I in when I'm approaching this object-- it looks a lot like the tracker. What you have is you have to make a choice in every frame. Your choice is, which state is my action in? Is it state 1 through 3, or some other state? In the same way that, in the tracker, you have to make a choice. You have to choose which detection the track is in for each frame. And here, you also have edges. Edges tell you, how likely am I to transition between different states in my action? And every node also has a score. It's the score of, did you actually observe me doing what you're supposed to observe me doing in every state? So if you're saying I'm in the first state, did you actually see me stationary and far away from that object? And what you want is a path through this lattice in the same way that we had a path before. And a path just means you made a decision that I'm in state 1 in the first frame or in state 1 in the third frame, et cetera. And that's just the linear combination of the scores. So it's the same equation we saw before. So here's an example of this sort of feed-forward pipeline in action. We ran it over a few thousand videos. It produces output like the person carried something, the person went away, the person walked, the person had the bag. It's pretty limited in its vocabulary. It has 48 verbs, about 30 different objects, a few different prepositions. And it even works when the camera moves. So the person chased the car rightward, the person slowly ran rightward to the car.
And it should also probably say the person had a really bad day, but that's for the future. So we've seen this feed-forward pipeline. We've seen that we can get objects. We can get tracks. We can look at our tracks, get some features, run event detectors, take those event detectors, and produce some sentences. And now, all we're going to do is we're going to break down the barriers between these and show you how you can have feedback in a really, really simple way. All right. So first, let's combine our event detector and our tracker. Because what that's going to say is, if you're looking for someone riding something, well, you should be biased towards seeing people that are riding something. So in the occlusion example, if you see someone go behind some large pillar, well, you might lose them. But you have a bias that you should reacquire someone riding a skateboard after they leave the pillar, which you don't have if you just run the tracker independently from the event detector. So the way we're going to put them together is very, very easy. There's a reason why these two look completely identical and why the inference algorithm between them is identical. Right now, what we're doing is we have a tracker on the left, or on your left. And we have an event recognizer on the right. Right now, we're running one, and then we're feeding the output of one into the other. Basically, we run one maximization, and then we run another maximization. And all we're going to do is move the max on the right to the left. And you get the exact same inference algorithm. The intuition behind this is you have two lattices. And you can take the cross-product of the lattices. Basically, for every tracker node, you just look at all the event recognizer's nodes and you make one big node for each of those. And every node represents the fact that the tracker was in some state and the event recognizer was in some other state. So we have a node that says the tracker chose the first detection. The event recognizer was in the first state. We have another node that says the tracker chose the second detection. The event recognizer was still in the first state. And you do this for every detection. Then, you do the same thing for the event recognizer being in the second state, et cetera. So you're just taking a cross-product between all of the states. Does that make sense? Another way to say it is that we have two Markov chains. One that's observing the output from the object detector and another one that's observing the output of the first Markov chain. And you do joint inference over them. And the way you can do joint inference is by taking the cross-product. Basically, you have two hidden Markov models. One that does tracking and one that does event recognition. And all we're going to do is joint inference in both of them. So rather than trying to choose the best detection, and then the best state for my event, I'm going to jointly figure out, what's the best detection if I assume I'm in this state? What's the best detection if I assume I'm in this other state? And at the end, I'll pick the best combination. Make sense? So this is a way for your event recognizer to influence your tracker, because now you're jointly choosing the best detection for both the tracker and the event recognizer. So that was really, really simple. We put in a tremendous amount of feedback by just taking a cross product. So we can see this in action. I'm going to show you the same video twice. The person is not going to move in this video at all.
What we told the system is that a ball will approach a person. That's it. We didn't tell the system which person. We didn't tell the system which particular ball, which direction it's going to come from, or anything like that. The top detection in this frame happens to be the window. It's a little hard to see. It's quite a bit stronger than the person. But because neither the window nor the person ever moves in this scenario, the tracker can't possibly help you. You have no motion information. The only way to override that window detection is to know something else about the world. So we told it that the ball will approach. And you can see that for the combined tracker and event recognizer. Indeed, when the ball comes into view, it will make more sense. So the reason why we actually-- coming back to the question that you asked-- the reason why we don't run it over small windows is because we want this effect, where knowledge from much, much later on in the video-- like the fact that the ball will enter or approach that person as opposed to that window-- can actually help you much earlier in the video. If you run it over small windows, you lose that effect. So here, you track the person correctly from the very first frame despite the fact that the ball only comes into view halfway through the video. There are many more examples of this. In this case, it's a person carrying something. Here, we told the system one person's carrying something. And you'll see when the person moves, we can detect the person and the bag. The object detector fails much, much earlier because the person was deformable. So we've seen how we can combine together trackers and event recognizers. And now, we need to add sentences. And the trick for adding sentences is going to be to do more of the same. What we're going to do is we're going to take a tracker. It's just exactly what we saw before. And what we just did a moment ago is we combined it with an event recognizer. Well, there's no reason why we can't add more trackers. We actually kind of did that, right? We were tracking both a person and a ball a moment ago. So we can take an even bigger cross-product, have multiple trackers, and have multiple words. So all we're saying is, I have, say, five trackers that are running. I have five words that I want to detect, or 10 words that I want to detect. And I want to make the choice for all of these 5 trackers jointly, so that they match all of these 10 words. In this picture, basically our words are kind of-- our sentences are kind of like bags of words, right? Every word is combined with every tracker. But we know, if you look at the structure of a sentence like the tall person quickly rode the horse, that not every word refers to every object in the sentence. So you can run your object detectors over your video. And you can look at your sentence. And you can look at the nouns and say, OK. So I have people and horses inside the sentence. And you can say, OK. Well, if I have people and horses, I need two trackers. But you can look a little bit more at your sentence and see that, oh, well, it's the other horse. So you analyze your sentence and you can determine there are three participants in the event described by the sentence. There's a person and two horses. One's the agent. One's the patient-- the thing that's being ridden-- and one's the source-- the thing that is being left. Does that make sense? Awesome. So now, given a sentence, we know that we need n trackers. And for every word, we can have a hidden Markov model.
We can have a hidden Markov model for ride. It's just another verb. And we just have to be careful how we build a feature vector for ride. Because if we build it in one way, we're going to detect the person rode the horse. And if we build it in the opposite way, by concatenating the vectors the other way around, we're going to detect the horse rode the person, which is not what we want. We can also detect tall. Tall is kind of a weird hidden Markov model, right? It has only a single state, but it's still a hidden Markov model. It just wants to see that this object is tall. So maybe its aspect ratio is more than the mean aspect ratio of objects of this class. But nonetheless, it still fits into this paradigm. We can do the same thing for quickly. We can have an HMM for that. We can do leftward. We can do away from. Away from looks a lot like leave. It's the same meaning. And basically, we end up with this bipartite graph. At the top, we have lattices that represent words. Each word has a hidden Markov model. And in the middle, we have lattices that represent trackers. We can combine them together according to the links. And you can get these links from your favorite dependency parser. You can get them from Boris's START system. Any language analysis system will give you this. So this is actually all the heavy lifting that we have to do. Everything from now on is kind of eye candy. One thing that we really wanted to make sure the system was doing is that we could distinguish different sentences. So we tried to come up with an experiment that is, in some way, maximally difficult, where events are going to happen at the same time. So you can't use time in order to distinguish them. And the sentences only differ in one word or one lexical item. So in this case, we have a sentence like the person picked up an object and the person put down an object. There are two systems that are running. One is running on one sentence. One is running on the other sentence. You're going to see the same video played twice side by side. And you can already see that one system, when we primed it to look for pickup, detected me picking up my backpack. And then, the other one detected one of my lab mates picking up a bin. So the only way it could focus its attention on the right object is if it understood the distinction between these two sentences, or if it was able to represent them. So we can play this game many, many times over. We can have it pay attention to the subject. Is a backpack approaching something or is a chair approaching something? We can have it pay attention to the color of an object. Is the red object approaching something or a blue object approaching something? We can have it pay attention to a preposition. Is someone picking up an object to the left of something or to the right of something? And we have many, many dozens or hundreds of these. And I won't bore you with all of them. But the important part is we can handle lots and lots of different parts of speech. And we can still represent them and we can still be sensitive to these subtle distinctions in the meanings of the sentences. All right. So we did all the hard work. And we actually built this recognizer-- the score of a sentence given a video. And now, it turns out that we can reformulate all of these other tasks in terms of this one score. And it's going to do all the heavy lifting for us. So when we tune the parameters of whatever goes into the scoring function, we're going to get the ability to do all these other tasks.
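As a toy illustration of the bipartite word-tracker structure just described, here is a sketch where each word is a one-state stand-in for its HMM (like the "tall" example), words are linked only to the participants they describe, and the track assignment is chosen jointly to satisfy all words at once. Everything here-- the features, the word scores, the sentence-- is made up for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
T = 6
tracks = rng.random((4, T, 2)) * 10       # 4 candidate tracks of (x, y) positions

def velocity(track):
    return np.diff(track, axis=0)

# One-state stand-ins for word HMMs, scoring the tracks they are linked to.
def w_quickly(track):                     # adverb: high average speed
    return np.linalg.norm(velocity(track), axis=1).mean()

def w_approach(agent, goal):              # verb: distance to the goal decreases
    d = np.linalg.norm(agent - goal, axis=1)
    return d[0] - d[-1]

# "The person quickly approached the chair": the parser's links say that
# 'quickly' scores the person track and 'approached' scores (person, chair).
def sentence_score(person, chair):
    return w_quickly(person) + w_approach(person, chair)

# Joint choice: one assignment of tracks to participants, best for all words.
best = max(itertools.permutations(range(4), 2),
           key=lambda p: sentence_score(tracks[p[0]], tracks[p[1]]))
print("person track:", best[0], "chair track:", best[1])
```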
So let's look at retrieval. It's the most straightforward kind of task, right? It's what YouTube does for you. You go to YouTube. You type in a query, and YouTube comes back with some answers. So let's see what YouTube actually does. If you look at YouTube, and if you look at something like pickup, you get men picking up women. If you look at approach, you get men picking up women. If you look at put down, once upon a time you did get men picking up women, but rap is now more popular. If you ask something more interesting-- the person approached the other person-- you don't get videos where people approach each other. You get videos about how you should approach women. I didn't select these. I typed them in and this is just what happened. If you type in, like, the person approached the cat, you get lots of people playing with cats, but no one approaching cats, including a link that's kind of scary and an Airbus landing. And I have no idea what that means. So what we did is we built a video retrieval system that actually understands what's going on in the videos as opposed to just looking at the tags that the people apply to these videos. People don't describe what's going on. People describe some high-level concept. So we took a whole bunch of object detectors that are completely off the shelf, for people and for horses. And we took 10 Hollywood movies. Nominally, they're all Westerns. They involve people on horses. And the reason why we chose people on horses was because people on horses tend to be fairly large in the field of view. And given that object detectors suck so much, we thought we should kind of help the system along as best we could. So we built a system. It's a system that knows about three verbs. It knows about two nouns, person and horse. It knows about some adverbs, quickly and slowly. It knows about some prepositions, leftwards, rightwards, towards, away from. And given this template, you can generate about 200, 300 different sentences. So we can type in something like the person rode the horse. And we can get a bunch of results. So you can see, we were 90% accurate in the top 10 results. You can see these are really videos of people riding horses. The way this works is we took one of these long videos. We chopped it up into many small segments and we ran over each individual segment. You could run it over the whole video, but then it would just classify the whole video, because it's an HMM and would sort of adapt to the length of the video. We can also ask for other kinds of queries, like the person rode the horse quickly. You can see we get videos that really are quicker. We can ask for something more ambitious, like the person rode the horse quickly rightward. And we get videos where people are riding horses rightward. All right. So we did the hard work of building this recognition system. And we saw we can use it for another task, which is retrieval.
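A sketch of the retrieval loop just described: chop each long video into short segments, score every segment against the fixed query sentence, and return the top hits. The `sentence_score` function is assumed to be the sentence-given-video recognizer built above; the segment length and data layout are invented.

```python
def retrieve(videos, sentence_score, seg_len=60, top_k=10):
    """Rank short segments of long videos by the query sentence's score.
    `videos` maps a video id to its list of frames; `sentence_score`
    scores one segment against the fixed query sentence (assumed)."""
    scored = []
    for vid_id, frames in videos.items():
        for start in range(0, len(frames) - seg_len + 1, seg_len):
            segment = frames[start:start + seg_len]
            scored.append((sentence_score(segment), vid_id, start))
    scored.sort(reverse=True)          # best-scoring segments first
    return scored[:top_k]
```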
But let's do something else. Let's do generation. Someone asked about generation earlier. Generation is very similar to retrieval. In retrieval, what we had was we had a fixed sentence and we searched over all our videos to see which ones were the best match. Here, we have a fixed video. And we're going to search over all our sentences. The only trick is you have a language model, so it can generate a huge number of sentences. But we're going to see that's OK. So we have a language model. It's a very, very small model by Boris's standards, or by the standards of NLP. We have only four verbs, two adjectives, only four nouns, some adverbs, et cetera. But the important part is even if we ignore recursion, we have a tremendous number of sentences. And this model is recursive, so we can really generate an infinite number of sentences from it. But nonetheless, it turns out that you can search the space of sentences very, very efficiently and actually find the global optimum. And the intuition for why that's true is pretty straightforward. You can think of your sentence as a constraint on what you can see in the world. The longer your sentence, the more constraints you have. So the lower the overall score is. So every time you add a word, the score can't possibly increase, right? The score has to always decrease. So basically, you have this monotonically-decreasing function over a lattice of sentences. And if you ignore the fact that you only have to search sentences, you can start off with individual words and aggregate words together. So you look at all one-word phrases. You can look at two-word phrases, three-word phrases. Eventually, you get out to real sentences. But because this is a monotonically-decreasing function, this is a very quick search. So you can start off with an empty set. You can add a word. For example, you can add carried. You can look at all the ways that you can extend carried with another word or two. So you get a phrase like the person carried. And you can keep adding words to it until you get to the global optimum. So given a video like this, where you see me doing something, you can produce a sentence like the person to the right of the bin picked up the backpack. And that's pretty straightforward. We built a generator in just a few lines of code as long as we had our recognition system.
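A sketch of the generation search just described: because adding a word can only lower the score, best-first search over partial sentences finds the global optimum, and the first complete sentence popped off the queue is optimal. The grammar's `extensions`, the recognizer's `score`, and the `is_sentence` test are all assumed to be supplied; none of this is the actual implementation.

```python
import heapq

def generate(video, score, extensions, is_sentence):
    """Best-first search over partial sentences. Because extending a
    sentence can only lower `score`, the first complete sentence popped
    is the global optimum. `extensions(words)` yields the one-word
    extensions allowed by the grammar (assumed)."""
    heap = [(-score((), video), ())]           # max-heap via negated scores
    while heap:
        neg_s, words = heapq.heappop(heap)
        if is_sentence(words):
            return words
        for longer in extensions(words):
            heapq.heappush(heap, (-score(longer, video), longer))
    return None
```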
So you have this problem in question answering that you have to connect two sentences with a video. And instead of doing that, what we're going to do is we're going to make some connection between two sentences. So we're going to take our question. We're going to give it to something like Boris's system. And it's going to tell us: for this question, like, what did the person put on top of the red car?-- if you wanted to answer it, you would produce an answer like, the person put some noun phrase on top of the red car. So you can run the generation system exactly as was suggested. You seed it with this. You give it a constraint that what it has to produce next inside this empty gap is a noun phrase. And you're going to get out the answer. Another way to think about this is you have sort of a partial detector. You look inside the video to see where it matches. You choose the best region where it matches, and then you complete your sentence. And you get an answer like the person put the pear on top of the red car. There's one small problem with question answering, and it differs from generation in one way. So imagine that we're in a parking lot and there are a hundred white cars inside this parking lot. And you come to me desperate and you say, I lost my keys. And I say, don't worry. I know exactly where your keys are. And you look at me and I say, they're in the white car. And then you think I'm a complete asshole, because that was totally worthless information, right? I told you something that's basically true-- it's a parking lot full of white cars-- but it isn't actually giving you anything useful. So to handle this-- in the same way that in generation, we had this one parameter that we could tune to get more or less verbose sentences-- we're going to add only one parameter to question answering, which is kind of a truthfulness parameter. Which basically is going to say, this sentence, the person put an object on top of the red car in this video, is very ambiguous, right? It could either be Danny that did it or it could be me that put something on top of the red car. So what we're going to do is we're going to take this candidate answer. We're going to run it over the video. And we're going to see how many times it has really close matches in the video. And depending on this one parameter, we're going to say: you are allowed to say more things about the video to become more specific about what you're referring to, but potentially, slightly less true, because the score will be lower. In the same way that you were saying slightly more in the generation case at the risk of saying potentially something that's slightly less true. So this way, you can ignore the sentence which is unhelpful. And you can end up saying something like, the person on the left of the car put an object on top of the red car. So we can actually do that and the system produces that output. We built one recognition approach. And we did retrieval, generation, and question answering with it. We can also do disambiguation with it. In disambiguation, we take a sentence, like Danny approached the chair with a bag. And you can imagine that this sentence can mean multiple things. It could mean Danny was actually carrying a bag and approaching a chair. Or it could mean there was a bag on a chair and Danny was approaching it. And there's the question of, how do you decide which interpretation of the sentence corresponds to which video? Basically, you can take your sentences and you can look at their parse trees. And you're going to see that they're different. Essentially, your language system is going to give you a slightly different internal representation for each of these. And we already know that when we build our detectors for the sentence, we take these kinds of relationships between the words as inputs. So even though there's one sentence in English that described both of these scenarios, when we build detectors we're going to end up with two different detectors. One for one meaning, one for the other meaning. And then we can just run the detectors and figure out which meaning corresponds to which video. And indeed, that's what we did. Except that there are lots and lots of different potential ambiguities. There are different kinds of attachment. In the same case-- I won't go through all of them. But for example, you might not know where the bag is. You might not know who's performing the action. You might not be sure if both people are performing the action or only one person is performing the action. There may be some problems with references. So this is a very simple example, like Danny picked up the bag in the chair. It is yellow. But this is the kind of thing that you would see if you had a long paragraph. You would have some reference later on or earlier on to some person. And you wouldn't be sure who was the referent. And it turns out that if you have sentences like this, you can disambiguate them pretty reliably. So what's important is it's not just a case of parse trees. We need a more interesting internal representation. And an example of how we do this is we take a sentence and we make some first-order logic formula out of it. So you have some variables. The chair is something like x. You have Danny, who moved it, and I moved it.
Or in the other case, you have two separate chairs. And I moved one and Danny moved the other. And they're distinct chairs. What we do is we first ignore the people. So we just say there are two people. And in both cases, they're distinct from each other. But we don't have person recognizers, face recognition, or anything like that. Then for each of these variables, we build a tracker. And for every constraint, we have a word model. And essentially, you can go from this first-order logic formula to one of our detectors. So it's exactly the same thing as the case where we had a sentence and a video. And we just wanted to see, is the sentence true of the video? Except that now we have a sentence interpretation and the video. So we've seen that if all you have are multiple interpretations of a sentence, you can figure out which one belongs to which video. And we'll come back to this in a moment, because it's actually quite useful. So you can imagine a scenario where you want to talk to a robot. And you want to give it a command. You don't want to play 20 questions with it, right? You want to tell it something. It should look at the environment. And it should figure out, you're referring to this chair and this is what I'm supposed to do. So the other reason for disambiguation is going to be because you get a lot of ambiguities while you're acquiring language. So we're going to break down language acquisition into two parts. One part is we want to learn the meanings of each of our words. And another one is we want to learn how we take a sentence and we transform it into this internal representation that we use to actually build these detectors. So if you look at the first one, let's say you have a whole bunch of videos. And every video comes with a sentence. You don't know what the sentence is actually referring to in the video. When children are born, nobody gives mothers bounding boxes and tells them, put this around the Teddy bear so your child knows what you're referring to. So we don't get those. We have this more weakly-supervised system. But what's important is we get this data set and there are certain correlations in this data set, right? We know the chair occurs in some videos. We know that backpack occurs in others just by looking at the sentence. We know pickup occurs in others. So basically, this is the same thing as training one big hidden Markov model. Except that now we have multiple hidden Markov models that have a small amount of dependency between them. And I won't talk about this. You'll have to take my word for it. You can look at the paper. But it's identical to the Baum-Welch algorithm. Essentially, all you do is you take the gradient through all the parameters of these words and you can acquire their meanings. There are lots of technical issues with this, but that's the general idea. So we can also look at learning syntax. And this is something that we haven't done, but we really want to do. And this is where the disambiguation work really comes into play. So if I give you a sentence, like Danny approached the chair with a bag, you feed it into a parser. Something like Boris's START system. And you get potentially two parse trees, right? One for one interpretation and one for the other interpretation. You take the video and you can select one of these parse trees. That's the game we just played a moment ago. But imagine that we take Boris's system and we brain damage it a little bit. Or we take some deep network that does parsing and we just randomize a few of the parameters.
So now, rather than getting a single parse tree, or two parse trees for our two interpretations, we get 100 or 1,000 different parse trees. We can take each one of those and we can see, how well does this match our video? And we get some distribution over them. Maybe we won't get a single one that matches the best. Maybe we'll get a few that match well and a bunch that match really, really poorly. So this provides a signal to actually train the parser. Essentially, you have a parser that produces a distribution over parse trees. You use the vision system to decide which of these parse trees are better than others. And you feed this information back into the parser and retrain it. We haven't done this, but it's in the pipeline. And eventually, the idea is that we're going to be able to close the loop and learn the meanings of the words while we end up learning the parser. But that's further down the line. So lest you think that there's something remarkable about language learning in humans, actually lots of animals learn language, not just humans. And here's a cute example of a dog that does something that our system can't do. And actually, no language system out there can do. So there's this paper, but this is from PBS. And what ended up happening is this dog knows the meaning of about 1,000 different words because there are labels that have been attached to different toys. So it has 1,000 different toys. Each one has a unique name. And if you tell the dog, give me Blinky, it knows exactly which toy Blinky is. And it has 100% accuracy getting you Blinky from its big, big pile of toys. So what they did is they took 10 toys. They put them behind the sofa. And they added one additional toy that the dog has never seen before. They tested the dog many times to make sure that it doesn't have a novelty preference or anything like that. And then they asked the dog, bring me Blinky. And you can see the dog was asked. It goes behind. It quickly finds Blinky. It brings it back. And there we go. And now, the dog is really happy. So now, the dog is going to be asked, bring me this new toy. Bring me the professor, or whatever the toy is called. It's a little less certain. OK. So it's going to go behind and it's going to look at all the objects. The toy with the beard is the new one that it hasn't seen before. And it was there in the previous trial. So it looks around and it's a little uncertain. It doesn't quite want to come back. We're going to see that we're going to have to give it another instruction in a moment. He's going to call it back and ask the dog to do exactly the same task again. He isn't telling it anything new. It's just to give it some encouragement. So it's looking around for some toy. And it picks the-- you'll see in a moment. It picks the toy that it hasn't seen before, because it's a new word. And the dog is really happy. And I think the human is even happier that this actually worked. But the important part is, there's this dog that we normally don't associate with having a huge amount of linguistic ability. But it's learning language in a way that is far more advanced than anything that we have. And it's learning it in a grounded way, like it had to connect its knowledge about what it sees with these toys to this new object that it's never seen before and understand this new label. And dogs are not the only animal that can do this. There are many other animals that can do this. All right. And of course, children do this as well.
So there was a question about the fact that we're constantly using videos here. And we're very focused on motion. But of course, in many of these sentences, we were referring to objects that were static. So we're not only sensitive to objects that are moving. So for example, when I said something like it was the person to the left of the car, neither the person nor the car were moving in that question. It was the pear that was moving. But there's an interesting question, what if you want to recognize actions in still images? After all, we can do it. It probably didn't involve looking at photos, you know, 200 million years ago when our visual system was being formed. So somehow, we take our video ability and we apply it to images. And the way we're going to do that is by taking an image and predicting a video from it. We haven't done this, but we've done the part where you can actually predict motion from single frames. So the intuition about why this works is, if you look at this image and I ask you, how quickly is this baseball moving? You can give me an answer. AUDIENCE: Not very quickly. ANDREI BARBU: Not very quickly. Right. And if you look at this baseball, you can decide that it's moving very quickly, right? So the other story in this talk is I'm becoming more and more American. I started with the Canadian flag and now I ended up with baseball. All right. So you can clearly do this task. There is good neuroscience evidence that people are doing this fairly regularly. Kids can do this, et cetera. All right. So now, what we did is we went to YouTube and we got a whole bunch of videos. Videos that contain cars or different kinds of objects. We had eight different object classes. And we ran a standard optical flow algorithm just off the shelf. And this gives us an idea of how the motion actually happens inside this video. Then, we discard the video. And we only keep one of the frames. And we train a deep network-- this is the only time deep networks appear in this talk-- that takes as input the image and predicts the optical flow. It looks a lot like an auto-encoder, except the input and the output are different from each other. And it turns out this works pretty well. It has similar performance to actually doing optical flow on the video with sort of a crappier, earlier optical flow algorithm.
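A minimal sketch (in PyTorch, purely illustrative) of the single-frame flow predictor just described: an encoder-decoder that takes one frame and regresses the optical flow that an off-the-shelf flow algorithm computed from the full video. The architecture, layer sizes, and data here are invented; only the training setup-- image in, precomputed flow as the target-- follows the description.

```python
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    """Auto-encoder-like network: input and output differ (image -> flow)."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1))  # (u, v) flow

    def forward(self, frame):
        return self.decode(self.encode(frame))

model = FlowPredictor()
frame = torch.randn(8, 3, 64, 64)        # a batch of single frames
target_flow = torch.randn(8, 2, 64, 64)  # flow precomputed from the videos
loss = nn.functional.mse_loss(model(frame), target_flow)
loss.backward()                          # the video itself is then discarded
```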
So up until now, these are things that we've done. At the end I'll talk briefly about what we're doing in the future. So one thing that you can do is translation. And you can cast translation as a visual language task, even though it sounds like it has nothing to do with vision. So if I give you a sentence in Chinese, you can imagine scenarios for that sentence, and then try to describe them with another language that you know. This is very different from the way people do translation right now. So right now, the way it works is you have a sentence, like Sam was happy. And you have a parallel corpus. If you want to translate into French, you go off, you get the Hansard corpus, and you get a whole bunch of French and English sentences that are aligned with each other, and you learn the correspondence between them. Here, I translated into [AUDIO OUT] Russian. The important part is in English, there's no assumption about the gender of Sam. Sam is both a male name and a female name. But the problem is Romanian, Russian, French, et cetera, they really force you to specify the gender of the people that are involved in these actions. And you have to go through a certain amount of [AUDIO OUT] really want to avoid specifying their gender. So here, we specify the gender as male. Here, we specify the gender as female. And if all you have is a statistical machine translation system, you may get an arbitrary one of these two. And you may not know that you've got an arbitrary one of these two. And there may be a terrible faux pas at some point. So this problem is not restricted to gender. And it occurs all the time. For example, in Thai, you specify your siblings by age, not by their gender. So if you have an English sentence like my brother did x, translating that is quite difficult. In English, you specify relative time through the tense system, but Mandarin doesn't have the same kind of tense system. In this language that I never tried to pronounce after the first time that I tried, you don't use relative direction. So you don't say the bottle to the left of the laptop. We all agree on a common reference frame like a hill or something. Or we agree on cardinal directions. And you say, the bottle to the north or something. And these people are really, really good at wayfinding because they constantly have to know where north is. Many languages don't distinguish blue and green. Historically, this is not something that languages have done. It's pretty new. For example, Japanese didn't until a hundred years ago. They only started distinguishing the two fairly recently when they started interacting with the West more. And many languages don't set that boundary at exactly the same place. So in one language, you may say blue. In another language, you may have to say green. In Swahili, you specify the color of everything as the color of x. So like in English, we have orange. But in Swahili, I could say the color of the back of my cell phone. And I expect you to know that's blue as long as you can see it. In Turkish, there is a relatively complicated evidentiality system. So you have to fairly often tell me why you know something. So if you saw somebody do something, you have to mark that in the sentence as opposed to hearing it from someone else. So if it's hearsay, you have to let me know. There are much more complicated evidentiality systems where you have to tell me, did you hear it, did you see it, did you feel it? It can get pretty hairy. So there are a lot of reasons why just doing the straightforward sentence alignment can really fail on you. And you can make some pretty terrible mistakes. And more importantly, you just won't know that it made these mistakes. So instead, what we've been thinking is sort of translation by imagination. So you take a sentence. And it's a generative model that we have that connects sentences and videos. And what you do is you sample. You sample a whole bunch of videos. So basically, you imagine what scenarios the sentence could be true of. You get your collection from the generator. You search over sentences that describe these videos and you output a sentence that describes them well in aggregate. So basically, you just combine your ability to sample, which comes from your recognizer, and your ability to generate. And you get a translation system. So you do a language-to-language task mediated by your understanding of the real world. Something else that you can do is planning, which I'll just say two words about. All you do is-- [PHONE RINGING] --in a planning task, what you have is you have a planning language. I'm glad that wasn't my cellphone. You have a planning language, right?
So you have a fairly constrained vocabulary that you can use to describe your plans. And this allows you to have efficient inference. Instead, you can imagine that I have two frames of a video, real or imagined, where I have the first world-- I am far away from the microphone-- and I have the last world, where I'm near the microphone. And I have an unobserved video between the two. People have worked, and I've done some work, on filling in partially observed videos. So it's a very similar idea, except that here we have a partially-observed video. And we know that this partially-observed video should be described by one or more sentences. So we're going to do the same kind of sampling process, where we sample from this partially-observed video and we try to describe what the sentence is. And now you're doing planning. You're coming up with a description of what had happened in this missing chunk of the video. But your planning language is English, so you get to take advantage of things like ambiguity, which you couldn't take advantage of in most planning languages. Theory of mind. The idea here is relatively straightforward as well. So what we have right now, basically, are two hidden Markov models. Or two kinds of hidden Markov models, right? There's a video. We have some hidden Markov models that are tracks, and we have some hidden Markov models that look at the tracks and do some inference about what's going on with the events in these videos. So now imagine that I had a third kind. A third kind of hidden Markov model that only looks at the trackers. Doesn't look at the words directly. And what it does is it makes another assumption about the videos. So first, we assumed the objects were moving in a coherent way. Then, we assumed that the objects were moving according to the dynamics of some hidden Markov models. Now, we're going to assume that people move according to some dynamics of what's going on inside our heads. So you can assume that I have a planner inside my head that tells me what I want to do and what I should do in the future to accomplish my goals. And you can look at a sequence of my actions and try to infer: if you believe this planner is running in my head, what do you think I should do next? Now, the nice part about many of these planners is that they look a lot like these hidden Markov models. And the inference algorithms look a lot like these models. So basically, you can do the same kind of trick by assuming that HMM-like things are going on inside people's heads. So you can do things like predict actions, figure out what people want to do in the future, what they did in the past. That's what the project is about. I want to show you another example of vision and language, but in a totally different domain that I won't talk about, which is in the case of robots. This is something that we built several years ago. This is a robot that looks at a 3D structure. It's built out of Lincoln Logs. They're big. They're easier for the robot to manipulate than LEGOs. The downside is they're all brown, so it's very difficult to do vision on this. But it actually will, in a moment, reconstruct the 3D structure of what it sees. And we annotated in red what errors it made. We didn't tell it this. What it does is it measures its own confidence and it figures out what parts are occluded. So it has too little information. And it plans another view. It goes, it acquires it by measuring its own confidence. This view is actually worse than the previous view, but it's complementary.
So it will actually gain the information that it's missing. And all of this comes from the same kind of generative model trick that I showed you a moment ago. A similar model, it just makes different assumptions about what's built into it. So now, because we have a nice generative model, we can integrate the two views together. You're going to see in a moment. It'll still make some mistakes. It won't be completely confident because there are some regions that it can't see, even from both views. And then what we told it is, OK, fine. For now, ignore the second view, take just the first view. Here's a sentence. Or in this case, a sentence fragment. The fragment is something like, there's a window to the left and perpendicular to this door. It'll just appear in a moment. And integrating this one view that it saw, that it was uncertain about, with that one sentence-- which is also very generic and applies to many structures-- it determined that these two together completely disambiguate the structure. And now, it's perfectly confident in what's going on. And it can go and it can disassemble the structure for you. And we can play this game in many directions. We can have the robot describe structures to us. We can give it a description and it can build the structure. One robot can describe the structure in English to another robot who can build it for it. And it's exactly the same kind of idea. You connect your vision and your language model to something in the real world, and then you can play many, many different tricks with one internal representation without modifying it at all. But I realized yesterday that I was the last speaker before the weekend, so I want to end by leaving you as depressed as I possibly can, and tell you all the wonderful things that don't work. And how far away we are from understanding anything. So first of all, we can't generate the kind of coherent stories that Patrick looks at. Really, if you look at a long video, what we can do is we can search or we can describe small events. A person picks something up. They put it down. What we can't say is the thief entered the room and rummaged around and ran away with the gold. That's the kind of thing that you want to generate. It's the kind of thing that kids generate, but we're not there yet. Not even close. We also only reason in 2D. There's no 3D reasoning here. And that significantly hurts us. Although, we have some ideas for how we might do 3D. Another important aspect is we don't know about forces and contact relationships. Now, that's fine as long as pickup means this kind of action where you see me standing next to an object and the object moving up. But sometimes, pickup means something totally different. So you're going to see this cat is going to pick up that kitten in just a moment. And you're going to see, if you pay attention to the motion of the cat, that it doesn't look like it's picking something up. It's not very good at picking up the kitten, mind you. I think this may be its first try. I think it's having a good day. It's OK. Struggling a little bit. But see? So it definitely picked it up. But it didn't look anything like any of the other pickup examples I showed you. But conceptually, you should totally recognize this if you've seen those other examples. And kids can do this. So the important part is you have to change how you reason-- you can't just reason about the relative motions of the objects. You have to assume that there are some hidden forces going on.
And you have to reason about the contact relationships and the forces that the objects are undergoing. What happens if you try to recognize a helicopter picking something up? It looks totally different from a human doing it, but no one has any problems recognizing this. Segmentation is also a huge problem. For many of these problems, you have to pay attention to the fine boundaries of the objects in order to understand that that kitten was being rotated and then slightly lifted. There's also a more philosophical problem about what is a part and what it means for something to be an object. We arbitrarily say that the cat is an object, but I could refer to its paws. I could refer to its ears. I could refer to one small patch on its back. As long as we all know what we're talking about, that can be our object. And that's a problem throughout computer vision. It also occurs in a totally different problem. So if you've ever seen Bongard problems, there are these problems where you have these weird patches, and you have to figure out what's in common between them. And that's the case where you have to dig deep into your visual system to extract a completely different kind of information. And this is an example that I prefer. So in this task, you can try to find the real dog. And we can all spot it after you look for a little while. Right? Does everyone see it? OK. So you can all see it. The interesting part is-- I mean, I doubt you have ever had training detecting real dogs amongst masses of fake dogs. But somehow, you were able to adapt and extract a completely different kind of information from your visual system. Information that isn't captured by our feature vector, as I talked about-- the color, location, velocity, et cetera. So you have this ability to extract out task-specific information. You can do things like theory of mind, but you can do far more than assume people are running a planner. You can detect if I'm sad, if I'm happy. You can reason about whether two people are having a particular kind of interaction-- who's more powerful than the other person. You also have a very strong physics model inside your head that underlies much of this. And even more than that, there's the concept of modification. So walking quickly looks very different from running quickly. And the way you model these is quite complicated. And the system that I presented doesn't do a good job of it. But one of my favorite examples from my childhood long ago is this one, which is a kind of modification. So Coyote is going to draw this. You're going to see the Roadrunner try to run through it. And he makes it. And you can imagine what's about to happen next. Coyote is not going to have a good day. So this looks silly, right? And you would think to yourself, how could we possibly apply this to the real world? But actually, this happens in the real world all the time. A cage can be open for a mouse, but closed for an elephant. So if you're going to represent something like, is something closed or not, you have to be able to handle situations like this. And that's why kids can understand really weird scenarios like this, because they're not so outlandish. There's also the problem of the vast majority of English verbs-- things like absolve, admire, anger, approve, bark, et cetera. All of them require far more knowledge. They require many of the things I've talked about before. And actually, far more than them. And what's even worse is we also use language in pretty bizarre ways.
So there are some kinds of idioms in English, like the market [AUDIO OUT] bullish, that you have to have seen before to understand, right? There's no reason to assume that bears are better or worse than bulls when you apply them to the stock market. On the other hand, there are certain things that are very systematic. I can have an up day or a down day, because we've all, kind of as a culture, agreed that up is good and down is bad. Some cultures have made the opposite choice. But usually, it's up is good, down is bad. So an idea can be grand or it can be small. Because we've decided big things are better than small things. Someone's mood can be dark or light. And these are very systematic variations that underlie all of language. And we constantly use metaphoric extension in order to describe what's going on around us and to talk about abstract things. It really seems as if this is kind of built in to our model of the world. And modeling this is kind of over the horizon. And there are many, many, many other things that we're missing here. So I just want to thank all my wonderful collaborators, like Boris, Max, Candace, and people at MIT, and people elsewhere. But to recap, what we saw is that we can get a little bit of traction on these problems. We can build one system that solves one simple problem-- it just connects our perception with our high-level knowledge, takes a video and a sentence, and gives us a score. And once we have this interesting connection, this interesting feedback between these two very different-looking systems, it turns out that we can do many different and sometimes surprising things. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Danny_Jeck_Impact_of_Attention_on_Cortical_Models_of_Visual_Recognition.txt | [MUSIC PLAYS] DANNY JECK: Hi, I'm Danny Jeck. I'm a fifth-year grad student getting my PhD at Johns Hopkins in biomedical engineering. My project, what I'm trying to look into, is how attention is related to models of visual cortex. The first lecture was by Jim DiCarlo. He gave a talk about how the models of object recognition seem to match pretty well with the behavior of inferotemporal cortex in macaques. We know that also in macaques earlier areas of visual processing are modulated by attention. So the question is, well, OK. We have this model; let's say we add some modulation due to attention. What does that do downstream as that information propagates through the network? I'm building a model in Python right now. And it's running. The main goal of the model is to see how some modulation in earlier cortex would propagate through a model like what we believe is happening in the brain already. A boring finding would be that a 10% modulation results in a 10% modulation downstream. I'm expecting that that's not the case, because there's a whole bunch of nonlinearities and normalization that happens that should propagate through this network. The question is, what is the magnitude of that? How does that affect things? If the 10% modulation is not actually the right number-- because of some measurements, or the way I'm interpreting the measurements that have been made already-- what would different numbers allow for? Or perhaps the modulations we found downstream are all due to other feedback from other areas rather than this going back to the beginning and propagating all the way through. So the idea came about from Ethan Meyers. He was originally interested in trying to do this kind of two-pass approach through a network, one pass in which you sort of try and figure out the location of an object, and another in which you try to recognize it. I kind of took that in a different direction because I was more interested in the neurophysiology side of things. In my current lab, I wouldn't have had time to do something like this because I wasn't planning on investing a lot of time understanding what deep networks were. So really, having the time to sort of work on a free project has been really nice. [MUSIC PLAYS] |
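To illustrate the kind of question being asked here-- not Danny Jeck's actual model-- a toy sketch in Python: a small feedforward network with rectification and divisive normalization, where a 10% gain is applied to a subset of early-stage inputs and the resulting downstream change is measured. The layers, sizes, and normalization scheme are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, eps=0.1):
    """One toy cortical stage: linear filtering, rectification,
    then divisive normalization across the population."""
    r = np.maximum(W @ x, 0.0)
    return r / (eps + r.sum())

depth, width = 4, 50
Ws = [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(depth)]
x = np.abs(rng.standard_normal(width))      # toy input activity

def run(x0, gain=1.0, n_attended=10):
    y = x0.copy()
    y[:n_attended] *= gain                  # attentional gain on a subset of inputs
    for W in Ws:
        y = layer(y, W)
    return y

base = run(x)
mod = run(x, gain=1.10)                     # 10% modulation at the earliest stage
rel_change = np.abs(mod - base).sum() / base.sum()
print(f"relative downstream modulation: {rel_change:.3f}")  # generally != 0.10
```

Because of the rectification and normalization, the downstream change generally does not equal the 10% applied at the input, which is the non-boring outcome described above.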
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_32_Lorenzo_Rosasco_Machine_Learning_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORENZO ROSASCO: So what we want to do now is to move away from local methods and start to do some form of global regularization method. The word regularization I'm going to use broadly, as a term for procedures-- statistical and computational procedures-- that have some parameter that allows you to go from complex models to simple models, in a very broad sense. What I mean by complex is something that is potentially going closer to overfitting, and by simple model something that gives me something which is stable with respect to the data. So we're going to consider the following algorithm. I imagine a lot of you have seen it before. This is called-- it has a bunch of different names-- probably the most famous one is Tikhonov regularization. A bunch of people at the beginning of the '60s thought about something similar, either in the context of statistics or solving linear equations. So Tikhonov is the only one for which I can find the picture. The other one was Philips, and then there is Hoerl and other people. They all basically thought about this same procedure. The procedure is based on a functional that you want to minimize, made of two terms. So there are several ingredients going on here. First of all, this is f of x. We try to estimate the function, and we do assume a parametric form of this function, which in this case is just linear. And for the time being-- because you can really put it back in-- I don't look at the offset. So I just take lines passing through the origin. And this is just because you can prove in one line that you can put back in the offset at zero cost. So for the time being, just think that data are actually standardized. The way you try to estimate this parameter is, on one hand, try to make the empirical error small, and on the other hand, you put a budget on the weights. The reason why you do this-- there are a bunch of ways to explain this. Andrei yesterday talked about margin, and different lines, and so on. Another way to think about it is that you can convince yourself-- and we're going to see this later-- that if you're in low dimension, a line is a very poor model. Because basically if you have more than a few points-- and they're not standing on the line-- you will not be able to make zero error. But if the number of points is lower than the number of dimensions, you can show that the line actually can give you zero error. It's just a matter of degrees of freedom. You have fewer equations than the actual variables. So what you do is that you actually add a regularization term. It's basically a term that makes the problem well-posed. We're going to see this in a minute from a different perspective. The easiest one is going to be numerical. We stick to least squares for-- and there is-- so there is an extra parenthesis that I forgot, but before I tell you why we use least squares, also let me tell you that-- as somebody pointed out-- there is a mistake here, because this should just be a minus. I'll fix this. So back, why do we use least squares?
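For reference, the functional being described-- the empirical error plus a budget on the weights, with the offset dropped and the sign fixed to a minus as the lecturer notes-- can be written as (a sketch in the notation used below):

\[
\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \left( y_i - w^\top x_i \right)^2 \;+\; \lambda \, \lVert w \rVert^2
\]

Here λ is the regularization parameter that trades off the two terms.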
OK, so least squares, on the one hand-- especially if you're in low dimension-- you can think of least squares as a basic way to measure error, but not a very robust one, because you square the errors. And so just one large error can count a lot. So typically, there is a whole literature on robust statistics, where you want to replace least squares with something like an absolute value or something like that. It turns out that, at least in our experience, when you have a high-dimensional problem, it's not completely clear how much this kind of instability will occur and will not be cured by just adding some regularization term. And the computations underlying this algorithm are extremely, extremely simple. So that's why we're sticking to this, because it works pretty well in practice. We actually developed in the last few years some toolboxes that you can use. They're pretty much plug and play. And because the algorithm is easy to understand in simpler terms. Yesterday, Andrei was talking about SVM. SVM is very similar in principle. Basically the only difference is that you change the way you measure cost here. This algorithm you can use both for classification and regression, whereas SVM-- the one which was talked about yesterday-- is just for classification. And because the cost function there turns out to be non-smooth-- and non-smooth is basically non-differentiable-- the whole math is much more complicated, because you have to learn how to minimize things that are not differentiable. So in this case, you can stick to elementary stuff. And, I think I said it somewhere, also because Legendre 200 years ago said that least squares are really great. There is this old story-- who between Gauss and Legendre invented least squares first. And there are actually long articles about this. But anyway, it's around that time. It's around the end of the-- this is when he was born-- it's around the end of the 18th century. So the algorithm is pretty old. So what's the idea? So back to the case we had before, you're going to take a linear function. So one thing is-- just to be careful-- think about it once. Because if you've never thought about it before, it's good to focus. When you do this drawing, this is not f of x. This line is not f of x. It's f of x equals zero. I never made time to do a 3D plot. So f of x is actually a plane that cuts through the slide. It's positive when it's not dotted-- because these points are positive-- and then becomes negative. And this line is where it changes sign. So the decision boundary is not f of x itself, but it's the level set that corresponds to f of x equals zero. Whereas f of x itself is this one line. If you think in one dimension, the points are just standing on a line. Some here are plus 1. Some here are minus 1. So what is f of x? It's just a line. What is the decision boundary in this case? It will just be one point in this case, actually, because it's just one line that cuts the input line in one point. And that's it. If you were to take a more complicated nonlinear line, it would be more than one point. In two dimensions, it becomes one line. In three dimensions, it becomes a plane, and so on and so forth. But the important piece-- just to remember, at least once, when we look at this plot-- this is not f of x, but only the set f of x equals zero, which is where you change sign. And that's how you're going to make predictions.
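A minimal sketch of the estimator under discussion, assuming the linear system derived in the next passage: solve the regularized normal equations, then predict with the sign of the real-valued f(x) for classification, or with f(x) itself for regression.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Minimize (1/n) * ||y - X w||^2 + lam * ||w||^2.
    Setting the gradient to zero gives (X^T X + lam * n * I) w = X^T y."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

def predict(X, w, classify=True):
    f = X @ w                      # the real-valued f(x)
    return np.sign(f) if classify else f
```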
You take real-valued functions. So, in principle, in classification, you would like to allow this function just to be binary. But optimization with binary functions is very hard. So what do you typically do to relax this? You just allow it to be a real-valued function, and then you take the sign. When it's positive, you take plus 1. If it's negative, you say minus 1. If it's a regression problem, you just keep it for what it is. And how many free parameters does this algorithm have? Well, one. It's lambda for now, and w. But w we're going to find by solving this optimization problem. How about lambda? Well, whatever we discussed before for k. We would try to sit down and do some bias-variance decomposition, see what it depends on, try to see if we can get a grasp on what the theory of this algorithm is. And then we try to see if we can use cross-validation. You can do all these things, so we're not going to discuss much how you choose lambda, but mostly we are going to discuss how you can compute the minimizer of this. And this is not a problem, because this is smooth. So you can take the derivative with respect to w of both terms. So what you can do is just take the derivative of this, set it equal to zero, and check what happens. So it's useful to introduce some vectorial notation. We've already seen it before. So you take all the x's and you stack them as rows of the data matrix Xn. The n y's, you just stack as entries of a vector. You call it Yn. Then you can rewrite this term just in this way, as this vector minus this vector here, which you obtain by multiplying the matrix with w. So this norm is the norm in Rn. So this is just simple rewriting. It's useful just because if you now take the derivative of this with respect to w, set it equal to zero, you get this. This is the gradient. So I haven't set it to zero yet. This is the gradient of the least-squares part. This is the gradient of the second term. It is still multiplied by lambda. If you set them equal to zero, what you get is this. You take everything with x, so the 2 and the 2 go away. You take everything with x, and you put it here. There's still the one here with lambda. You put it here. You take this term in x transpose y, and you put it on the other side of the equality. So you take everything with w on one side and everything without w on the other side. And then here, I remove n by multiplying. And so what you get is a linear system. It's just a linear system. So that's the beauty of least squares. Whether you regularize it or not-- in this case for this simple squared loss regularization-- all you get is a linear system. And this is the first way to think about the effect of adding this term. So what is this doing? So, just quickly, a linear system recap. You're solving a linear system. I changed notation a little bit; this is just a parenthesis. The simplest case you can think of is the case where m is diagonal. Suppose it's just a diagonal matrix, a square diagonal matrix. How do you solve this problem? You have to invert the matrix m. What is the inverse of a diagonal matrix? So it's just another diagonal matrix. On the entries, instead of, say, sigma, you have 1 over sigma or whatever it is. So what you see is that if m-- you just consider m-- and m is diagonal like this-- this is what you're going to get. Suppose that now some of these numbers are actually small; then when you take 1 over, this is going to blow up.
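In symbols, for the diagonal case just described, with M = diag(σ₁, …, σ_d):

\[
M^{-1} = \mathrm{diag}\!\left(\frac{1}{\sigma_1}, \ldots, \frac{1}{\sigma_d}\right),
\qquad
(M + \lambda I)^{-1} = \mathrm{diag}\!\left(\frac{1}{\sigma_1 + \lambda}, \ldots, \frac{1}{\sigma_d + \lambda}\right)
\]

When some σ_i is close to zero, 1/σ_i blows up, while the regularized entry 1/(σ_i + λ) is capped at 1/λ: λ trades fidelity to the small singular values for stability, which is the effect discussed next.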
When you apply this matrix to b, what you might have is that if you change the sigmas or the b slightly, you can have an explosion. And if you want, this is one way to understand why adding the lambda would help. And it's another way to look at overfitting, if you want, from a numerical point of view. You take the data. You change them slightly, and you have numerical instability right away. What is the effect of adding this term? Well, what you see is that instead of just doing m minus 1, you're doing m plus lambda I, minus 1. And this is the simple case, where it's diagonal. But what you see is that on the diagonal, instead of 1 over sigma 1, you take 1 over sigma 1 plus lambda. If sigma 1 is big, adding this lambda won't matter. If sigma-- for example, sigma d; now think they are ordered. I'm thinking they are ordered, and sigma d is small. If this is small, at some point lambda is going to jump in, make the problem stable at the price of ignoring the information in that sigma, which you basically consider to be of the same size as the noise, or the perturbation, or the sampling in your data. Does this make sense? So this is what the algorithm is doing. And it's a numerical way to look at stability. But you can imagine that this has an immediate statistical consequence. You change the data slightly, you'll have a big change in your solution, and the other way around. And lambda governs this by basically telling you how much this is invertible. So it's a connection between statistical and numerical stability. Now of course, you can say, this is oversimplistic, because this is just a diagonal matrix. But basically, if you now take matrices that you can diagonalize, conceptually nothing would change. Because basically you would have that if you have a matrix-- so there is a mistake here. There should be no minus 1. If you have an m that you can-- this is just sigma, not minus 1. You can just diagonalize it. And now every operation you want to do on the matrix you can just do on the diagonal. So all the reasoning here will work the same. Only now you have to remember that you have to squeeze the diagonal matrix in between v and v transpose. I'm not saying that this is what you want to do numerically. But I'm just saying that the conceptual reasoning here-- what we said about the effect of lambda-- is going to hold just the same here. This is m, which you can write like this-- m minus 1 you can write like this. And so this is just going to be the same diagonal terms inverted. And now you see the effect of lambda. It's just the same. So once you grasp this conceptually, for any matrix you can make diagonal, it's the same. And the point is that as long as you have a symmetric positive definite matrix, you know you can diagonalize it, and you just have the same thing squeezed in between v and v transpose. And that's what we have, because what we have is exactly this matrix here. And you see here that basically this depends a lot on the dimensionality of the data. If the number of points is much bigger than the dimensionality, it's easier for this matrix to be invertible. But if the number of points is smaller than the dimensionality-- how big is this matrix? So you remember how big Xn was? The rows were the points, and the columns were the variables. So how big is this? We call this d. And we call the number of points n. So this is-- AUDIENCE: [INAUDIBLE] LORENZO ROSASCO: --n by d. So this matrix here is how big?
Just d by d. And if the number of points is smaller than the number of dimensions, the rank of this-- this is going to be rank-deficient. So it's not invertible. So if you're in a so-called high-dimensional scenario, where the number of dimensions is more than the number of points, for sure you won't be able to invert this. Ordinary least squares will not work. It will be unstable. And then you will have to regularize to get anything reasonable. So in the case of least squares, just by sitting down and looking at this computation, you get a grasp of both what kind of computations you have to do, and what they mean, both from the statistical and the numerical point of view. And that's one of the beauties of least squares. We could stick to a whole derivation of this-- so this is more the linear system perspective. There is a whole literature trying to justify, more from a statistical point of view, what I'm saying. You can talk about maximum likelihood, then you can talk about maximum a posteriori. You can talk about variance reduction and the so-called Stein effect. And you can make a much bigger story trying, for example, to develop the whole theory of shrinkage estimators, the bias-variance tradeoff of this. But we're not going to talk about that. So this simple numerical stability, statistical stability intuition is going to be my main motivation for considering these schemes. So let me skip these. I wanted to show the demo, but-- it's very simple. It's going to be very stable, because you're just drawing a one-dimensional line. Let me move on just a bit, because we didn't cover as much as I wanted in the first part. So first of all, so far so good? Are you all with me about this? So again, the basic thing, if you want-- this is the one line where there is something conceptual happening. This is the one line where we make it a bit more complicated mathematically. And then all you have to do is to match this with what we just wrote before. That's all. These are the main three things we want to do. And think a bit about dimensionality. Now if you look at a problem even like this, as I said, this might be misleading-- it's low-dimensional. And in fact, what we typically do in high dimension is, first of all, you start with the linear model and you see how far you can go with that. And typically, you go a bit further than you might imagine. But still, you can think, why should I just stick to a linear decision rule? This won't give me much flexibility. So in this case, obviously, it looks like something that would be better is some kind of quadric decision boundary. So how can you do this? How can you go-- suppose that I give you the code of least squares. And you're the laziest programmer in the world, which in my case is actually not that hard to imagine. How can you recycle the code to fit, to create a solution like this, instead of a solution like this? You see the question? I give you the code to solve this problem, the one I showed you before-- the linear system for different lambdas. But you want to go from this solution to that solution. How could you do that? So one way you can do it in this simple case is-- this is the example. So the idea is-- you remember the matrix? I'm going to invent new entries of the matrix-- not of the points, because you cannot invent points, but of the variables. So what you're going to do is, instead of just-- let's say, in this case I call them x1, x2. I'm just in two dimensions.
These are my data. This is just another example of this. So these are my data-- sorry, these are-- let's see what they are. This is one point. X1 and x2 here are just the entries of the point x, so the first coordinate and the second coordinate. So what you said is exactly one way to do this. And it is-- I'm going to now build a new vector representation of the same points. So it's going to be the same point, but instead of two coordinates I now use three, which are going to be the first coordinate squared, the second coordinate squared, and the product of the two coordinates. Once I've done this, I forget about how I got this, and I just treat it as new variables. And I take a linear model with those variables. It's a linear model with these new variables, but it's a non-linear model with respect to the original variables. And that's what you see here. So x tilde is this stuff. It's just a new vector representation. And now I'm linear with respect to this new vector representation. But when you write x tilde explicitly, it's some kind of non-linear function of the original variables. So this function here is non-linear in the original variables. It's harder to say than probably to see. Does it make sense? So if you do this, you're completely recycling the beauty of the linearity from a computational point of view while augmenting the power of your model from linear to non-linear. It's still parametric in the sense that-- what I mean by parametric is that we still fix a priori the number of degrees of freedom of our problem. It was two; now I make it three. More generally, I could make it p, but the number of numbers I have to find is fixed a priori. It doesn't depend on my data, and it's fixed. But I can definitely go from linear to non-linear. So let's keep on going. So from the simple linear model we already went quite far, because we basically know that with the same computation we can now solve stuff like this. Let's take a couple of steps further. So one is-- appreciate that really the code is just the same. Instead of x, I have to do a pre-processing to replace x with this new matrix x tilde, which is the one that, instead of being n by d, is now n by p, where p is this new number of variables that I invented. Now it's useful to just get a feeling for what the complexity of this method is. And this is a very quick complexity recap. Here basically, the product of two numbers is going to count as one. And then when you take products of vectors or matrices, you just count how many real-number multiplications you do. And this is a quick recap. If I multiply two vectors of size p, the cost is p. Matrix-vector is going to be np. Matrix-matrix is going to be n squared p: you have n vectors against another n vectors, they are size p, so each product costs you p, and you have to do n against n. So it's going to be n squared p. And the last one is much less clear just from looking at it like this. But roughly speaking, the inversion of a matrix costs n cubed in the worst case. This is just to give you a feeling of what the complexities are. Does it make sense? It's a bit quick, but it's simple. If you know it, OK. Otherwise, you just take this on the side when you think about this. So what is the complexity of this? Well, the matrix-- you have to multiply this times this, and this is going to cost you nd or np. You have to build this matrix. This is going to cost you n squared d or n squared p. And then you have to invert. That is going to be p cubed-- or d cubed, because this matrix is d by d. So this is, roughly speaking, the cost.
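A minimal sketch of this recycling idea: the same regularized least squares code run on invented variables. The specific feature map and the lambda-times-n scaling of the regularizer are illustrative assumptions, not the exact conventions of the course code:

```python
import numpy as np

def features(X):
    # Hypothetical feature map from the example above:
    # (x1, x2) -> (x1^2, x2^2, x1*x2). Linear in the new variables,
    # non-linear in the original ones.
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1**2, x2**2, x1 * x2], axis=1)

def ridge_fit(X, y, lam):
    # Regularized least squares: solve (X^T X + lam*n*I) w = X^T y.
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(p), X.T @ y)

# Exactly the same least-squares code, run on the transformed matrix.
X = np.random.randn(100, 2)
y = np.sign(X[:, 0]**2 + X[:, 1]**2 - 1.0)     # a quadratic boundary
w = ridge_fit(features(X), y, lam=0.1)
f_new = features(np.array([[0.5, -0.3]])) @ w  # predict at a new point
```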
So now look at this. This is-- I take this. In this case, p is the new variable, otherwise d. So in this case, I have p cubed, and then I have p squared n. But one question is-- and that's a fact-- what if n is much smaller than p? If n is 10, do I really have to pay quadratically or even cubically in the number of dimensions to solve this problem? Because in some sense, it looks like I'm overshooting things a bit. Because I'm inverting a matrix, yes, but this matrix is really of rank n. It only has n rows that are linearly independent at most. It might be less, but at most it has n. So can I break the complexity of this? For the linear system you have to solve, you just use the table I showed you before. Check the computations. These are the computations you have to do. And one observation here is that you pay really a lot in the dimension, the number of variables or the number of features you invented. And this might be OK when p is smaller than n. But one thing-- this seems wrong intuitively when n is much smaller than p. Because the complexity of the problem, the rank of the problem, is just n. The matrix here has n rows and d or p columns, depending on which representation you take. And so the rank of the whole thing is at most n, if n is much smaller. So now the red dot appears. And what you can do is prove this in one line. So let's see what it does, and then I'll tell you how you can prove it. And it's an exercise. So you see here, if you invert this, then you have to multiply x transpose y times the inverse of this matrix, which is what's written in here. So I claim that this equality holds. Look what it does. I take this x transpose. I move it in front. But if I just did that, you would clearly see that I'm messing around with dimensions. So what you do is that you have to switch the order of the two matrices in the middle. Now from a dimensionality point of view, at least, I can see that the two sides match. How do you prove this? Well, you basically just need to do SVD. You take the singular-value decomposition of the matrix Xn. You plug it in, and you just compute things. And you check that this side of the equality is the same as this side of the equality. So there's nothing more than this, but we're going to skip it. So you just take this as a fact. It's a little trick. Why do I want to do this trick? Because look, now what I say is that my w is going to be x transpose of something. What is this something? So w is going to be X transpose of this thing here. How big is this vector? So how big is this matrix, first of all? So remember, Xn was how big? AUDIENCE: N by d. LORENZO ROSASCO: N by d or p. How big is this? AUDIENCE: N by n. LORENZO ROSASCO: N by n. So how big is this vector? It's n by 1. So now I found out that my w can always be written as x transpose c, where c is just an n-dimensional vector. I rewrote it like this, if you want. So what is the cost of doing this? Well, this was the cost of doing it the first way. But now-- so what is the cost of doing this thing here, above the bracket? Well, if that one was p cubed plus p squared n, this one will be how much? There, the matrix was p by p, and the vector was p by 1. Whereas here, my matrix is n by n, and the vector is n by 1. So you basically have that these two numbers swap.
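A quick numerical check of the identity just described, under the same illustrative conventions as above:

```python
import numpy as np

# (X^T X + lam I)^{-1} X^T y  ==  X^T (X X^T + lam I)^{-1} y.
# Left: a p-by-p solve, ~p^3 + p^2 n. Right: an n-by-n solve,
# ~n^3 + n^2 p. When n << p, the two exponents effectively swap.
n, p, lam = 10, 1000, 0.1
X, y = np.random.randn(n, p), np.random.randn(n)

w_primal = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
c = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
w_dual = X.T @ c                      # w = X^T c, with c n-dimensional

print(np.allclose(w_primal, w_dual))  # True, up to numerical precision
```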
Instead of having this complexity, now you have a complexity which is n cubed. And then you have n squared p, which sounds about right. It's linear in p. You cannot avoid that. You have to look at the data at least once. But then it's polynomial only in the smaller quantity of the two. So in some sense, what you see is that, depending on the size of n, of course, you still have to do this multiplication. But this multiplication is just nd or np. So let's just recap what I'm telling you. This is about the most mathematical fact I've put in. I have a warning here. The first thing is that the question should be clear. Can I break the complexity of this in the case when n is smaller than p or d? This is relevant because the question came up a second ago, which was, should I always explode the dimension of my features? And here what you see is that-- well, at least for now we see that even if you do, you don't pay more than linearly in that. And the way you prove it is, A, you observe this fact-- which, again, I can show you if you're curious, but it's one line. And B, you observe that once you have this, if you just rewrite w, you can write w as x transpose c. And to find the new c-- you basically re-parametrize-- is going to cost you only n cubed plus n squared p. So you do exactly what you wanted to do. And basically, what you see now is that whenever you do least squares, you can check the number of dimensions and the number of points, and always re-parametrize the problem in such a way that the complexity depends linearly on the bigger of the two and polynomially on the smaller of the two. So that's good news. Oh, I wrote it. So this is where we are right now. So if you're lost now, you're going to become completely lost in one second. Because this is what we want to do. We want to introduce kernels in the simplest possible way, which is the following. So look at-- this is what we found out. We discovered, we actually proved, a theorem. And the theorem says that the w's that are output by the least squares algorithm are not any possible d-dimensional vectors; they're always vectors that I can write as a combination of the training set vectors. So xi has length d or p, and I sum them up with these weights. And the w's that are going to come out of least squares are always of that form. They cannot be of any other form. This is called the representer theorem. It's the basic theorem of so-called kernel methods. It shows you that the solution you're looking for can be written as a linear superposition of these terms. If you now write-- this is just the w. Let's just write down f of x. F of x is going to be x transpose w, just the linear function. And now if you write it down, you just get this. By linearity-- so w is written like this. You multiply by x transpose. This is a finite sum. So you can bring x transpose inside the sum. This is what you get. Are you OK? So you have x transpose times a sum. This is the sum of x transpose multiplied by the rest. Why do we care about this? Because basically the idea of kernel methods-- in this very basic form-- is, what if I replace this inner product, which is a way to measure similarity between my inputs, with another similarity?
So instead of mapping each x into a very high dimensional vector and then taking products-- which is itself, as I said, a way of measuring similarity: inner products, distances between vectors-- what if I just define it, instead of by an explicit mapping, by redefining the inner product? So this k here is similar to the one we had in the very first slide. And the idea is: re-parametrize the inner product. Change the inner product, and then reuse everything else. So we need to answer two questions. The first one is, if I give you now a procedure that, whenever you would want to do x transpose x, does something else called k of x comma x prime, how do you change the computations? This is going to be very easy. But also, what are you doing from a modeling perspective? So from the computational point of view, it's very easy, because you see here you always had to build a matrix whose entries were xi transpose xj. So it was always a product of two vectors. And what you do now is the same. So you build the matrix kn, which is not just xn xn transpose, but is a new matrix whose entries are just this. This is just a generalization. If I put the linear kernel, I just get back what we had before. If you put another kernel, you just get something else. So from a computational point of view, you're done for this computation of c. You have to do nothing else. You just replace this matrix with this more general matrix. And if you want to now compute f-- so w you cannot compute anymore, because you never have an x by itself. But if you want to compute f of x, you can, because you just have to plug in. So you know how to compute the c. And you know how to compute this quantity, because you have just to put the kernel there. So the magic here is that you never ever see a point x in isolation. You always have a point x multiplied by another point x. And this allows you to replace vectors by-- in some sense, this is an implicit remapping of the points by just redefining the inner product. So what you should see for now is just that the computation that you've done to compute f of x in the linear case you can redo, if you replace the inner product with this new function. Because, A, you can compute c by just using this new matrix in place of this. And, B, you can compute f of x, because all you need is to replace this inner product with this one and put the right weights, which you know how to compute. From a modeling perspective, what you can check is that, for example, if you choose here this polynomial kernel-- which is just x transpose x prime plus 1, raised to the power d-- if you take, for example, d equal to 2, this is equivalent to the mapping I showed you before, the one with explicit monomials as entries. This is just doing it implicitly. If you're low-dimensional-- if n is very big and the dimension is very small-- the first way might be better. But if the dimension is much bigger than n, this way would be better. But also you can use stuff like this, like a Gaussian kernel. And in that case, you cannot really write down the explicit map, because it turns out that it's infinite-dimensional. The vector you would need to write down the explicit variable version-- the embedding version-- of this is infinite-dimensional. So if you use this, you get a truly non-parametric model. If you think of what is the effect of using this, it's quite clear if you plug them in.
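A minimal sketch of kernel least squares with a Gaussian kernel, following the recipe just described. The lambda-times-n scaling and the exact normalization of the Gaussian are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)); sigma is the width.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit(X, y, lam, sigma):
    # c = (K + lam*n*I)^{-1} y: only the n-by-n kernel matrix is needed.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, c, X_new, sigma):
    # f(x) = sum_i c_i k(x_i, x): a superposition of Gaussians centered
    # on the training points; w itself is never formed.
    return gaussian_kernel(X_new, X_train, sigma) @ c

X = np.random.randn(50, 2)
y = np.sign(X[:, 0]**2 + X[:, 1]**2 - 1.0)
c = fit(X, y, lam=0.01, sigma=0.5)
print(predict(X, c, np.array([[0.0, 0.0]]), sigma=0.5))
```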
What you have is that in one case you have a superposition of linear stuff, a superposition of polynomial stuff, or a superposition of Gaussians. So same game as before. Same dataset; we train. I take kernel least squares-- which is what I just showed you-- compute the c by inverting that matrix, use the Gaussian kernel-- the last of the examples-- and then compute f of x. And then we just want to plot it. So this is the solution. The algorithm depends on two parameters. What are they? AUDIENCE: Lambda. LORENZO ROSASCO: Lambda, the regularization parameter, the one that appeared already in the linear case-- and then-- AUDIENCE: Whatever parameter you've chosen [INAUDIBLE]. LORENZO ROSASCO: Exactly. Whatever parameter there is in your kernel. In this case, it's the Gaussian, so it will depend on this width. Now suppose that I take gamma-- sorry, sigma-- big. I don't know what big is. I just do it by hand here, so we see what happens. If you take sigma big, you start to get something very simple. And if I make it a bit bigger, it will probably start to look very much like a linear solution. If I make it small-- and again, I don't know what small is, so I'm just going to try. It's very small. You start to see what's going on. And if you go in between, you really start to see that you can circle out individual examples. So let's think a second about what we're doing here. It is going to be, again, another hand-waving explanation. Look at this equation. Let's read out what it says. In the case of Gaussians, it says, I take a Gaussian-- just a usual Gaussian-- I center it over a training set point, then by choosing the ci I'm choosing whether it is going to be a peak or a valley. It can go up, or it can go down, in the two-dimensional case. And by choosing the width, I decide how large it's going to be. If I do f of x, then I sum up all this stuff, which basically means that I'm going to have these peaks and these valleys and I connect them in some way. Now, you remember before that I pointed out that in the two-dimensional case what we draw is not f of x, but f of x equal to zero. So what you should really think is that f of x in this case is no longer a hyperplane, but this surface. It goes up, and it goes down. And it goes up, and it goes down. So in the blue part, it goes up, and in the orange part, it goes down into a valley. So what you're doing right now is taking all these small Gaussians, putting them in around the blue and orange points, and then connecting their peaks. And by making them small, you allow them to create a very complicated surface. So what did we put before? So they're small. They're getting smaller, and smaller, and smaller. And you see-- there is a point here, so they circle it out here by putting basically a Gaussian right there for that individual point. Imagine what happens if my points-- I have two points here and two points here-- and now I put a huge Gaussian around each point. Basically, the peaks are almost going to touch each other. So what you can imagine is that you get something where basically the decision boundary has to look like a line, because you get something which is so smooth. It doesn't go up and down all the time. And that's what we saw before, right? And again, I don't remember what I put here. So this is starting to look good. So you really see that something nice happens. Maybe if I put-- five is what we put before, maybe.
So basically what you're doing is computing the center of mass of one class, in the sense of the Gaussians. So you're doing a Gaussian mixture on one side, a Gaussian mixture on the other side; you're basically computing the centers of mass, and then you just find the line that separates the centers of mass. That's what you're doing here, and you just find this one big line here. So again, we're not playing around with the number of points. We're not playing around with lambda, because this is basically what we already saw before. All I want to show you right now is the effect of the kernel. And here I'm using the Gaussian kernel, but-- let's see-- you can also use the linear kernel. This is the linear kernel. This is using linear least squares. If you now use the Gaussian kernel, you give yourself the extra possibility. Essentially, what you see is that if you make the Gaussian very big, in some sense you get back the linear kernel. But if you make the Gaussian very small, you allow yourself this extra complexity. And so that's what we gain with this little trick that we did of replacing the inner product with this new kernel. We went from the simple linear estimators to something which is-- it's the same thing, if you want, that we did by building explicitly these monomials of higher power, but here you're doing it implicitly. And it turns out that there is actually no explicit version that you can write down. You can do it mathematically, but the feature representation, the variable representation, of this kernel would be an infinitely long vector. The space of functions that is built as a combination of Gaussians is not finite-dimensional. For polynomials, you can check that the dimension of the space of functions is basically polynomial in d. If I ask you how big is the function space that you can build using this-- well, this is easy. It's just d-dimensional. With this, well, it's a bit more complicated, but you can compute it. For this, you cannot really compute it, because it's infinite. So it in some sense is a non-parametric model. What does it mean? Of course, you still have a finite number of parameters in practice. And that's the good news. But there is no fixed number of parameters a priori. If I give you a hundred points, you get a hundred parameters. If I give you 2 million points, you get 2 million parameters. If I give you 5 million points, you get 5 million parameters. But you never hit a boundary of complexity, because this is in some sense an infinite-dimensional parameter space. So of course, I see that some of the parts that I'm explaining here are complicated, especially if this is the first time you see them. But the take-home message should be essentially: from least squares, I can understand what's going on from a numerical point of view and bridge numerics and statistics. Then by just simple linear algebra, I can understand the complexity-- how I can get complexity which is linear in the number of dimensions or the number of points. And then by following up, I can do a little magic and go from the linear model to something non-linear. The deep reasons why this is possible are complicated. But as a take-home message: A, the computations you can check easily-- they remain the same. B, you can check that what you're doing now is allowing yourself to take a more complicated model-- a combination of the kernel functions. And then even just by playing with these simple demos, you can understand a bit what the effect is.
And that's what you intuitively would expect. So I hope that it gets you close enough to have some awareness when you use this. And of course, when you abstract from the specifics of this algorithm, you have an algorithm with one or two parameters-- lambda and sigma. And as soon as you ask me how you choose those, well, we go back to the first part of the lecture-- bias-variance tradeoffs, cross-validation, and so on and so forth. So you just have to put them together. There is a lot of stuff I've not talked about. And it's a step away from what we discussed, so you've just seen the take-home-message part, but we could talk about reproducing kernel Hilbert spaces, the functional analysis behind everything I said. We can talk about Gaussian processes, which are basically the probabilistic version of what I just showed you now. Then we could also see the connection with a bunch of math like integral equations and PDEs. There is a whole connection with sampling theory a la Shannon, inverse problems, and so on. And there is a bunch of extensions which are almost for free. You change the loss function. You can make it logistic, and you get kernel logistic regression. You can take SVM, and you get kernel SVM. Then you can also take more complicated output spaces. And you can do multiclass, multivariate regression, multilabel, and a bunch of different things. And these are really a step away. These are minor modifications of the code. So the good thing about this is that with really, really, really minor effort, you can actually solve a bunch of problems. I'm not saying that it's going to be the best algorithm ever, but definitely it gets you quite far. So again, we spent quite a bit of time thinking about bias-variance and what it means, and used least squares to just basically warm up a bit with this setting. And then in the last hour or so, we discussed least squares because it allows you to just think in terms of linear algebra, which is something that-- one way or another-- you've seen in your life. And then from there, you can go from linear to non-linear. And that's a bit of magic, but the practical part-- how you use this, both numerically and from a modeling perspective, to go from simple models to complex models and vice versa-- is the part that I hope you keep in your mind. For now, our concern has just been to make predictions. If you hear classification, you want to have good classification. If you hear regression, you want to do good regression. But we didn't talk about understanding how you do good regression. So a typical example is the example in biology. This is, perhaps, a bit old. This is micro-arrays. But the idea is that the dataset you have is a bunch of patients. For each patient, you have measurements, and the measurements correspond to some gene expression level or some other biological process. The patients are divided in two groups, say, disease type A and disease type B. And based on a good prediction of whether a patient is disease type A or B, you can change the way you cure or address the disease. So of course, you want to have a good prediction. You want to be able-- when a new patient arrives-- to say whether this is type A or type B.
But oftentimes, what you want to do is use this not as the final tool, because-- unless deep learning can solve everything-- you might go back and study the biological process a bit more, to understand a bit more. So you use this more as a statistical tool, a measurement-- like the way you can use a microscope-- to look into your data and get information. And in that sense, sometimes it's interesting, instead of just saying whether this patient is more likely to be disease type A or B, to go in and say, ah, but when you make the prediction, what are the processes that matter for this prediction? Is it gene number 33 or 34, so that I can go in and say, oh, these genes make sense, because they're in fact related to these other processes, which are known to be involved in this disease? Doing that, you use this just as a little tool; then you use other ones to get a picture, and then you put them together. And then it's mostly on the doctor, or the clinician, or the biostatistician to try to develop better understanding. But you do use these as tools to understand and look into the data. And in that perspective, the word interpretability plays a big role. And here by interpretability I mean that I not only want to make predictions, but I want to know how I make predictions-- to come afterwards with an explanation of how I picked the information that was contained in my data. So far it's hard to see how to do this with the tools we have. So this is basically the field of variable selection. And in this basic form, the setting where we do understand what's going on is the setting of linear models. So in this setting, basically, I just rewrite what we've seen before. You have x, a vector, and you can think of it, for example, as a patient. And the xj are measurements that you have done describing this patient. When you do a linear model, you basically have that by putting a weight on each variable, you're putting a weight on each measurement. If a measurement doesn't matter, you might put a zero there, and it will disappear from the sum. If the measurement matters a lot, then here you might get a big weight. So one way to try to get a feeling for which measurements are important and which are not is to estimate a linear model where you get a w that, ideally, has many zeros. You don't want to fumble with what's small and what's not. If you do least squares the way I showed you before, you would get a w. Then you can check that most of the entries will not be zero-- in fact, none of them will be zero, in general. And so now you have to decide what's small and what's big, and that might not be easy. Oops, what happened here? So, funny enough, this is the name I found-- I don't remember the name of the book. It's the name that was used to describe the process of variable selection, which is a much harder problem, because you don't just want to make predictions; you want to go back and check how you make the prediction. And so it's very easy to start to overfit and to try to squeeze the data until you get some information. So it's good to have a somewhat clean procedure to extract the important variables. Again, you can think of this as-- basically, I want to build an f, but I also want to come up with a list-- or, even better, weights-- that tell me which variables are important.
And often this will be just a list which is much smaller than d, so that I can go back and say, oh, measurements 33, 34, and 50-- what are they? I can go in and look at them. Notice that there is also a computational reason why this would be interesting. Because of course, if d here is 50,000-- and what I see is that, in fact, I can throw away most of these measurements and just keep 10-- then it means that I can hopefully reduce the complexity of my computation, but also the storage of the data, for example. If I have to send you the dataset after I've done this thing, I just have to send you this teeny tiny matrix. So interpretability is one reason, but the computational aspect could be another one. Another reason that I don't want to talk about too much is-- remember that we had this idea where we said we could augment the complexity of a model by inventing features, and the question was, do I always have to pay the price of making it big? I said, no, not always, because I was thinking of kernels. But this, if you want, gives you another potential way around it, in which, first of all, you explode the number of features-- you take many, many, many-- and then you use this as a preliminary step to shrink them down to a more reasonable number. Because it's quite likely that among these many, many measurements, some of them would just be very correlated, or uninteresting, and so on and so forth. So this dimensionality-reduction, computational, or interpretable-model perspective is what stands behind the desire to do something like this. So let's say one more thing, and then we'll stop. Suppose that you have infinite computational power. So the computations are not your concern, and you want to solve this problem. How would you do it? Suppose that you have the code for least squares, and you can run it as many times as you want. How would you go and try to estimate which variables are more important? AUDIENCE: [INAUDIBLE] possibility of computations. LORENZO ROSASCO: That's one possibility. What you do is that you start and look at all single variables. And you solve least squares for all single variables. Then you take all pairs of variables. Then you take all triplets of variables. And then you find which one is best. From a statistical point of view there is absolutely nothing wrong with this, because you're trying everything. And at some point, you find what's best. The problem is that it's combinatorial. And you see that as soon as the dimension is more than a very few, it's huge. So it's exponential. So it turns out that doing what you just told me to do, which is what I asked you to tell me to do-- this brute force approach-- is equivalent to doing something like this, which is again a regularization approach. Here I put what is called the zero norm. The zero norm is actually not a norm; it is just a functional. It does the following thing. If I give you a vector, you have to return the number of components different from zero-- only that. So you go inside, look at each entry, and count how many are different from zero. This is absolutely not convex. And this is the reason why this problem becomes computationally infeasible. So perhaps we can stop here.
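A sketch of the brute-force procedure just described, to make the combinatorial cost concrete. The variable names and the small sizes are just for illustration:

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, k_max):
    # Try every subset of at most k_max variables, fit ordinary least
    # squares on each, and keep the best fit. Statistically sound, but
    # the number of subsets is combinatorial in d -- the infeasibility
    # that the zero-norm penalty formalizes.
    best = (np.inf, None)
    for k in range(1, k_max + 1):
        for S in combinations(range(X.shape[1]), k):
            w, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
            err = np.linalg.norm(X[:, S] @ w - y) ** 2
            if err < best[0]:
                best = (err, S)
    return best

X = np.random.randn(40, 8)
y = X[:, 2] - 2 * X[:, 5] + 0.1 * np.random.randn(40)
print(best_subset(X, y, k_max=2))   # should select variables (2, 5)
```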
And what I want to show you next is essentially this: if you know that in some sense this is what you would like to do-- if you could do it computationally, but you cannot-- how can you find an approximate version of it that you can compute in practice? And we're going to discuss two ways of doing it. One is greedy methods, and one is convex relaxations. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_41_Eero_Simoncelli_Probing_Sensory_Representations.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. EERO SIMONCELLI: I'm going to talk about a bunch of work that we've been doing over the last-- it's about four years, on trying to understand basically, that terra incognita that Gabrielle just mentioned that lies between V1 and IT. I brought this back with me from the Dolomites, where I was last week with my family. And when you sit and you look at it and that image comes into your eyes and gets processed by your brain, there's a lot of information there. It's a lot of pixels. And the question that I'm going to start with is, where does it go? You have all this information. It's flooding into your eyes all day every day for your entire lifetime. Obviously, you don't store it all there. Your head doesn't inflate until it gets to the point of explosion. So where does it go? And as a theorist's diagram of the brain-- a square with rounded corners. In comes the information, and there are really only three options. You either act on the information, do something with it, sensory motor loops, or for complex organisms especially, a fair amount of it you might actually try to remember. You might hold on to, and we heard about that earlier today. But this really only accounts for, I think, a fairly small portion of what goes on, because a lot of it you throw away. You have to. You really don't have a choice. You have to summarize it, squeeze it down to the relevant bits that you're going to hold on to or act on, and the rest of it you just toss. So the question is, how can we exploit that basic fact? It's an obvious fact. It has to be true. How do we exploit that to understand something about what the system does and what it doesn't do? And there's a long history to this, and in fact, since I come from vision and most of my work is centered on vision and auditory system to some extent, the vision scientists were the first to recognize the importance of this. And it really is a foundational chunk of work in the beginning of the field that set in motion a lot of things that we currently know about vision. And so I'm going to just-- for those of you that don't know that story, I'm going to give a very, very brief reminder of what that is, because I think it's an absolutely fantastic scientific story. And then from there, I'll talk about texture. So two examples-- I'm going to quickly say something about trichomatic color vision, and then I'm going to talk about texture, and then we'll go into V2 and metamers and other things. So trichromacy-- Newton figured out that light comes in wavelengths. He split light with a prism. There's the picture drawing of him splitting light coming in through a hole in the wall. He split it with a prism into wavelengths, saw a rainbow, did a lot of experiments to recognize that you could take that rainbow and reassemble it into white light, but you couldn't further subdivide it, and basically gave us the foundations for thinking about light and spectral distributions. In the 1800s, a group of people that were combined physicists, mathematicians, and psychologists all rolled into one-- and there were quite a few of them. Helmholtz was one of them. 
Grassmann was one of the most important ones. I'll mention him again in a moment-- figured out something peculiar about human vision-- that even though there was this huge array of colors in the wavelengths in the spectrum, that humans actually had these deficits that we were not able to actually sense or discriminate things that it seemed like we should be able to. And it boiled down in the end, after a lot of study and discussion and theorizing, to this experiment, which is known as a bipartite color matching experiment. So on the left side of this little display, here's a gray annulus. In the middle is a circle. On the left side of this is light coming from some source. It has some spectral distribution illustrated here. This is all a cartoon, but just to give you the idea of how this works. On the right side are three primary lights. And the job of the observer in this experiment is to adjust, let's say, sliders or knobs in order to change the intensity of these three lights to make the light on the right side of this split circle look the same as the light on the left side. And it turns out that-- so just to be clear, so these three things have their own spectral distributions. They might look like that, for example. And when the observer comes up with the knob settings, they're going to produce something that might look like that. This is just a sum of the three copies of these spectra weighted by the knob settings. So this is a linear combination of three spectral distributions. And intentionally, I've drawn this so that they don't look the same, because that's the whole point of the experiment. It turns out that humans-- you can do this experiment, and that any human with normal color vision can make these settings so that these two things are absolutely indistinguishable. They look identical, and yet they knew even in the mid-1800s that these two things have very different spectra, and I've drawn it that way intentionally. So the point is that humans are obviously-- even though we can see all the bands of the spectrum, we can see all the colors-- we actually have this deficiency in terms of noticing the difference between these two things. So how can that be? And I think and hope that most of you know the answer to that question, because you're using devices every day that are exploiting this fact. But the bottom line is that in the 1850s Grassmann laid down a set of rules. Grassmann was a mathematician. He actually developed a large chunk of linear algebra in order to explain and understand and manipulate these ideas. And he pointed out that-- he actually had a set of laws that he laid out, and I won't drag you through all of that. But in the end, what all of those laws amounted to, taking into account all of the evidence that he had, he laid down these laws. And what it amounted to is that the human being, when setting these knobs, was acting like a linear system. The human was taking an input, which is a wavelength spectrum, and adjusting the knobs. And the settings of the knobs were a linear function of the wavelength spectrum that was coming into the eye. And it's a remarkable and amazing fact, if you know that the brain is a highly non-linear device, how is it that a human can act like a linear device? And the answer is that basically the human, taking this thing in and making the knob settings, has a front end that's linear and is doing a projection of the wavelength spectrum onto basically three axes. And those three measurements-- that process is linear. 
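A toy numerical sketch of that linear front end. The cone curves below are invented bumps, not real absorption spectra, and physical realizability of the lights (non-negative spectra) is ignored:

```python
import numpy as np

# The eye keeps only r = A @ s: three numbers per wavelength spectrum s,
# where the rows of A play the role of the three absorption curves.
wl = np.linspace(400, 700, 31)        # wavelengths, in nm

def bump(center, width):
    # Toy bell-shaped absorption curve (made up, not a real cone spectrum).
    return np.exp(-((wl - center) / width) ** 2)

A = np.stack([bump(560, 50), bump(530, 50), bump(430, 40)])  # "L, M, S"

s = bump(500, 80)                     # some test light
r = A @ s                             # all the eye retains about s

# Any direction v in the null space of A is invisible: s and s + v are
# physically different spectra with identical responses -- metamers.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                            # a direction the three sensors miss
print(np.allclose(A @ (s + 0.5 * v), r))   # True: a metamer of s
```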
Everything that happens after that, which is complicated and non-linear and involves noise and decisions and all kinds of motor control and everything else-- as long as the information in those original three measurements is not lost, then the human is going to basically act like a linear system, in terms of doing this matching. So Grassmann realized this. The theory that he set out and that others then elaborated on perfectly explained all the data for normal human subjects, that lights that appear identical but had physically distinct wavelength spectra could be created, and they called these metamers-- two things that are physically different but look the same. This was codified. It took many, many decades. Things moved slower back then. We don't have these rapid, Google-style overturns of scientific establishment within a year or two. It took until the 1930s to actually build this into a set of standards that were used in the engineering community to generate and create color film, color devices, eventually color video, color monitors, color projectors, color printers-- everything else that we use. And these specifications were to allow the reproduction of colors so that they looked the way they were supposed to look. So you record color with a camera. It turns out that your camera is also only recording three color channels, just like your eye, and then you have to be able to re-render that on another device. And these standards specify how to do that. The surprising thing in the whole story is-- so this is 1850s. Well, we go back to Newton. It was a 1600s. Then in the 1850s, when we're getting this beautiful theory that's very, very precise, this gets built into engineering standards. And it's not until 1987 that it actually gets verified in a mechanistic sense. And I like to tell this story, because I think it's a reminder that aiming always for the reductionist solution is not necessarily the right thing to do. This is a very beautiful piece of science that was done at Stanford, actually, by Baylor, Nunn, and Schnapf. They took cones from a macaque-- I think originally they worked with turtles, but then macaque, sucked them up into a glass micro-pipette, shined monochromatic lights through them, and measured their absorption properties. And they found these three functions for three different types of cones and verified, basically, that these three absorption spectra perfectly explained the data from the 1800s. So this is an amazing thing, if you can have a theory and a set of behavioral experiments that make very precise and clear predictions that then get verified and tested in a mechanistic sense more than 100 years later, and they come out basically perfect. So it's an astounding, astounding sequence, in my view. So what we wanted to do is to set out trying to do the same kind of thing for pattern vision. And we're going to do that by thinking about texture. So what's a texture? A texture is an image that's homogeneous with repeated structures. So each of these are examples of texture. That's a piece of woven basket. This is tree bark. That's a herringbone pattern, and these are some sort of nuts or stones. And each of these has the property that there's lots of repeated elements with some variability. Sometimes there's more variability, sometimes there's less variability, but there's usually at least some. And of course, these things are ubiquitous. 
When I started working on this, which is about 15 years ago-- maybe a little bit more, 16 years ago-- I started photographing things that I saw as I walked around, and textures are everywhere. Most things are textured. The world is not made up of plain-- of Mondrians. It's not made up of things that are plain, blank colors separated by sharp edges. It's made up of textures, and often the boundaries between things are boundaries between textured objects, like the seats in the auditorium, for example. So how is it that we can go about thinking about this in terms of metamers and representation in, let's say, the visual system? And the idea really comes from Julesz, who proposed in 1962 a famous theory that he later abandoned. The theory goes like this. First of all, he said, the thing that we're going to do to try to describe textures is we're going to use statistics. And why statistics? Because these are supposed to be variable, so I need some stochasticity. But I also want something that's homogeneous, so I'm going to average or measure things averaged across the entire image. That's the statistical side of it. And he proposed that, well, if I start by measuring just pixel statistics-- say single pixel statistics, pairwise pixel statistics, maybe triples of pixels-- eventually I should reach a point where I've made enough measurements to sufficiently constrain the texture, such that any two textures that have the same statistics up to that order-- whatever that order is-- should look the same to a human being. And he didn't talk about this in physiological terms, but I think in the background is the notion that humans are actually measuring those statistics, and if you can get them right-- if you can make two images have the same statistics, and that's the only thing that humans are measuring, then those two images will look the same. So Julesz goes ahead with this, and eventually constructs by hand, because he did everything with binary patterns constructed by hand-- he constructs these two examples that are identical. He first falsifies the theory at n equals 2, and then he tries to do third-order statistics. And he comes up with these two examples-- counter-examples to the theory. These are matched in terms of their third-order statistics. It's not easy to see that or realize that, but it's true. If you take triples of pixels, and you take the product of those three, and you average that over the image, these two things are identical, but they look very different. And if you draw samples of each of these, it's very easy to label them as, let's say, A or B, into these two categories. Here's another example that came out a bit later by Jack Yellott. These two things also are matched up to third order. So Julesz decides that the theory is a failure, and he abandons it. And he begins a new theory, which is the theory of textons, which is a much less precisely-specified theory that has to do with laying down-- basically, it's a generative model, if you like. Everybody's fond of generative models these days, except for me. And he comes up with a generative model-- ah, and maybe Tommy. He comes up with the generative model, which is to lay down many copies of a small, repeating unit, which he called the texton. And so he came up with this method of generating texture images, which he went to town on, and he made lots of examples.
The problem is that that wasn't a description of how to analyze texture images, or how a human would analyze texture images, and so it became very difficult to bridge that gap. And I think, in my view, that the theory really never succeeded, and he should have stuck with the initial theory. Anyway, but that gave us an opportunity. So we went back many years later-- this is around 1999. I had a fantastic post-doc, Javier Portilla, who came from Spain, and we started thinking about texture and started putting together a model that was Juleszian in spirit, but a little bit different, because we wanted to build in a little bit of what we knew about physiology. Now, Julesz knew about physiology, because Hubel and Wiesel were doing all those experiments in V1 in the late '50s and the early '60s, but he really didn't incorporate that into his thinking. So what we did is build a very simple model. It's just dumb, stupid, simple, in which we took this description of V1 neurons. So these are oriented receptive fields. The idea is that this is a description of a neuron that takes a weighted sum of the pixels with positive and negative lobes. And it has a preferred orientation, because the positive and negative lobes have a particular oriented structure. And then it takes the output of that weighted sum and runs it through some rectifying, nonlinear function. And here's another, and this is a classic thing that Hubel and Wiesel described for a simple cell. And here's another one, which is a complex cell. And this one basically does two of these and combines them. I'm trying to avoid the details here, because they're not critical for understanding what I'm going to show you. So then we took those things and we said, well, what if we measure joint statistics of those things over the image? So we're going to take not just these filters, but of course, we're going to do a convolution. That is, we're going to compute the response of this weighted sum at different positions throughout the image. We're going to rectify all of them. Now we're going to take joint statistics. What do I mean by that? Just correlations, basically-- second-order statistics of the simple cells, of the complex cells, and the cross statistics between them. And these statistics are between different orientations and different positions and also different sizes. And given that large set of numbers-- and typically for the images that we worked with back then, these were on the order of 700 numbers. So we have an image over here, which is, say, tens of thousands or hundreds of thousands of pixels, being transformed through this box into a set of, let's say, 700 numbers. So 700 summary statistics to describe this pattern. And then the question is, how do we test the model? And for testing the model-- most people, when they test models like this, they do classification. This should sound very familiar these days, with the deep network world. They take a model, and then they run it on lots of examples. And they ask, well, do the examples that are supposed to be the same kind of thing, like the same tree bark-- do they come out with statistics that are similar or almost the same as each other? And can I classify or group them or cluster them and get the right answer when trying to identify the different examples? We decided that that was a very-- at least at the time, a very weak test of this model, because this is a high-dimensional space, and we had only, let's say, on the order of hundreds of example textures.
And hundreds of-- that sounds like a lot of textures-- a couple hundred textures, but if the outputs live in a 700-dimensional space, then it's basically nothing. We're not filling that space. And for those of you that are statistically oriented, you know that there's this thing called the curse of dimensionality. The number of data samples that you need to fill up a space goes up exponentially with the number of dimensions. So this was really bad news, and we decided that it was going to be a disaster to just do classification-- that pretty much any set of measurements would work for classification. So we were looking for a more demanding test of the model. And for that, we turned to synthesis. So the idea is like this. You take this image. You run it through the model. You get your responses. Now we're going to take a patch of white noise. We're going to run it through the same model, and then we're going to lean on the noise, push on it-- push on all the pixels in that noise image until we get the same outputs. So this is sometimes called synthesis by analysis. This is not a generative model, but we're using it like a generative model. We're going to draw samples of images that have the same statistics by starting with white noise and just pounding on it until it looks right. And pounding on it means, for those of you that want to know, measuring the gradients of the deviation away from the desired output and just moving in the direction of the gradient. I'm giving you the quick version of this. A little bit more abstractly, we can think of it this way. There's a space of all possible images. Here's the original image. It's a point in this space. We compute the responses of the model, which is a lower dimensional space-- a smaller space. That's this. Because this is a many-to-one mapping and it's continuous, there's actually a manifold-- a continuous collection of images over here, all of which have the same exact model responses. And what we're trying to do is grab one of these. We want to draw a sample from that manifold. If the theory is right-- if this model is a good representation of what humans see and capture when they look at textures, then all of these things should look the same. That's the hypothesis. And the way we do it, again, is to start with a noise seed-- just an image filled with noise. We project it onto the manifold. We push it onto this point. We can test that, because we can, of course, measure the same things on this image and make sure that they're the same as this image, and that's our synthesized image. So that's an abstract picture of what I told you on the previous slide. And then finally, the scientific or experimental logic is to test this by showing it to a human observer. So we have the original image, and then we compute the model responses. We generate a new image, and we ask the human, do these look the same? And if the model captures the same properties as the visual system, then two images with identical model responses should appear identical to a human. So that's the logic. And any strong failure of this indicates that the model is insufficient to capture what is important about these images. So it works, or I wouldn't be telling you about it.
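Here is a deliberately tiny caricature of that synthesis-by-analysis loop. The "statistics" are just the image mean and variance, rather than the roughly 700 filter statistics of the actual model, so this only illustrates the gradient-projection idea:

```python
import numpy as np

def stats(img):
    # Stand-in summary statistics: just mean and variance. The actual
    # model measures joint statistics of rectified oriented filters.
    return np.array([img.mean(), img.var()])

def synthesize(target, steps=2000, lr=0.01, seed=0):
    x = np.random.default_rng(seed).standard_normal(target.shape)  # noise seed
    t = stats(target)
    for _ in range(steps):
        s = stats(x)
        # Gradient of ||stats(x) - t||^2 with respect to the pixels,
        # rescaled by the pixel count so a fixed step size works; a
        # full model would use automatic differentiation here.
        g = 2.0 * (s[0] - t[0]) + 4.0 * (s[1] - t[1]) * (x - x.mean())
        x -= lr * g          # push the noise toward the statistics manifold
    return x

target = 3.0 * np.random.default_rng(1).standard_normal((32, 32)) + 1.0
x = synthesize(target)
print(stats(target), stats(x))   # the two statistic vectors now agree
```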
Here are just a few examples. There are hundreds more on the web page that describes this work. On the top are original photographs-- lizard skin, plaster of some sort, beans. On the bottom are synthesized versions of these. The lizard skin works really well. The plaster works quite well. The beans a little less so. And whether it works well or not depends on the viewing condition. So if you flash these up quickly, people might be convinced that they all look really great. If you allow them to inspect them carefully, they can start to see deviations or funny little artifacts. So it's a partial success. And I should point out that it also provides a pretty convincing success on Julesz' counter-examples. So these are examples. This is synthesized from that, and this is synthesized from that, and they're easily classifiable. And there are fun things you can do with this. You can fill in regions around images. So if you take this little chunk of text here and you measure the statistics, and you say, fill in the stuff around it with something which has the same statistics, but try to do a careful job of matching up at the boundaries, you can create things like this. So you can read the words in the center, but the outside looks like gibberish. Each one of these was created in the same way. So the center of each of these is the original image, and what's around it is synthesized. So it works reasonably well. You can also do fun things like this. So these are examples where-- I told you we started from white noise and then pushed it onto the manifold, but we can actually start from any image. So if we start from these images-- these are three of my collaborators-- two of my students and my collaborator Tony Movshon. If we start with those as starting-point images, and we use these textures for each of them, we arrive at these images, where you can still see some of the global structure of the face. Because the model is a homogeneous model, it doesn't impose anything on global structure. And so if you seed it with something that has particular global structure or arrangement, it will inherit some of that. It'll hold onto it. Anyway, this is just for fun. Let's get back to science. So now, here's an example of Richard Feynman. This is Richard Feynman after he's gone through the blender. You can see pieces of skin-like things and folds and flaps, but it's all disorganized. Again, it's a homogeneous model. It doesn't know anything about the global organization of this photograph. But what we want to know is-- do we have a model that's just a model for the perception of homogeneous textures, or can we actually push it a little bit and make it, first of all, a little more physiological, and second of all, maybe a little bit more relevant for everyday vision? For me, standing here and looking at this scene, how do I go about describing something like this that's going on when I'm looking at a normal scene? So let's go through thinking about how to do this. So I'm going to jump right to this diagram of the brain again. So V1 is in the back of the brain. The information that comes into your eyes goes through the retina, the LGN, back to V1. And then it splits into these two branches, the dorsal and the ventral stream. The ventral stream is usually associated with spatial form and recognition and memory. So I'm going to think about the ventral stream, and we're going to try to understand what this model might have to say about processing in the ventral stream. I'm going to rely on just a few simple assumptions. First, that each of these areas has neurons, and that they respond to small regions of the visual input, known as receptive fields. Most of you know that.
In each visual area, I'm going to assume that those receptive fields are covering, blanketing, the entire visual field. So there are no dead spots, no spots that are left out. Everything is covered nicely. And in fact, we know that this is true, for example, starting in the retina. So this is a cartoon diagram to illustrate the inhomogeneity that's found in the retina. The receptive field sizes in the retina grow with eccentricity. And it turns out that that starts in the retina, but it's true, actually, all the way through the visual system, and throughout the ventral stream in particular. And this diagram is showing that these little circles are about 10 times the size of your midget ganglion cell receptive fields in your retina. So if you fixate right here in the center of this, these things are about 10 times the size of your receptive fields. And that's long been thought to be the primary driver of your limits on acuity, in terms of peripheral vision. So in particular, if you take this eye chart-- this was done by Richard Anstis back in the '70s-- and you lay it out in this fashion, these things are about 10 times the threshold for visibility and recognition of these letters. And so you can say that the stroke widths of the letters are about matched to the size of these ganglion cells, and it works, at least qualitatively-- things are scaling in the right way, in terms of acuity, and in terms of the size that the letters need to be for you to recognize them. And you can make pictures like this. This is after Bill Geisler, who showed that if you foveate-- if you fixate here-- in fact, you can't see the details of the stuff that's far from your fixation point, and if you blur it, people don't notice. You can actually add high-frequency noise to it, alternatively, and people won't notice that either. Because those receptive fields are getting larger and larger, and you're basically blurring out the information that would allow you to distinguish, let's say, these two things. When you look right at it you can see it, but if you keep your eye fixated here, you won't notice it. So let's work off of those ideas-- the idea of these receptive fields that are getting larger with eccentricity, that are covering the entire visual field. And let's notice the following. This is physiological data from several papers that were assembled by Jeremy Freeman, who was a grad student in my lab. And here you can see the eccentricity of the receptive field centers versus the size of the receptive fields. And you can see that in the retina-- I already showed you on the previous slide that it grows with eccentricity, but it's actually very slow compared to what happens in the cortex. In V1, the receptive fields grow at a pretty good clip. In V2, they grow about twice as fast as that, and in V4 twice as fast again. Another way of saying this: at any given receptive field location relative to the fovea-- let's say 15 degrees-- the receptive fields in V1 are of a given size: the diameter is on the order of 0.2 to 0.25 times the eccentricity. The receptive fields in V2 are twice that size, so about 0.45 times the eccentricity, and the receptive fields in V4 are twice that again. In cartoon form, it looks something like this. So here's V1. Lots of cells and small-ish receptive fields growing with eccentricity. Here's V2. They're bigger. They grow faster. Here's V4.
And by the time you get to IT-- Jim DiCarlo was here a bunch of days ago, and he probably told you this-- almost every IT cell includes the fovea as part of its receptive field. They're very large, and they often cover half the visual field. So now we have to figure out what to put inside of these little circles in order to make a model, and I'm going to basically combine-- smash together the texture model that I told you about, which was a global homogeneous model, with this receptive field model. I'm going to basically stick a little texture model in each of these little circles. That's the concept. So how do we do that? Well, we're going to go back to Hubel and Wiesel. Hubel and Wiesel were the ones that said you make V1 receptive field simple cells out of LGN cells by just taking a bunch of LGN cells that line up. Here they are-- center surround receptive fields from the LGN, which are coming off of the center surround architecture of the retina. You line them up, you add them together, and that gives you an oriented receptive field, like the ones that I showed you earlier. And in more of a computational diagram, you might draw it like this. So here's an array of LGN inputs coming in. We're going to take a weighted sum of those. Black is negative. White is positive. So we add up these three guys, we subtract the two guys on either side, and then we run that through a rectifying nonlinearity. That's a simple cell. Hubel and Wiesel also pointed out that you could maybe create-- or suggested that you create complex cells by combining simple cells. This is the diagram from their paper in 1962. And so we can diagram that like this. Here's basically three of these simple cells. They're displaced in position, but they have the same orientation. We half-wave rectify all of them, add them together, and that gives us a complex cell. So it's interesting to note that the hook here is going to be that this is an average of these. An average is a statistic. It's a local average. So we're going to compute local averages, and we're going to call those statistics-- i.e. statistics, as used in the texture model. So let's do that. So here's the V2 receptive field. Open that up. Inside of that is a bunch of V1 cells, here all shown at the same orientation. In reality, they would be all different orientations and different sizes. And now we're going to compute those joint statistics, just like I did in the texture model, and that's going to give us our responses. We're going to have to do that for each one of these receptive fields. So there's a lot of these. It's not 700 numbers anymore. It's reduced-- there are details here-- but there's a lot of these receptive fields, so it's quite a lot of parameters. And these local correlations that I told you we were going to compute here can be re-expressed, actually, in a form that looks just like the simple and complex cell calculations that I showed you for V1. So in fact, if you take these V1 cells, and you take weighted sums of these guys, and you half-wave rectify them and add them, you get something that's essentially equivalent to the texture model that I told you about. So that's pretty cool, because it means that the calculations that are taking us from the LGN input to V1 outputs have a form, a structure which is then repeated when we get to V2. We do the same kind of calculations-- linear filters, rectification, pooling or averaging. And so that, of course, has become ubiquitous with the advent of all the deep network stuff.
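Here is a minimal sketch of that canonical Hubel and Wiesel style computation-- weighted linear sum, half-wave rectification, then a local average. The filter weights and input sizes below are toy placeholders, not fits to any data.

```python
import numpy as np

def half_wave_rectify(x):
    """Half-wave rectification: keep the positive part, zero the rest."""
    return np.maximum(x, 0.0)

def simple_cell(lgn_inputs, weights):
    """Weighted sum of LGN-like inputs followed by rectification."""
    return half_wave_rectify(np.dot(weights, lgn_inputs))

def complex_cell(lgn_inputs, weight_bank):
    """Pool (average) several rectified simple cells that share an
    orientation but differ in position. The average is the 'local
    statistic' referred to in the talk."""
    responses = [simple_cell(lgn_inputs, w) for w in weight_bank]
    return np.mean(responses)

# Toy example: 5 LGN inputs and 3 position-shifted simple cells.
rng = np.random.default_rng(0)
lgn = rng.normal(size=5)
bank = [np.roll([1.0, 1.0, 1.0, -1.0, -1.0], k) for k in range(3)]
print(complex_cell(lgn, bank))
```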
But the idea here is that we can actually do this kind of canonical computation again and again and again and produce something that replicates the loss of information and the extraction of features or parameters that the human visual system is performing. So this canonical idea, I think, is important, and it's something that we've been thinking about for a long time-- linear filtering that determines pattern selectivity, some sort of rectifying non-linearity, some sort of pooling. And we usually also include some sort of local gain control, which seems to be ubiquitous throughout the visual system and the auditory system in every stage, and noise, as well. And we're currently, in my lab, working on lots of models that are trying to incorporate all of these things in stacked networks-- small numbers of layers, not deep-- shallow, shallow networks for us-- in order to try to understand their implications for perception and physiology. This was just a description of a single stage, and then, of course, you have to stack them. And there are many people that have talked about that idea. This is a figure from Tommy's paper with Christof, I think-- 1999. And Fukushima had proposed a basic architecture like this earlier. And so I think this has now become-- you barely even need to say it, because of the deep network literature. So how do we do this? Same thing I told you before. Take an image, plop down all these V2 receptive fields. By the way, I should have said this at the outset-- this is drawn as a cartoon. The actual receptive fields that we use are smooth and overlapping, so that there are no holes. And in fact, the details of that are that since we're computing averages, you can think of this as a low pass filter, and we try to at least approximately obey the Nyquist theorem, so that there's no aliasing-- that is, there's no evidence of the sampling lattice, for those of you that are thinking down those lines. If you were not thinking down those lines, I'll just say the simple thing, which is that they're not little disks that are non-overlapping, because then we would be screwing everything up in between them. They're smooth and overlapping so that we cover the whole image, and all the pixels in the image are going to be affected by this process. So we make all those measurements. It's a very large set of measurements. And now we start with white noise, and we push the button. And again, push simultaneously on the gradients from all those little regions until we achieve something that matches all the measurements in all of those receptive fields. The measurements in the receptive fields are averaged over different regions. So the ones that are in the far periphery are averaged over large regions, and so those averages are throwing away a lot more information. The ones that are averaged near the fovea are throwing away a small amount of information. When you get close enough to the fovea, they're throwing away nothing. So the original image is preserved in the center, and then it gets more and more distorted as you go away from the fovea. So the question is, does that work for a human? Is it metameric? The display here is not very good, but I'll try to give you a demonstration of it to convince you that it does work. You have to keep your eyes planted here, and I'm going to flip back and forth between this original picture, which was taken in Washington Square Park, near the department. And I'm going to flip between this and a synthesized version. 
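A schematic version of that synthesis loop, under much simpler statistics than the real model: here each smooth pooling window only has to match a local mean and variance, whereas the actual model matches a full set of joint wavelet statistics. The window construction, step sizes, and names below are placeholders.

```python
import numpy as np

def windowed_stats(img, windows):
    """Weighted local mean and variance within each smooth pooling window.
    'windows' is a list of non-negative weight maps, each summing to 1,
    that are smooth and overlapping so they jointly cover the image."""
    stats = []
    for w in windows:
        m = np.sum(w * img)
        v = np.sum(w * (img - m) ** 2)
        stats.append((m, v))
    return stats

def synthesize(target, windows, steps=2000, lr=0.1, seed=0):
    """Start from white noise; nudge the image so its windowed statistics
    match the target's. Gradients are written by hand for this toy case
    (treating the local mean as fixed inside the variance term)."""
    rng = np.random.default_rng(seed)
    img = rng.normal(size=target.shape)
    target_stats = windowed_stats(target, windows)
    for _ in range(steps):
        grad = np.zeros_like(img)
        for w, (tm, tv) in zip(windows, target_stats):
            m = np.sum(w * img)
            v = np.sum(w * (img - m) ** 2)
            grad += 2 * (m - tm) * w                      # mean error
            grad += 2 * (v - tv) * 2 * w * (img - m)      # variance error
        img -= lr * grad
    return img

# Toy usage: a 1-D "image" with two smooth, overlapping windows.
x = np.linspace(0, 1, 64)
w1 = np.cos(0.5 * np.pi * x) ** 2
w2 = 1.0 - w1
windows = [w1 / w1.sum(), w2 / w2.sum()]
target = np.sin(8 * np.pi * x)
result = synthesize(target, windows, steps=500, lr=0.05)
```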
You have to keep your eyes here, at least for a bunch of flips. Hello. Here we go. Keep your eyes fixated. Those two images should look the same. It's going back and forth, A, B, A, B, and they should look the same. I think for most of you, and for most of these viewing distances, it should work. And now if you look over here, you'll see that they actually are not the same. That's about the size of a V2 receptive field, and it is the same two images. I'm not cheating here, in case anybody's worried. I'm just flipping back and forth between the same two images. And you can see that the original image has a couple of faces in that circle, but the synthesized one, they're all distorted, the same way Feynman was when I showed you his photograph. But again, the point here is that these two are not metamers when you look right at this peripheral region, but when you keep your eyes fixated here, they're pretty hard to distinguish. This is right at about the threshold for the subjects that we ran in this experiment, so it should be basically imperceptible to you. That was a demo, just to convince you that it seems to work. We did an experiment, because we wanted to do more than just show that it sort of works. We wanted to figure out whether we could actually tie it to the physiology in a more direct way, so what we did is we generated stimuli where we used different receptive field size scaling. So this is a plot. Along this axis is going to be-- just to get you situated, along this axis is going to be models that are used to generate stimuli with different receptive field size scaling. That's the ratio of diameter to eccentricity-- diameter of the receptive field to the eccentricity distance from the fovea. And along here is going to be the percent correct that a human is able to correctly identify-- the way we did this, it's called an ABX experiment. So we show one image, then we show another image, then we show a third image. And we say, which image does the third one look like? So we're going to plot percent correct here. And if we use a model with very small receptive fields, then we get syntheses that look like this. This one has very little distortion. There's a little bit of distortion around near the edges, but it's pretty close to the original. If we use really big receptive fields, then we get a lot of distortion. Things really start to fall apart. And somewhere in between-- so far to the right on this plot, we expect people to be at 100% noticing the distortions, and far to the left on this plot, we expect them to be at chance. We expect them to not be able to tell the difference. And that's exactly what happens. This is an average over four observers. And you can see that the performance, the percent correct starts at around 50%, and then climbs up and asymptotes. So what's more, we can now do something-- this is a little bit complicated to get your head around. We're using this model to generate the stimuli, and this is the model parameter plotted along this axis. Now we're going to use the model again, but now we're going to use the model as a model for the observer. So there's two models here. One is generating the stimuli. The other one, we're going to try to fit-- we're going to ask, if we used a second copy of the model to actually look at these images and tell the difference between them, what would its receptive fields have to be in order to match the human data?
And I'm not going to drag you through the details, but the basic idea is that this allows us to produce a prediction-- this black line-- for how this model would behave if it were acting as an observer. And by adjusting the parameter of the observer model, we can estimate the size of the human receptive fields. So the end result of all of this is we're going to fit a curve to the data, and it's going to give us an estimate of the size of the receptive fields that the human is using to do this task. And that is right here. In fact, it's right at the place where the curve hits the 50% line. That's the point where the human can't tell the difference anymore, and that's the point where we think an observer would be-- where the receptive fields of the stimulus would be the same size as the receptive fields of the observer. So that's what we're looking for. And when we do that for our four observers, they come out very consistent. So here's a plot of the estimated receptive field sizes of these observers. All four of them-- 1, 2, 3, 4, and the average over the four. And nicely enough-- remember, I told you that we know something about the receptive field sizes in-- these are from macaque monkeys. And if we plot those on the same plot, these color bands are the size of the receptive fields in a macaque, now combined over this large set of data from a whole bunch of different papers. Jeremy went through incredibly painstaking work to try to put these all into the same coordinate system and unify the data sets. And so the height of each of these bars tells you-- they're error bars-- how much variability there is in where we think the estimates are. And you can see that the answers for the humans are coming right down on top of V2. So we really do think that the information that is being lost in these stimuli is being lost in V2, and it seems to match the receptive field sizes, at least of macaque monkeys. We were worried that this might depend a lot on the details of the experiment. So for example, we thought, well, what if we give people a little more information? For example, what if we let them look at the stimulus longer? So the original experiment was pretty brief-- 200 milliseconds. What if we give them 400 milliseconds? And so up here are plots for the same four subjects. The original task is in the dark gray, and you can see the curves for each of the subjects. When we give them more time, what you notice is that, in general, they do better. So generally, the light gray curves-- 1, 2, 3-- are above the dark gray curves. They get higher percent correct. But the important thing is that each of these curves dives down and hits the 50% point at the same place. In other words, what we interpret this to mean is that the estimate of the receptive field sizes is an architectural constraint, and we can estimate the same architectural constraint under both of these conditions, even though performance is noticeably different, at least for these three subjects. This one, it's really quite a big, big improvement. This subject is doing much, much better on the task when we give them more time. And yet, this estimate of receptive field sizes is pretty stable, so we thought this was a pretty important control. And down below is another control. That was a bottom-up control. This is a top-down control.
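Here is a toy sketch of that fitting procedure. The parametric curve below is a plausible stand-in, not necessarily the exact function used in the published analysis: performance sits at chance (50%) below a critical scaling value and rises above it, and the fit reads off that critical scaling. The data points are invented for illustration.

```python
import numpy as np

def prop_correct(scaling, s_crit, gain):
    """Toy psychometric curve: chance (0.5) below the critical scaling
    s_crit, rising toward 1 above it. A stand-in parametric form."""
    d2 = np.where(scaling > s_crit,
                  gain * (1.0 - (s_crit / scaling) ** 2),
                  0.0)
    # map squared discriminability to proportion correct, ABX-style
    return 0.5 + 0.5 * (1.0 - np.exp(-d2))

def fit_critical_scaling(scalings, pc, s_grid, g_grid):
    """Brute-force least-squares fit; returns the estimated critical
    scaling, i.e. where the fitted curve leaves the 50% chance line."""
    best, best_err = None, np.inf
    for s in s_grid:
        for g in g_grid:
            err = np.sum((prop_correct(scalings, s, g) - pc) ** 2)
            if err < best_err:
                best, best_err = (s, g), err
    return best

# Made-up data points (scaling = RF diameter / eccentricity).
scalings = np.array([0.125, 0.25, 0.5, 1.0, 1.5])
pc = np.array([0.5, 0.52, 0.78, 0.95, 0.98])
s_c, gain = fit_critical_scaling(scalings, pc,
                                 np.linspace(0.05, 1.0, 96),
                                 np.linspace(0.5, 10.0, 96))
print("estimated critical scaling:", s_c)  # expect roughly 0.4-0.5 here,
                                           # i.e. a V2-like value
```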
People have talked about attention being very important in peripheral tasks, so we now gave the subjects an attentional cue-- a little arrow at the center of the display that pointed toward the region of the periphery where the distortion was largest in a mean-squared error sense. So we measure little chunks of the peripheral image and look for the place where there's the biggest difference, and we tell them to pay attention to that part of the stimulus. They're not allowed to move their eyes. We have an eye tracker on them the whole time, so they're not allowed to look at it. But we're telling them, try to pay attention to what's, let's say, in the upper left. And again, the result is quite similar. Their performance improves noticeably, at least for these three subjects. This one, again, is the most dramatic performance improvement. Nobody gets worse. This subject basically stayed about the same. But again, the estimates of receptive field size are quite stable. So our interpretation is attention is boosting the signal, if there is a signal, that allows them to do the task. But if they're at chance and there's no signal, attention does nothing, which is why, when you get to 50%, all these points coalesce. All the curves are hitting 50% at the same place. One last control-- we wanted to convince ourselves that really it was V2, and it wasn't just luck that we happened to get that receptive field size that matched the macaque data. So we did a control experiment where we tried to get the same result for V1. So this time, we just measure local oriented receptive fields like Hubel and Wiesel described, and we average them as in a complex cell over different sized regions. And we generate stimuli that are just matched for the average responses of the V1 cells. We don't do all the statistics on top of that that represent the V2 calculation. We're just doing average V1 responses. When we do that-- we generate the stimuli, we do the same experiment, we get a very different result in light gray here. So you can see that these curves are always higher than the other ones, but they also hit the axis at a much, much smaller value, usually by about a factor of two, which is just right, given what I told you before about receptive field sizes. So if we go back and we combine all the data on one plot-- down here are the V1 controls. They're about the right size for V1. And up here is the original experiment and the two controls that I told you about-- the extended presentation and the directed attention, and those are all pretty much lying in the range of V2. We think this has a pretty strong implication for reading speed. When you read, your eyes hop across the page. You do not scan continuously. You hop. And when you hop, here's an example of the kind of hops you do when you're reading. There's an eye position, and the typical hop distance would be about that-- from here to there. This is the same piece of text. We've synthesized it as a metamer using this model, just to illustrate the idea that the chunk of stuff that you can read around that fixation point, it's about right. It matches what you would expect for the kind of hopping that you could do. Your reading speed is limited by the distance of those hops, and the distance of those hops is limited by this loss of information. So you can't read anything beyond maybe this I and this N. And in order to read it, you hop your eyes over here, and now you get most of this word. You can make out the rest of an "involuntarily."
So there's an interesting implication here, which is that you can potentially increase reading speed by using this model to optimize the presentation of text. And now that we can do these things electronically, you can imagine all kinds of devices where the word spacing and the line spacing and the letter sizes and everything else could change with time and position on the display. So you don't have to just put things out as static arrays of characters. You could now imagine jumping things around and rescaling things. You could imagine designing new fonts that caused less distortion or loss of information, et cetera. So this is just going back to the trichromacy story that I told you. I told you that once they figured out the theory, and they had all the psychophysics down, the next thing that happened is all that engineering. They came up with engineering standards, and they used it to design devices and specify protocols for transmitting images, for communicating them, for rendering them. I think that this has that kind of potential. And this theory is too crude right now, but if you had a really solid theory for what information survived in the periphery, you can really start to push hard on designing devices and designing specifications for devices for improved whatever. Sometimes you want to improve things. Sometimes you want to make things harder to see, like in this example. So you want to build camouflage. You go in, you take a bunch of photographs of the environment, and then you say, let's design a camouflage that best hides itself when it's not seen directly within this environment. So you could use these kinds of loss of information to exploit things or to aid things in terms of human perception. So let me say just a few things about V2, and then maybe I should stop. So this work that Jeremy and I did in building this model for metamers, which is a global version of the texture model that operates in local regions, led us to start asking questions about what we could learn by actually measuring cells in V2. And we joined forces with Tony Movshon, who is the chair of my department and a longtime collaborator and friend. And we started a series of experiments to try to explore presentations of texture to V2 neurons to try to understand what we could learn about the actual representations of V2. And these are all done in macaque monkey. And I should also mention that V2 is-- it's been studied for a long time. Hubel and Wiesel wrote a very important paper about V2 in 1965, which was quite beautiful, documenting the properties that they could find. But the thing that's interesting about this is that V1 didn't really crack until Hubel and Wiesel figured out what the magic ingredient was. And the magic ingredient was orientation. Before Hubel and Wiesel, people had been poking at primary visual cortex, showing little spots of light and little annuli-- all the things that worked really well in the retina and the LGN, and they were not getting very interesting results. They were saying, well, the receptive fields are bigger and there are hot spots, positive and negative regions, but the cells are not responding that well. And when Hubel and Wiesel figured out that orientation was the magic ingredient-- and the apocryphal story is that they did that late at night, and they figured it out when they were putting a slide into the projector, and they had forgotten to cover the cat's eyes.
And they put the slide into the projector, and the line at the edge of the slide went past on the screen-- TOMMY: It was broken. EERO SIMONCELLI: It was broken. Ah, I always thought it was the edge of the slide. I've fibbed, and Tommy has corrected me that it was something broken in the slide. But in any case, the point is that a boundary went by, and they heard-- so they played the spikes through a loudspeaker. This is what most physiologists did in those days, and even still a lot do. Certainly, in Tony's lab you can always walk in there and hear the spikes coming over the loudspeaker. Anyway, they heard this huge barrage of spikes, more than they had ever heard from any cell that they had recorded from, and that was the beginning of a whole sequence of just fabulous work. And using that tool-- very simple and very obvious in retrospect, but absolutely critical for the progress. The point is that the stimuli matter, and making the jump to the right stimuli changes everything. So V2 for the last 40 years has been sitting in this difficult state where people keep throwing stimuli at it. They try angles. They try curves. They try swirly things. They try corners. They try contours of various kinds, illusory contours. And throughout all of this, the end story is V2 cells have bigger receptive fields, many of them respond to orientation, some of them respond to particular combinations of orientation, but it's usually a small subset, and the responses are weak. And that's really what the literature has looked like for 40 years. So what we were after is, can we drive these cells convincingly and in a way that we can document is significantly different than what we see in V1? That was the goal-- find a way to drive most of the cells and to drive them differently than what one would expect in V1. As a starting point, we succeeded with textures. So basically, we took a bunch of textures. Here are some example textures drawn from the model. Down below are spectrally-matched equivalents. So these things have the same power spectra, the same amount of energy in different orientation and frequency bands, but they lack all the higher-order statistics that are coming in this texture model that give you nice, clean edges and contours and object-y things, or lumps of objects. And sure enough-- so here's some example cells. Here's three V1 cells. Here's three V2 cells. And in each of these plots, there's two curves. These are shown over time. The stimulus is presented here for 100 milliseconds. You see a little bump in the response. And there's a light curve and a dark curve. The light curve is the response to the spectrally-matched noise, and the dark curve is the response to the texture, with the higher-order statistics. V1 doesn't seem to care, is the short answer here, and V2 cares quite significantly. So when you put those higher-order statistics in, almost all V2 cells respond significantly more, and you can see that in these three examples. These are not unusual. That's what most of the cells look like. So here's a plot, just showing you that 63% of the V2 neurons are significantly and positively modulated. And by the way, this is averaged over all the textures that we showed them. And if you pick any individual cell, there's usually a couple of textures that drive it really well, and then a bunch of textures that drive it less well. So this effect could be made stronger if you chose only the textures that drove the cell well.
And up here is V1, where you can see that very few of them are modulated by the existence of these higher-order statistics. Oh, here it is across texture category. So now on the horizontal axis is the texture category-- 15 different textures, and you can see, again, that V1 is pretty much very close to the same responses-- dark and light, again, for the spectrally-matched and the higher-order. And for these three V1 cells, they're basically the same responses for each of these pairs. And for the V2 cells, there are always at least some textures where there's an extreme difference. So this is a really good example. There's a huge difference in response here for these two textures, but for actually many of the other textures, there's not much of a difference. So sort of a success. And the last thing I was going to tell you about is that we think-- so this is really fitting, given what Jim DiCarlo told you about, or what I assume he told you about-- this idea of tolerance or invariance versus selectivity. We wanted to know, how can we take what we know about these V2 cells and pull it back into the perceptual domain? How can we ask, what is it that you could do with a population of V2 cells that you couldn't do with a population of V1 cells? And the thought was if the V2 cells are responding to these texture statistics, then if I made a whole bunch of samples of the same texture, the V2 cells should be really good at identifying which texture that is-- which family it came from. And the V1 cells will be all confused by the fact that those samples each have different details that are shifting around. So the V1 cells will respond to those details, and they'll give a huge variety of responses when you re-sample from that family, and the V2 cells will be more invariant or more tolerant to re-sampling from that family. That was the concept. And that turns out to be the case, so let me show you the evidence. So here's four different textures, four different-- what we call different families. Here's images of three different examples drawn from each. So these are just three samples drawn, starting with different white noise seeds. And you can see that they're actually physically different images, but they look the same. Three again. Three again. And so we got 100 cells from V1 and about 100 cells from V2. The stimuli are presented for 100 milliseconds. We do 20 repetitions each. We need a lot of data. And what's shown here is just this 4 by 3 array, but we actually had 15 different families and 15 examples of each. 20 repetitions of each of those. 225 stimuli times 20 repetitions. That's the experiment. So what we wanted to know is, does the hypothesis hold? And so here's an example. These are responses laid out for these 12 stimuli. And what you can see is that this is a V1 neuron-- a typical V1 neuron. You can see that the neuron actually responds with a fair amount of variety in these columns. That is, for different exemplars from the same family, there's some variety. High response here, medium response here, very low response here. And this is for these three images, which to us look basically the same. So this cell would not be very good at separating out or recognizing or helping in the process of recognizing which kind of texture you were looking at, because it's flopping all over the place when we draw different samples. That, as compared to a V2 cell-- this is a typical V2 cell, which you can see is much more stable across these columns.
This is roughly the same response here, roughly the same here, a little bit of variety in this one, roughly the same in this one. And sure enough, if you actually go and plot this, V2 has much higher variance across families. That's vertically. These are the V1 cells. These are the V2 cells. And this is the variance across families. This is the variance across exemplars. V2 has higher variance typically across families, and V1 has higher variance across exemplars. And now if you take the populations of equal size-- 100 of each, and you ask, well, how good would I be at taking that population and identifying which family, which kind of texture I'm looking at? And we do this with cross-validation and everything. I can give you the details later, if you want to know. We find V2 is always better than V1 in doing this task. So we can do a better job in performing this task-- identifying which of these families a given example was drawn from if we look at V2 than if we look at V1. And if we flip that around and we try to do exemplar identification, with 15 different examples of a given family-- if we say, which one was it? It turns out that V1 is better than V2 for that. So we think of this as evidence that V2 has some invariance across these samples, whereas V1 is much more specialized for the particular samples. This work started with this fantastic post-doc that I had mentioned earlier, Javier Portilla. Jeremy Freeman came into my lab, and we just jumped all over this in making the metamers. Josh McDermott is on here because I usually also play the auditory examples and walk through a little bit of that work, but I'm going to leave that for him. And Corey Ziemba, who's a student who's in the lab right now and is doing a lot of the physiology and did a lot of the physiology that I showed you in Tony's lab. And we were funded by HHMI and also the NIH. So thanks. |
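A schematic version of that family-versus-exemplar decoding analysis, with simulated responses standing in for the recorded populations, and a nearest-centroid decoder used purely for concreteness; the published analysis may use a different classifier, and all the numbers here are invented.

```python
import numpy as np

def family_decoding_accuracy(responses, families, exemplars):
    """Leave-one-exemplar-out decoding of texture family from a
    population response vector. 'responses' has shape
    (families, exemplars, cells)."""
    correct = 0
    for f in range(families):
        for e in range(exemplars):
            mask = np.ones(exemplars, dtype=bool)
            mask[e] = False
            # centroids computed without the held-out exemplar index
            centroids = responses[:, mask, :].mean(axis=1)
            test = responses[f, e]
            pred = np.argmin(np.linalg.norm(centroids - test, axis=1))
            correct += (pred == f)
    return correct / (families * exemplars)

# Simulated stand-in data: 15 families, 15 exemplars, 100 cells.
rng = np.random.default_rng(1)
fam_means = rng.normal(size=(15, 1, 100))
# "V2-like": small exemplar-to-exemplar variability around family means.
v2 = fam_means + 0.5 * rng.normal(size=(15, 15, 100))
# "V1-like": large exemplar variability swamps the family signal.
v1 = fam_means + 3.0 * rng.normal(size=(15, 15, 100))
print("V2-like accuracy:", family_decoding_accuracy(v2, 15, 15))
print("V1-like accuracy:", family_decoding_accuracy(v1, 15, 15))
```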
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Unit_8_Panel_Robotics.txt | PATRICK WINSTON: Well, I suppose my first question has to do with some remarks that Tony made about Rod Brooks. I remember Rod Brooks' work for the one great idea he had, which was the idea of subsumption. And the idea of subsumption was to take the notion of procedural and data abstraction from ordinary programming and elevate it to a behavior level. And the reason for doing that was that if things weren't working so well at one level, you would appeal to another level to get you out of trouble. So that sounds like a powerful idea to me. And I'm just very interested in what the panelists construe to be the great principles of robotics that have emerged since then. Are there great principles that we can talk about in a classroom without just describing how a particular robot works? STEFANIE TELLEX: So we were talking about this ourselves a little bit and sort of asking ourselves what makes a systems paper? And what do you write down in one of those papers as these general principles that you extract from building a system? Because it seems like there's two kinds of papers in robotics-- the systems paper, where you said, I built this thing and here's kind of how it works. Where it's hard to extract, I think, general principles from that. It's like, I built this and this and this, and this is what it did. Here's a video. But it does something amazing, so it's cool. And then there's, like, kind of algorithm papers, which tend to get more citations and I don't know. And they usually work because they have this chunk of knowledge, subsumption architectures, RRT Star's one of my favorite examples. It's the kind of paper where there's an algorithm, and then there's math that shows how the algorithm works, some results that they show. And there's this nugget that transfers from the author's brain to your brain. And I think it's hard to know what that nugget is when you've built a giant system. One of the things that I've been thinking about that might be what that might look like is kind of design patterns for robots. This is a concept from software engineering. It's at a higher level of abstraction than a library or something that you share, but it's things about the ways that you put software together. So if you're a hacker, you've probably heard of some of these patterns like singleton and facade and strategy. They have these sort of evocative names from this book, Gang of Four is the nickname. And I think there's a set of design patterns for robotics that we are slowly discovering. So when I was hanging out in Seth Teller and Nick Roy's group for my post-doc, there was one that I really got in my head, which was this idea of pub/sub-- publish and subscribe. You're talking about YARP and LCM and Russ. They all had this idea that you don't want to write a function to call to get the autodetection results. You just want your detector blasting out the results as fast as it can all the time, and then break that abstraction. And you get a lot of robustness in exchange for that. I think that's a design pattern for robots. I think there's probably about 30 more of them. And I bet Russ knows a lot of them.
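As a minimal sketch of the publish/subscribe pattern described here: the detector publishes results continuously, and consumers subscribe rather than calling a blocking function. Real robot middleware such as LCM, ROS, or YARP implements this across processes and machines; the class below is an invented in-process toy, just to show the shape of the pattern.

```python
from collections import defaultdict

class Bus:
    """Tiny in-process publish/subscribe bus (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subscribers[channel]:
            callback(message)

bus = Bus()

# A consumer keeps only the latest detections; it never blocks on the
# detector, which is the robustness win described above.
latest = {}
bus.subscribe("detections", lambda msg: latest.update(objects=msg))

# The detector blasts out results as fast as it can, whether or not
# anyone is listening.
for frame in range(3):
    bus.publish("detections", [("mug", 0.9), ("block", 0.7)])

print(latest["objects"])
```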
And Albert and Ed-- there's people who know them maybe, but they're not written down. I think one thing I'd like to do is write some more of them down. Sorry I didn't. PATRICK WINSTON: Russ, what do you think? Your talk seemed to focus on optimization as the answer. RUSS TEDRAKE: It's a framework that I think you can cast a lot of the problems in and get clear results. I think looking across, you can point to clear sorts of ideas that worked very well. So I think for estimation, Bayes' rule works really well. And you know, Monte Carlo estimation has worked really well for planning in high dimensional spaces. Somehow randomization was a magical bullet where people started doing RRT-type things as well as trajectory optimization-type things. I do think that the open source movement and the ability to write software and components and modules has been a huge, huge thing. And I do think that at the low-level control level, optimization-based controllers have been a magical thing. I think any one of these-- in any one of these sub-disciplines, you can point to a few real go-ahead ideas that have rocked our world, and everybody gets behind them. You know, I think maybe the biggest one of all, actually, has been LIDAR. And then the ability-- I think sensing has really come online and been so enabling in the last few years that I think if you look back at the last 15, 20 years of robotics, the biggest point changes I think have been with the sensors-- sensors upped their frame rate and resolution, gave depth. When LIDAR and Kinect came out, those just changed everything.
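To make the "Bayes' rule for estimation" point concrete, here is a minimal discrete Bayes filter over a toy one-dimensional grid world. The motion and sensor models are invented placeholders, not any particular robot's.

```python
import numpy as np

def bayes_filter_step(belief, motion_model, likelihood):
    """One predict/update cycle of a discrete Bayes filter over grid
    positions. 'motion_model' is a transition matrix P(x'|x);
    'likelihood' is P(z|x') for the current measurement z."""
    predicted = motion_model.T @ belief          # predict
    posterior = likelihood * predicted           # update (Bayes' rule)
    return posterior / posterior.sum()           # normalize

# Toy 5-cell world: the robot tends to move one cell to the right.
motion = np.zeros((5, 5))
for i in range(5):
    motion[i, min(i + 1, 4)] = 0.8
    motion[i, i] += 0.2
belief = np.full(5, 0.2)                            # uniform prior
likelihood = np.array([0.1, 0.1, 0.7, 0.05, 0.05])  # sensor favors cell 2
belief = bayes_filter_step(belief, motion, likelihood)
print(belief)
```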
And then we can maybe find out what the solutions are by looking at the biology. Because in that case, particularly, you have low level systems for-- that can do grasp, and that are there in reptiles. And then you have these additional systems in mammals, and particularly in primates, that can override the low-level systems to do dexterous control. PATRICK WINSTON: John, you look like you're eager to say something. JOHN LEONARD: I just want to talk about subsumption, because it had such a huge effect on my life as a grad student. It was sort of like the big thing in 1990 when I was finishing up. But I really developed a strong aversion to it, and so I tried to argue with Rod back then, but not very successfully. I would say, when will you build a robot that knows its position? And he would say, I don't know my position. But I think some of the biological evidence, like the grid cells and things that maybe at an autonomic subconscious level there is sort of position information in the brain. But subsumption, I think, as a way of trying to strive for robustness where you build on layers, that's a great inspiration. But I feel that the intelligence without representation sort of work that Rod did, I just don't buy it. I think we need representation. PATRICK WINSTON: I guess there are two separable ideas. JOHN LEONARD: Yes. PATRICK WINSTON: The no representation idea and the layering idea. JOHN LEONARD: Yeah. I like the layering, and I'm less keen on the no representation. RUSS TEDRAKE: I'm curious if Giorgio-- I mean, you showed complicated system diagrams. And you're obviously doing very complicated tasks with a very complicated robot. Do you think-- do you see subsumption when you look at those-- GIORGIO METTA: Well, it's actually there. I don't have time to enter into the details. But the way it's implemented, ER, allows you to do subsumption or other things. There's a way to physically take the modules and, without modifying the modules, connect them through scripts that can insert logic on each module to sort of preprocess the messages and decide whether to subsume one of the module or not, or activate the various behaviors. So in practice, can be a subsumption architecture, a purer one or whatever other combination you have in mind for the particular task. RUSS TEDRAKE: Maybe a slightly different question is did the subsumption view of the world shape the way you designed your system? GIORGIO METTA: It may have happened without knowing, because that piece of software started at MIT while I was working with Rod. But we didn't try to build the subsumption architecture in that specific case. But the style, maybe of the publish-subscribe that we ended up doing was derived from subsumption in a sense. Was in spirit, although not in the software itself. But going back to whether there's a clear message that we can take, certainly I will subscribe to Stefanie message of the recyclable software, the fact that we can now build these modules and build large architectures. This allows doing experiments that we never dreamed of until a few years ago. So we can connect many powerful computers and run vision and control optimisation very efficiently, and especially if recycling the software. So you don't have to implement inverse kinematics everyday. 
PATRICK WINSTON: Well, I think the casual observer this afternoon, one good example being me, would get the sense that, with the exception of Tony, the other four of you are interested in the behavior, but not necessarily interested in understanding the biology. Is that a misimpression or is that correct? And when you mention LIDAR, for example, that's something I don't think I use. And it's doing something-- it's enabling a robot with a mechanism that is not biological. So to what degree are any of you interested in understanding the nature of biological computation? JOHN LEONARD: I care deeply about it. I just don't feel I have the tools or the bandwidth to really dive into it. But every time I talk to Matt Wilson I leave feeling in awe, like I wish I could clone myself and hang out across the street. GIORGIO METTA: Well-- Yeah. Go ahead. STEFANIE TELLEX: I get it. I kind of feel the same. I mean, I'm an engineer. I'm a hacker. I build things. And I think the way that I can make the most progress towards understanding intelligence is by trying to build things. But every time I talk to Josh, you know, I learn something new, and Noah and Vikash. PATRICK WINSTON: Say that again? STEFANIE TELLEX: Every time I talk to Josh and Noah Goodman and Bertram Malle, people from the psychology and cognitive science [INAUDIBLE], I learn something and I take things away. But I don't get excited about trying to build a faithful model that incorporates everything that we know about the brain, because I just don't-- I can't put all that in my brain. And I feel that I'm better guided by my engineering intuition and the things that I've learned by trying to build systems and seeing how that plays out. PATRICK WINSTON: On the other hand, Tony, you are interested in biology and you do build stuff. TONY PRESCOTT: Yeah. I'm interested in it, because I trained originally as a psychologist. So I came into robotics in order to build physical models. PATRICK WINSTON: So why do you build stuff? TONY PRESCOTT: Because I think the theory is the machine. Our theories in psychology and neuroscience are never going to be like theories in physics. We're not going to be able to express them concisely and convince people of them in a short paper. We're going to be able to, though, build them into machines like robots and show people that they do behavior, and hopefully convince people that way that we have a complete theory. PATRICK WINSTON: So it's a demonstration purpose? Or a convincing purpose? TONY PRESCOTT: It's partly to demonstrate the sufficiency of the theory. I think that's the big reason. But another motivation that has grown more important for me is to be able to ask questions of biologists that wouldn't occur to them. Because I think the engineering approach-- you're actually building something-- raises a lot of different questions. And those are then interesting questions to pursue in biological studies and questions that might not occur to you otherwise. So I go back to Braitenberg's comment that when you try to understand a system, there's a tendency to overestimate its complexity, and that when you do synthesis that's a whole lot different from analysis. And actually, with the brain, we tend to either underestimate or overestimate its complexity and we rarely get it right. So the things that we think are complex sometimes turn out to be easy. So an example would be in our whisker system, it's really quite easy to measure texture with a whisker.
And there's lots of different ways of doing that that work. But intuitively you might not have thought that. But getting shape out of whiskers is harder, because it's an integration problem across time, and you have to track position and all these other things. So these things turn out to be hard. So I think trying to build synthetic systems helps us understand what are the real challenges the brain has to solve, and that's interesting for me. PATRICK WINSTON: Is there an example of a question that you didn't know was there when you started? TONY PRESCOTT: Yeah. PATRICK WINSTON: And you wouldn't have found if you hadn't attempted to build? TONY PRESCOTT: So when we started trying to build artificial whiskers, the engineers that were building the robot said, well, how much power does the motor have to have that drives the whisker? I mean, what happens when the whisker touches something? Does it continue to move and bend against the surface? Or does it stop? And well, we said we'll look that up in the literature. And of course, there wasn't an experiment that answered that question. So at that point we said, OK, we'll get a high speed camera and we'll start watching rats. And we found that when the whiskers touch, they stopped moving very quickly. So they make a light touch. And intuitively, yeah, maybe-- because we make a light touch. Obviously, we don't bash our hands against surfaces. But it's not obvious, necessarily, when you have a flexible sensor that that's what you would do. And in some circumstances, the rats allow their whiskers to bend against objects. So understanding when you make a light touch and when you bend was really a question that became important to us after we'd started thinking about how to engineer the system. How powerful do the motors need to be? PATRICK WINSTON: I'm very sympathetic to that view, being an engineer myself. I always say if you can't build it, you don't really understand it. TONY PRESCOTT: Yeah. PATRICK WINSTON: So many of you-- all of you have talked about impressive systems today. And I wonder if any of you would like to comment on some problem you didn't know that was there and you wouldn't have discovered if you hadn't been building the kinds of stuff that you have built. RUSS TEDRAKE: It's a long list. I mean, I think we learn a lot every day. Let me be specific. So with Atlas, we took a robot to a level of maturity that I've never taken before. I see videos from companies like Boston Dynamics that are extremely impressive. I think one of the things that separates a company like that from the results you get in a research lab is incredible amounts of hours, sort of a religion to data logging and analysis, and sort of finding corner cases, logging them, addressing them, incremental improvement. And researchers don't often do that. And actually, I think a theme in at least a couple of the talks was that maybe this is actually a central requirement. And in some sense, our autonomy really should be well suited to doing that, to maybe automatically finding corner cases and proving robustness and all these things. But the places that broke our theory were weird. I mean, so the stiction in the joints of Atlas is just dominant. So we do torque control, but we have to send a feedforward velocity signal to get over friction. If we start at zero and we have to start moving, if we don't send a feedforward velocity signal, our model is just completely wrong.
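A schematic of that friction workaround: add a feedforward term, keyed to the desired velocity, on top of the model-based torque so the joint breaks free of stiction. The constants and function name are invented for illustration; on a real robot the stiction level would be identified per joint from logged data.

```python
def joint_torque_command(tau_model, v_desired,
                         stiction_torque=2.0, eps=1e-3):
    """Model-based torque plus a feedforward term that overcomes static
    friction whenever we want the joint to start moving. The stiction
    value here is a made-up placeholder."""
    if abs(v_desired) > eps:
        # push in the direction of desired motion to break stiction
        sign = 1.0 if v_desired > 0 else -1.0
        return tau_model + sign * stiction_torque
    return tau_model

# Starting from rest: without the feedforward term, the commanded torque
# may never exceed the stiction threshold and the joint simply sits there.
print(joint_torque_command(tau_model=0.5, v_desired=0.3))  # 2.5
print(joint_torque_command(tau_model=0.5, v_desired=0.0))  # 0.5
```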
When you're walking on cinder blocks and you go near the ankle limit in pitch, there's a strange coupling in the mechanism which causes the ankle to roll. And it'll kick your robot over just like that, if you don't watch for it. And we thought about putting that into our model, addressing it with sophisticated things. It's hard and ugly and gross. And it's possible, but we did things to just-- you know, Band-Aid over that. And I think there's all this stuff, all these details that come in. I think the theory should address it all. I think we did pretty well. I'd say we got 80% of the way there, 70%, 80% of the way there with our theory this time. And then we just decided that there was a deadline and we had to cover some stuff up with band-aids. But that's the good stuff. That's the stuff we should be focused on. That's the stuff we should be devoting our research efforts to. PATRICK WINSTON: If you were to go into a log cabin today and write a book on Atlas and your work on it, what fraction of that book would be about corner cases and which fraction would be about principles? RUSS TEDRAKE: We really stuck to principles until last November. That was our threshold. November, we had to send the robot back for upgrades. We said, until then, we're going to do research. The code base is going to be clean. And then when we got the robot back in January, we did everything we needed to to make the robot compete in the challenge. So I think 70% or 80% of the way, we got there. And then it was just putting hours on the robot, finding those screw cases. And then, if I were to write a book in five years, I hope it would be-- PATRICK WINSTON: Is there a principle on that ankle roll? Or was that-- RUSS TEDRAKE: Oh, absolutely. We could have thrown that into the model. It just would have increased the dimensionality. It would have been a non-linear term. We could have done it, we just didn't have time to do it at the time. And it was going to be one of many things we would have had to do if we had taken the principled approach throughout. There's other things that we couldn't have put nicely into the model, that we would have needed to address. And that should be our agenda. TONY PRESCOTT: If that was your own robot, would you have just re-engineered the ankle to make that problem less of an issue? RUSS TEDRAKE: It wasn't about the ankle. It was about the fact that there's always going to be something unmodeled that's going to come up and get you. And with robots, I think we're data starved. We don't have the big data problem in robotics yet. And I think you're limited by the hours you can put on your robot. We need to think about how do you aggressively search for the cases that are going to get you? How do you prove robustness to unmodeled things? I think this is fundamental. It's not a theme I would have prioritized if I hadn't gotten this far with a robot.
Certainly there is-- I mean, a difference in design between the biological actuators and the artificial actuators that makes life very difficult. And especially when you have to go through something like a hand, where you'd like to have a lot of degrees of freedom. But there's no way you could actually build it, so you have to take shortcuts here and there. And I guess the same is true, then, for computation. And you resort to putting as many micro-controllers as you can inside the robot, because you want to have efficient control loops. And then you say, well, maybe have a cable for proper image processing because there's no way you can squeeze that into the robot itself. It's not surprising. It's just a matter of, when you're doing the design, you soon discover that there are limitations you have to take into account. I don't know whether it is surprising. I mean, I guess we learned the lesson across many years of design. We designed other robots before the iCub. We sort of-- I thought we knew where the limits were with the current technology. PATRICK WINSTON: I wonder if the-- you say you learned a lot building iCub. I wonder if this knowledge is accessible. It's knowledge that you discussed in meetings and seminars, and thought about at night and fixed the next day. Is any of it-- if I wanted to build iCub and couldn't talk to you, would I have to start from scratch? I know you've got the stuff on the web and whatnot, but-- GIORGIO METTA: Yeah. That's probably enough for-- PATRICK WINSTON: --reasons in them. GIORGIO METTA: Sorry? PATRICK WINSTON: Your web material has designs, but it doesn't have reasons. GIORGIO METTA: Yeah. Yeah, that's-- that's right. No, the other thing is we never documented the process itself. So that information, I don't know, resides in the people that actually made the choices when we were doing the design. There's one other thing that maybe is important, is that-- so the iCub is about 5,000 parts. And that's not good, because there are 5,000 parts that can break. And that may be something interesting for design of materials for robots, or new ways of building the robots. And at the moment, basically everything that could potentially break has, at some point, failed on the iCub over many years. Even parts that theoretically we didn't think could break, well, they could. But we estimated maximum torques, whatever, and then it happened. Somebody did something silly and we broke a shoulder. It's a steel part that we never thought could actually break. And it completely failed. I mean, those types of things are maybe interesting for future designs, or for either simplifying the number of parts or figuring out ways of using fewer parts or, let's say, different ways of actually building the mechanics of the robot. PATRICK WINSTON: I suppose I bring it up because some of us in CSAIL are addressing-- not me, but some people in CSAIL are interested in how you capture design rationale, how you capture those conversations, those whiteboard sketches and all of that sort of thing so that the next generation can learn by some mechanism other than apprenticeship. But let's see. Where to go from here? iCub is obviously a major undertaking and Russ had been working like a slave for three years on the Atlas robot. Do you-- I don't know quite how to phrase this without being too trumpish. But the soldering time to thinking time must be very high on projects like this. Is that your sense? Or do you think that the building of these things is actually essential to working out the ideas?
Maybe that's not quite the question I'm going to ask. Maybe a sharper question is, given the high ratio of soldering time to thinking time, is it something that a student should do? RUSS TEDRAKE: I'm lucky that someone else built the robot for us. Giorgio has done much more than we have in this regard. PATRICK WINSTON: Well, by soldering time you know-- RUSS TEDRAKE: I know. Yeah, yeah, sure. We-- PATRICK WINSTON: It's a metaphor. RUSS TEDRAKE: Yeah. But we got pretty far into it with the good graces of DARPA and Google slash Boston Dynamics. The software is where we've invested our solder time. A huge amount of software engineering effort. I spent countless hours on setting up build servers and stuff. Am I stronger, you know, am I better for it? I think having invested, we can do research very fast now. So I'm in a new position to be able to try really complicated ideas very quickly because of that investment. It was actually-- I knew going in what I was going to be doing. I saw what John and other people got out of being in the Urban Challenge, including especially the tools, like the LCM that we've been talking about today. And I wanted that for my group. So it was a very conscious decision. I'm at a place now where we can do fantastic research. Every one of the students involved did great research work on the project. We hired a few staff programmers to help with some of the non-research stuff. And I think the hardware is important. It's hard to balance, but I do think it's important. PATRICK WINSTON: Just one short follow-up question there. Earlier you said that some of your students didn't want to work on it. And why was that? Was that a principled reason? RUSS TEDRAKE: People knew how much soldering time there was going to be. Right? And the people that had their research agenda and it was more theoretical, they didn't want that soldering time. Other people said, I'm still looking for ideas. This is going to motivate me for my future work. They jumped right in. And super strong students made different decisions on that. PATRICK WINSTON: And they both made the right decisions. RUSS TEDRAKE: I think so. PATRICK WINSTON: Yeah. RUSS TEDRAKE: Yeah. PATRICK WINSTON: But John, you've also been involved in-- well, the self-driving car thing was a major DARPA grand challenge. Some people have been critical of these grand challenges because they say that, well, they drive the technology up the closest hill, but they don't get you on a different hill. Do you have any feelings about these things, if they're a good idea in retrospect, having participated in them? JOHN LEONARD: Let's see. I'm really torn on that one because I see the short term benefits to the community-- and you can point to things like the Google car-- that there's a clear impact. But DARPA does have a mindset that once they've done something, they declare victory and move on. So now if you work, say, on legged locomotion, which one of my junior colleagues does, DARPA won't answer his emails. It's like, OK, we did legged locomotion. And so I think that the challenge is to be mindful of where we are in terms of the real long-term progress. And it's not an easy conversation to have with the funding agencies, but-- PATRICK WINSTON: But what about a brand new way of doing something that is not going to be competitive in terms of demonstration for a while? Is that a problem that's amplified by these DARPA grand challenges? I mean, take chess, for example.
If you had a great idea about how humans play chess, you would never be competitive with Deep Blue, or not for a long time. So you wouldn't be in a DARPA program that was aimed at doing chess. So is that a-- do you see that as a problem? RUSS TEDRAKE: I think it's a huge problem. But I still see a role for these kinds of competitions as benchmarks. And I wouldn't do another one today. I mean, for me it was the right time to sort of see how far our theory had gotten, try it on a much more complicated robot, benchmark where we are, get some new ideas going forward. It was perfect for me. But you can't set a research agenda that way. JOHN LEONARD: And they're dangerous for students. So one of our strongest students never got his PhD, because his wife was in a PhD program in biology and he did the DARPA challenge. And she finished her thesis. And he said, I don't want to live alone on the east coast while she starts her faculty position in California. So I'm out of here. And that's the sort of thing. STEFANIE TELLEX: I kind of made different decisions about that over my career. So when I was a post-doc at MIT, I really, really, really worked to avoid soldering time. I was fortunate. I kind of walked around-- there were all these great robotic systems. And I would bolt language on, and get one paper. And bolt language on another way and get another paper. And to get a faculty position, you have to have this focused research agenda. And so I was focused on that. And it worked. I think it was a very productive time for me. But I really valued the past two years at Brown, where there aren't as many other roboticists around. So I've really been forced to broaden myself as a roboticist and spend a lot more time soldering, making this system for pick and place on Baxter. The first year I didn't hack at all, and the second year I started hacking on that system, the one that was doing the grasping with my student. It was the best decision I ever made. I learned so much about the abstractions. Because the problems that we needed to solve at the beginning, before I started hacking, I just didn't understand. The problems I thought we needed to solve were not the problems that we actually needed to solve to make the robot do something useful. And I don't think there's any way we could have gotten to that knowledge without hacking and trying to build it. GIORGIO METTA: In our case, we've been lucky, in a sense, that we had resources in terms of engineers that could do the soldering. So at the moment we still have about 25 people that are just doing the soldering. It's just a large number. Yeah. PATRICK WINSTON: That would look like a battalion, something like that. JOHN LEONARD: Can I say something more generally? So there are a lot of claims in the media and sort of hyped fears about robots that take over the world, or sort of very strong AI. And sometimes they point to Moore's law as evidence of great progress. But I would say that in robotics we're lacking the high-performance commodity robot hardware that would let us make tremendous progress. And so things like Baxter are great, because they're cheap and they're safe, and they're a step in that direction. But I think we're going to look back 20 years from now and say, how did we make any progress with the robots we had at the time? Like, we really need better robots that just get massively out there in the labs. RUSS TEDRAKE: But-- TONY PRESCOTT: I was going to echo that, because I think robotics is massively interdisciplinary.
And here you've maybe got people slanted slightly more towards control. What we're trying to do in Sheffield Robotics is actually bring in more of the other disciplines in engineering, but also science, social science. Everybody has a potential contribution to make. Certainly electronic engineering, mechanical engineering. Soft robotics, I think, depends very much on new materials, materials science. And then these things have different control challenges. But sometimes the control problem is really simplified if you have the right material substrates. So if you can solve Giorgio's problem of having a powerful actuator, then his problem of building iCub is much simplified. So I think we have to think of robotics as this large, multi-disciplinary enterprise. And if we're going to build robots that are useful, you have to pull in all this expertise. And we're interested in pulling the expertise in from social science as well. Because I think one of the major problems that we will face in AI and in robotics is a kind of backlash, which is already happening. Do we really want these machines? And how are they going to change the world? Understanding what the impacts will be and trying to build in safeguards against the negative impacts is something we should work on. PATRICK WINSTON: But Giorgio had on one of his slides that one of the reasons for doing all this was fun. And I wonder to what degree that is the motivation? Because all of you talked about how difficult the problems are, and some of them, like the ones you talked about, John-- watching that policeman say, go ahead through the red light-- those seem not insurmountable, but very tough, and sound like they would take five decades. So is the motivation largely that it's fun? RUSS TEDRAKE: That's a big part of it. I mean, we've done some work on unsteady aerodynamics and the like, too, and I like-- so we made robotic birds. And I tried to make robotic birds land on a perch. And then we had a small side project where we tried to show that the exact same technology could help a wind turbine be more efficient. PATRICK WINSTON: Yeah. RUSS TEDRAKE: And that's the important problem. I could have easily started off and done some of the same work by saying I was going to make wind turbines more efficient. I was going to study pitch control. I'd be very serious about that. But I did it the other way around. I wanted to try to make a robot bird. And I think the win-- not only do I get excited going in, trying to make a bird fly for the first time instead of getting 2% more efficiency on a wind turbine. I mean, I go in more excited, but I also-- I get to recruit the very best students in the world because of it. There are just so many good reasons to do that. Sometimes it makes me feel a little shallow, because the wind turbine's way more important than a robotic bird. But that is-- the fun is the choice. PATRICK WINSTON: What about it, Giorgio? Do you do it-- you have a huge group there. Somebody must be paying for all those people. Are they paying in expectation of applications in the near term? GIORGIO METTA: Sorry? PATRICK WINSTON: You have a huge group of people. GIORGIO METTA: Yeah. I mean, the group is mainly funded internally by IIT, which is public funding for large groups, basically. And actually, the robotics program at IIT is even larger-- on the iCub there are four PIs working, plus collaborations with other people, like with the IIT-MIT group also. But the overall robotics program at IIT is about 250 people, I would say.
So that's certainly also part of the reason why we've been able to go for a complicated platform. There was one robot that actually participated in the DARPA robotics challenge. There are people doing quadrupeds and people doing robotics for rehabilitation. So there are various things. PATRICK WINSTON: So there must be princes and princesses of science back there somewhere who view this as a long-term investment that will have some-- GIORGIO METTA: It was in the scientific program of the Institute to invest in robotics. And one day they may look at the results and see whether we've done a good job, or decide to fire us all, whatever. Hey, that might be the case. And I think it was-- IIT started in 2006, and the scientific program included robotics. And with all the hype about robotics that started in recent years-- Google acquiring companies, this and that-- I think in hindsight it has been a good choice to be in robotics at that time. Just by sheer luck, probably. RUSS TEDRAKE: To be clear, I think we're having fun but solving all the right problems while-- I think we just sort of-- yeah. We lucked out, maybe, a little bit. But we found a way to have fun and solve the right problems. So I don't feel that we're-- GIORGIO METTA: I think it's a combination of fun and the challenge. So not solving trivial things just because it's fun, but a combination of the two-- seeing something as an unsolved problem. STEFANIE TELLEX: So I try really hard to only work on things that are fun and to spend as little time as possible on things that are not fun. And I don't think of it as a shallow thing. I think of it as a kind of resource optimization thing, because I'm about 1,000 times more productive when I'm having fun than when I'm not having fun. So even if it was more serious or something, I would get so much less done that it's just not worth it. It's better to do the fun thing and work the long hours, because it's fun. So for me it's just-- it's still obviously the right thing, because so much more gets done that way, for me. PATRICK WINSTON: Well, to put another twist on this, if you were a DARPA program manager, what would you do for the next round of progress in robotics? Do you have a sense of what ought to be next? Or maybe what the flaws in previous programs have been? STEFANIE TELLEX: So we've been talking to a DARPA program manager about what they should do next. And we got a seedling for a program to think about planning in really large state-action spaces to enable-- in the sort of middle part of my talk, we were talking about the dime problem, right? So we wanted a planner that could find actions like picking up a dime-- small-scale actions-- but also large-scale things like unload the truck or clean up the warehouse. Because we thought that is what's needed to interpret natural language commands and interact with a person at their level of abstraction. So we have a seedling to work on that. JOHN LEONARD: So if I could clone myself-- say I made four or five of my selves-- one of them I would assign, if I were a DARPA program manager, to do Google for the physical world. So think about having an object-based understanding of the things and people in the environment, and places, and being able to do the equivalent of internet search-- physical world search-- combining perception and then being able to go get objects. So the physical-- like, wget for the physical world. That's what I would like to do. TONY PRESCOTT: In the UK, I think-- so I don't know about DARPA.
But the government made it one of their eight great technologies a few years ago-- robotics and autonomous systems. And looking again now at the priorities, at what the disruptive technologies are, robotics, again, is coming out as one of the things that they think is important. So in terms of potential economic and societal impact, I think it's huge. And so if US funding agencies aren't doing it-- PATRICK WINSTON: What do you see those applications as being? TONY PRESCOTT: I think-- well, the big one that interests me is assistive technology. In Europe, Japan, I think the US, we're faced with aging-society issues. And I think assistive robotics in all sorts of ways is going to be-- PATRICK WINSTON: So you mean for home health care type of applications? TONY PRESCOTT: Home health care-- prosthetics is already a massive growth area. But robots-- I mean, my generation-- I've looked at the statistics, and the number of people that are going to be in the age group 80-plus is going to be 50% higher when I reach that age. And it's a huge burden on younger people to care for us. So I think independence in old age-- in my old age I would love to be supported by technology. You can do what you like with a computer, but it can't physically help somebody. And that's where robots are different. So that would be one of the things that excites me, and one of the reasons I'm interested in applications. I'm driven, I think, by the excitement of the research and building stuff. But I'm also motivated by the potential benefits of the applications we can make. PATRICK WINSTON: I suppose if they're good enough, we won't need dishwashers, because they can do the dishes themselves. OK. So now we have a question from the audience, which, if I may paraphrase-- I know you think of these all the time, Tony-- have there been examples where work on robotics in your respective activities has shed new light on a biological problem, or inspired a biological inquiry that wouldn't have happened without the kind of stuff you do? RUSS TEDRAKE: I started off more as a biologist, I guess. I was in a computational neuroscience lab with Sebastian Seung. I tried to study a lot about how the brain works, how the motor system works, in the hopes that it would help me make better robots. PATRICK WINSTON: Oh, maybe pause there. Did it? RUSS TEDRAKE: Yeah, it didn't. So I don't use that stuff right now. I mean, maybe one day again. But our hardware is very different-- our computational hardware is very different right now. I think there's sort of a race to understand intelligence, and maybe we'll converge again someday. But the things I write down for the robots today don't look anything like what I think the brain does-- what I was learning about the brain back then. But that doesn't mean there's not tons of cross-pollination. So we have a great project with a biologist at Harvard, Andy Biewener. Andy has been studying maneuvering flight in birds. He's instrumenting birds flying through dense obstacles. We're trying to make UAVs fly through dense obstacles. We're exchanging capabilities and ideas and going back and forth. Just the algorithms that we have written help him understand what birds are doing, and vice versa. So there's tons of exchange. But the code that I write to power the robots today, I think, is not quite what the brain is doing, and nor should it be. PATRICK WINSTON: Any other thoughts on that?
GIORGIO METTA: Well, we have experiments that I meant to actually present today, where we've been working with neuroscientists on trying to bring some of the principles from neuroscience into the robot's construction-- let's say, not the physical robot, but the software. And I always find it difficult to find the level of abstraction that actually takes something from neuroscience and manages to show something important for computation. I think I only have one example, or maybe two overall. And it always happens not by copying the brain structure in detail, but just by taking an idea of what information may be relevant for a certain task and trying to figure out solutions that use that information. In particular, a couple of things we've done had to do with the involvement of motor control information in perception. And that's something that sort of paid off, at least in the smaller experiments. Still, we can't compare with full-blown systems. Like, we've done experiments in speech perception that showed we could outperform systems that don't use motor information, but in limited settings. We don't know what happens if we build the full speech recognition system-- whether we'd be better or worse than existing commercial systems. So it's still a long way to actually show that we managed to get something from the biological counterpart. Although maybe for the neuroscientists this explains something. Because where they didn't have a specific theory, at least we showed the advantages of that particular solution that's being used by the brain. TONY PRESCOTT: So I think there's-- we tend to forget, in our history, where our ideas came from. So for instance, reinforcement learning-- Demis Hassabis explained last night how he's using this to play Atari computer games in this amazing system he's developing. If you go back in the history of reinforcement learning, the key idea there came from two psychologists, Rescorla and Wagner, developing a theory of classical conditioning. And then that got picked up in machine learning in the 1980s, and it got really developed and hugely accelerated. But then there was crossover back into neuroscience with dopamine theory and so on. And ideas about hierarchical reinforcement learning have been developed that are partly brain-inspired. So I think there is crossover, and sometimes we may lose track of how much crossover there is. PATRICK WINSTON: I have another comment from the audience-- I see we are under some pressure not to drone on for the rest of the evening. The comment is, I think, relevant to the last topic I wanted to bring up, which is the question of ethics in all of this. And the comment is, why should we make robots that are good at operating, doing things in the household, taking care of the elderly and so on, when the rest of AI is going hell-bent to put a lot of people out of work-- people who could perhaps use those jobs? But in any event, there's been a lot of concern, perhaps spawned by some of the films like Ex Machina and so on, that robots will take over. And I don't think they're going to take over in that sense very soon. But do you see-- do you worry-- do you think about any dangers of the kinds of technology you're working on, in terms of economic dislocation or battlefield robots or anything of that sort that might come about as a consequence of what you do? RUSS TEDRAKE: I think it's inevitable. I think we shouldn't fear it, but we have to be conscious of it.
So I mean, would you go back to the 1980s and avoid the invention of the personal computer because it was going to change the way people had to do work? Of course you wouldn't. But at the same time, that changed the way people had to do work. And it was painful for a big portion of the population, but ultimately it was good for society. I think robots will have the same sort of effect. It's going to raise the bar on what people are capable of doing. It's going to raise the bar on what people need to be successful in their jobs. And it might be painful, but I think it's super important for society to keep moving on it. PATRICK WINSTON: Why, again, is it super important? RUSS TEDRAKE: Because it's going to advance what we're capable of as a society. It's going to make us ultimately more productive. PATRICK WINSTON: Other thoughts? TONY PRESCOTT: I agree. I mean, I think the people that are worrying about jobs being taken by robots aren't the people that want to do those jobs. Because most of the jobs are ones that it's very hard to get anyone to do. They're low-paid, they're unpleasant. And we're automating the dull and dreary aspects of human existence. And that gives the opportunity for people to have more fulfilling lives. Now, the problem isn't that we're doing this great work to get robots or machines to do these things for us. It's that, as a society, we're not thinking about how we adjust to that-- how we make sure people will have fulfilling lives and will be supported materially to enjoy that prosperity. So I think it's disruptive in many ways, and it's going to be disruptive politically. And we're going to have to adapt. Because if you're not working, then you have to be supported to enjoy your life. And maybe that means a change in the political system. So those are questions perhaps not for us. But I think, as the technologists, we have to be prepared to admit that what we're working on are really disruptive systems, and they are going to have these large impacts. And people are waking up to that. And if we wave our hands and say, don't worry, I think we're not going to be taken seriously. PATRICK WINSTON: Other thoughts? JOHN LEONARD: I see how these are really important questions. And I have mixed emotions. I'm really torn. I came from a family that was affected by unemployment in the 1970s. So I feel like I'm very sympathetic to the potential for losing jobs. At CSAIL we've had this wonderful discussion with some economists at MIT the last few years-- Frank Levy, David Autor, Erik Brynjolfsson, Andy McAfee-- and I've learned a lot from them. And I think that they vary in their views. I am more along the lines of someone like David Autor, an economist who thinks that we shouldn't fear too rapid a replacement by robots. If you look at the data, the things that are hard for robots are still hard. But on the other hand, I think, longer term, we do have to be mindful, as a society, that, as Russ said, things like this are going to happen. For the short-term introduction, look at, for example, Kiva and how they've changed the way a warehouse works. I think replacing humans completely with robots-- say, for gardening or agriculture-- those are really hard things to do, because the problems are so hard. But if you rethink the task to have humans and robots working together, Kiva's a good example of how you actually can change things.
And so that's where I think the short term is going to come from-- humans and robots working together. That's why I think HRI is such an important topic. PATRICK WINSTON: Well, I don't know if you're running for president, but be that as it may, do any of you have a one-minute closing statement you'd like to make? JOHN LEONARD: Well, I'll go to sort of the deep learning thing. I think in robotics we have, potentially, a coming divide between the folks that believe more in data-driven learning methods and the folks that believe more in models. And I'm a believer more on the model-based side-- that we don't have enough data and enough systems. But I do fear that we could be in a situation where, for certain classes of problems, he or she who has the data may win-- if Google or Facebook have just such massive amounts of data for certain problems that academics can't compete. So I do feel there's a place for the professor and the seven grad students and a couple of post-docs. But you do have to be careful in terms of problem selection, that you're not going right up against one of these data-machine companies. RUSS TEDRAKE: I was going to say that if I were looking at humans right now and trying to inform the robots, I wouldn't look at center-out reaching movements or nominal walking or things like this. I'd be pushing for the corner cases. I'd be trying to really understand the performance of biological intelligence in the screw cases-- the cases where they didn't have a lot of prior data, the once-in-a-lifetime experiences. How did natural intelligence respond? That's, I think, a grand challenge for us on the computational intelligence side. And maybe there's a lot to learn. PATRICK WINSTON: And the grand challenge for me, to conclude all this, has to do with what it would really take to make a robot humanoid. And I've been thinking a lot about that recently in connection with self-awareness-- understanding the story, having the robot understand the story of what's going on throughout the day, having it be able to use previous experiences to guide its future experiences, and so on. So there's a lot to be done, that's for sure. And I'm sure we'll be working together as time goes on. Now I'd just like to thank the panelists and conclude the evening. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_52_Tomer_Ullman_Church_Programming_Language_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TOMER ULLMAN: So, so far, we've talked about just examples of running things forward. I hope I've given you some examples of different procedures that you can run forward to get some interesting stuff, whether it's a mixture of Gaussians, whether it's this mixture of a Gaussian plus a uniform, whether it's just flipping a coin. But the question is, OK, I've written down my forward model-- and hopefully you saw that, even if it was a little bit broken, even if you didn't get the full details, it wasn't that hard to write it down, right? Someone could say, listen, I think the way the bombing works is this. You're going to put a Gaussian. You're going to put another Gaussian maybe, or maybe you're going to put three. I don't know how many. And you can write that down. And then you say, OK, I actually want to do inference on that. And that's when it becomes a little bit painful to do. And if only there were a way of running your model forward and, having written the forward direction, doing inference. And it looks like we're talking about something completely different, but actually, it's not. We're basically going to run our models, but we're going to run our models in a way that's going to do inference. So let's see. How would we possibly do that? The basic syntax for any sort of query-- any sort of inference-- in Church is the following procedure. You start out by saying query, where query is not itself a command. It's just that there are all sorts of queries. There's rejection-query. There's mh-query-- Metropolis Hastings. There's explicit enumeration. But the point is, you would write down that particular query, then you would write down the generative model. This is a list of things-- the way that you think the world works. So here, for example, you would put in the London bombing example. You would put in a list of things like, I don't know, it's either uniform or not uniform, it's either Gaussian or not Gaussian. You're going to put in some uncertainty, priors, and things like that. Once you finish defining your forward model of how you think the world works, you're going to ask it a particular thing. The penultimate statement that you're going to give it is what we want to know. For example, in this particular case, suppose that, before I started, I said, I don't know if the bombing is targeted or not. I'm 50/50 either way. I don't have to be, but let's say I'm 50/50 either way. So I say, I'm going to flip a coin. It's either targeted or it's not targeted. And that's going to come up either true or false. And you're basically going to query on that. You want to say, did the coin come up true or false, given the data? So the last thing-- the ultimate statement that you're going to write-- is basically the conditional statement. And the conditional statement is the thing that has to evaluate as true. The usual thing that we would write down there-- I'll give you some examples of that-- is something like: given that the observed data matches the sampled data from my model.
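To make that shape concrete, here is roughly what the skeleton looks like. This is schematic rather than runnable, and the helper names sample-bombing-sites and observed-sites are hypothetical stand-ins for the forward model and the data, not built-in Church calls:

(query
   ;; generative model: a sequence of defines saying how we think the world works
   (define targeted? (flip 0.5))                    ; 50/50 prior: targeted or random
   (define sites (sample-bombing-sites targeted?))  ; hypothetical forward model
   ;; what we want to know (the penultimate statement)
   targeted?
   ;; what we know (the ultimate statement): the condition that must evaluate to true
   (equal? sites observed-sites))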
So if you want to do something like the probability of a particular hypothesis given the data, this is the way that you would say, this is my data. This is what I know. And the way that it would know is, you're sort of running the program, and you're constraining it to give you a sample that matches the actual thing that you see. In the London bombing example, what you would do is write something like: query, a bunch of defines-- targeted bombing, random bombing, things like that-- I want to know, is it targeted or random? How did the coin fall? And what you're going to do is say, listen, run this model forward under the following condition. If I just ran it forward, I would either get targeted or not. It would be 50/50-- under the condition that whatever this model samples has to match the actual data. So I'm going to run it forward, but the thing that needs to evaluate as true is that the samples I got from the model are equal to the actual data that I got. Now, once you define that particular thing, what you've done is change the probability distribution that the generative model describes. Remember how we talked earlier-- I was sort of trying to hammer it home-- that anything that you write down in Church is actually a probability distribution. You write down the program and you run it an infinite number of times and you get some distribution. Your generative model describes a particular distribution. If you condition that model on something, you get a different distribution. And that different distribution is now what you're going to sample from. You're going to sample from the posterior. You have some prior, you condition it on some data, and you're going to sample from the posterior. And sampling from the posterior can be something like-- and we'll give some examples-- I know how the world works in terms of objects, and I know how light works, and I know how vision works. I don't know what the particular objects in this world are. That's what I want to know. I condition on the retinal display being equal to something. And now, my posterior probability distribution is going to basically sample from, say, your face or these objects or these chairs. The same thing could work, for example, if you're trying to query a sentence-- you're trying to parse a sentence from sound-- or you're trying to predict how the next step in a physics engine is going to work, or many, many, many different other things that you can find in probmods.org. So like I said, the "what we know" is the condition. And if you set the condition to true, that's just sampling from the generative model, because it's always going to evaluate as true. Now, how could you possibly implement this sort of magical procedure? How could you take some probability distribution and change it into a different probability distribution that does what you want it to do? There are many, many different ways of doing that. But the easiest way of doing that is something called rejection query. How many of you know about rejection sampling? How many don't know about rejection sampling? OK. The way rejection sampling works is that I have some sort of distribution that I'm trying to sample from. And suppose that it's really hard to sample from that distribution exactly. So let's say that my distribution is this circle. And for whatever reason, it's really, really hard to sample from that circle.
I don't want to try to define the probability distribution that describes this circle. It's really hard to sample from it. This one is trivial, but there are probability distributions that are really hard to sample from. What do you do? You can construct a really simple distribution that you can sample from. Let's say that it's really, really simple for me to sample from a uniform square that encompasses the circle. So now, I have some probability distribution. There's a uniform distribution over the square that I can sample from. What does it mean that I can sample from it? It means each time I run the procedure, I get some point in the square. But I don't want the points in the square. I want only points from the circle. So what I would do is sample from the square, and each time a point falls outside the circle, I'm going to say, throw that out. That's called rejection sampling. Because you sample from some sort of procedure that you know how to sample from, then you check that sample. And if that sample didn't meet your desiderata, you throw it away. And what you're left with is the circle-- the distribution that you're trying to get. So what is the simple thing and what is the hard thing in what we're describing so far? The simple thing is the generative model. It's relatively easy to sample from the generative model. We just wrote it down, so we know how to sample from it. But we're looking for something else. We're looking for some sort of setting of the program that would generate the data that I saw, not just the generative model that I wrote. The way that we would do that, in rejection query, is we would sample from the generative model. We would check, does that fit what we know? Suppose I just sampled something. And then I check, does that satisfy the condition that I want? So let's say I have a particular world. It's a sort of silly world, but let's just make sure it works, given last time. OK. So what I'm going to do is describe a world in which there are Legolas, Gimli, and Arwen. Anyone get the Lord of the Rings references or something like that? OK. Each one of them is going to take out a particular number of orcs. And let's say my generative model is that I don't know how many each one of them took out. Let's say that they each take out anything between zero and 20. They're having a brawl. Each one of them is going to take out some number of orcs. So we're going to define the number of orcs that Legolas took out as some random integer-- 20. OK? Gimli's the same. Arwen's the same. And we're going to also define the total number of orcs that they took out as just adding up each one of these things. So we're going to look at the pile of orcs that they took out in the end. That's my generative model. That's it. What I'm going to wonder about is, how many orcs did Gimli take out? Not knowing anything, how many orcs did Gimli take out? Should we just switch to "kill" or something? I feel bad for the orcs. But OK. How many orcs did Gimli take out? Well, we don't know. We just said we don't know. It's a random integer between zero and 20. It's anyone's guess. If I just ran this model forward, it would give me any number between zero and 20. If I ran it 1,000 times, I would get a uniform distribution over zero to 20. Now, I'm going to give you a condition. It's a simple condition. The total number of orcs that they all took out is greater than 45. Altogether, Gimli, Arwen, and Legolas took out more than 45 orcs.
Now, how many orcs do you think Gimli took out? The point is that you would somehow shift your distribution. This is a very simple problem-- you could probably write it down on a notepad. But you're trying to get the posterior on the number of orcs that Gimli took out, given-- conditioned on-- the fact that all of them together took out more than 45. Is this making sense? OK. How would I write down that as a rejection query, without using any syntax that you haven't seen already? Without using anything like query yet, I'm just going to use, basically, recursion to write down a rejection query for that. And later on, you can use rejection-query. But what I would do is just say, here's a procedure that's going to give me back the number of orcs that Gimli took out, conditioned on everyone taking out more than 45. So I write down a particular generative model. Like, I know that Legolas took out somewhere between zero and 20, Gimli took out somewhere between zero and 20, Arwen took out somewhere between zero and 20, and together they took out the total number of orcs. Now I say this: if the total number of orcs is greater than 45, that's a good sample. So I'm going to go through this. I'm going to sample from my program. I'm going to get, Gimli did-- I don't know-- 16. Legolas did 10. And now, say, Arwen did 20 or something like that. So now we got to 46. Is that a good world? Yes, we're over 45. Fine. Give me back whatever it was for Gimli-- I don't even remember what it was. Give me that back. Suppose it didn't add up to more than 45. Try again. This is basically the circle and square example from before. The if statement is telling us: sample randomly from the world. Sample from the square. Sample from the generative model. If the thing that you got matches the condition that you want, give me back the sample. Give me back that answer. If it didn't, try again. And now, if we do this, and we repeat this procedure 1,000 times, then it's no longer a uniform distribution. Below five or six or whatever, it's going to be zero. Because if he took out zero, they're never going to get to 45, right? And it's probably going to be sort of skewed towards the high end. So that's the posterior distribution on how many orcs Gimli took out, conditioned on all of them taking out more than 45. That's amazing, you guys. You've just understood rejection query. You've just written down, in a few very simple lines of code, sampling with rejection. And in fact, you can define all of conditioning using this thing. You don't have to get fancy with Metropolis Hastings and things like that if you're just trying to prove things. If you're into computer science and things like that, you can just define conditioning using this. You're saying, I have some probability distribution. I condition it. I get a different probability distribution-- the posterior. How well behaved is it? Can I write it down? Things like that, you can prove using something like this construction. So you can do that if you're into theoretical computer science. But why shouldn't you use that in practice? What is bad about rejection query? Does anyone know? Can you guess? Sorry? AUDIENCE: It's costly. TOMER ULLMAN: Costly in what sense? AUDIENCE: [INAUDIBLE] TOMER ULLMAN: Right. Exactly. Depending on the condition, it might be a very, very bad idea. Here, I could sort of do the condition, because when I ran the model forward, sometimes I got over 45.
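Written out in Church, the recursive version he just walked through is only a few lines. This is a sketch assuming webchurch's random-integer, repeat, and hist; note that (random-integer n) draws from 0 to n-1, so 21 gives the zero-to-20 range:

(define (take-sample)
  ;; generative model: each fighter takes out between 0 and 20 orcs
  (define legolas (random-integer 21))
  (define gimli (random-integer 21))
  (define arwen (random-integer 21))
  (define total (+ legolas gimli arwen))
  ;; rejection step: keep the sample only if it satisfies the condition
  (if (> total 45)
      gimli             ; good world-- return the thing we queried
      (take-sample)))   ; bad world-- throw it out and try again

(hist (repeat 1000 take-sample) "orcs Gimli took out, given total > 45")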
So yeah, why is rejection query a particularly bad idea here? Because your condition might be really, really, really hard to satisfy. So if, for example, I change this to 59, what will happen now? I think, because of the way I wrote this, it will never reach that. But thanks for that point. But let's change this to this. Don't run this, by the way, because it will never stop-- or it will take a long time. So now, it's only going to be fulfilled if each one of them took out 20 orcs, right? They would all need to take out 20 orcs for the total number of orcs to be equal to 60. When is that going to happen? It's going to happen 1 in 20 times 1 in 20 times 1 in 20. You're going to waste a lot of samples on something that's almost never going to happen. And it doesn't even matter that much here. You can easily look at this and sort of say, well, obviously, Gimli took out more than this, because I know how to program and I can figure it out. But oftentimes, you will find that you can't exactly say. You look at some convoluted program and you won't exactly know how it should look, or whether it's easy or whether it's hard. But in that sense, rejection query is probably a bad idea. Another example of why it's a bad idea is something like precision. So let's see. Don't run this, because it'll take forever. Let's do an estimate of pi. And again, I was sort of going to give this as an exercise. But since we don't have a lot of time, here's one thing that you could do with rejection query, or with any sort of query. You could try to estimate pi. Literally using that example that I just did, you could say, OK, sample from some square, only accept the points that are in the circle, and then try to estimate how many samples you got in the circle out of the total number of samples you took. So run 1,000 samples, and see how many of those fall in the circle. And if you run 1,000, you'll get three point something. You can try this-- I set it up as an exercise for you. You can see I sort of set up some of the syntax. If you want to try this out later, please do. If you run 10,000 samples, it's like 3.1. I think if you do 100,000, it'll probably do 3.14-- maybe not that much better. Like, 100,000 samples, seriously, to get 3.14-- we all know it's 3.1415. Sometimes, if you can use math, you should use math. This is, by the way, a reason why you should probably not use sampling in general when you can use math-- we can solve this analytically-- if you're interested in precision and things like that. But suppose you're not interested in that much precision. You can actually get a pretty good estimate from about 10 samples. If you just do 10 samples on this thing, you'll probably hit something like three as an estimate. You can do the histogram for where it will fall. Most of the samples-- like 70% of the samples-- will fall between 2.8 and 3.6. If that's what you care about, then that might be all your vision system cares about, or other things that might require sampling. Then that's fine. And I'm not going to go too much into this, because, like I said, the dream of probabilistic programming is to free you from thinking too much about the sampling. But for those of you that are interested in sampling, that are interested in algorithmic questions and things like that, there's been a lot of research on exactly that. When are we OK with just taking one sample?
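For reference, the pi exercise he set up is the classic Monte Carlo estimate: sample points uniformly from the square, count the fraction that lands inside the unit circle, and multiply by the square's area of 4. A minimal sketch, assuming webchurch's uniform, repeat, and sum:

(define (circle-hit)
  ;; one sample: a uniform point in the square [-1,1] x [-1,1]
  (define x (uniform -1 1))
  (define y (uniform -1 1))
  ;; 1 if the point lands inside the unit circle, 0 otherwise
  (if (< (+ (* x x) (* y y)) 1) 1 0))

;; fraction of hits, times the area of the square, estimates pi
(define n 1000)
(* 4 (/ (sum (repeat n circle-hit)) n))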
We write down some sort of model, and we see how well the model does by taking one sample, 10 samples. What's the precision that you can get? And does that precision match people trying to perform a similar task of estimating that thing? That's, again, like the sampling hypothesis-- not the sampling hypothesis for neurons, the sampling hypothesis for the way people answer questions. They sample one or two or a few points from their generative model-- not that many. And the claim is that you can sort of see that they get better with time-- if you give them more time to think, it looks a bit like a sampling procedure that takes more samples. And it seems like you can get away with quite a lot if you just take 10 samples or 100 samples for pi. And there's this SMBC cartoon that I quite like, which is like, why shouldn't physicists teach geometry? And they're like, well, you know, how do I remember the value of pi? It's quite simple. I look at my fingers and there's five of them, and that's about pi. What else could we do if we don't want to use rejection query, if we don't want to use rejection sampling? Suppose we don't-- we probably don't. We could try to do exhaustive enumeration. If our model is small enough, we can just consider all the possibilities and explicitly score them. The other thing that we could do is something like Metropolis Hastings. How many of you are familiar with Metropolis Hastings? OK. Why don't we raise our hands to the degree that we are familiar with Metropolis Hastings, where here is really familiar, here is not familiar. OK. Metropolis Hastings-- I'm not going to go too much into the details. I'll just be doing it a disservice. But the way to think about it is to say, instead of just sampling at random from the entire space of things-- which could be really, really bad-- I'm going to try and sample from something that I think is likely to be good. And the way I'm going to do that is, I'm going to construct, basically-- what's the best way to explain this? It's to say, I'm at a particular point in the space. I've gotten my sample. I already have it. What should I do now? Rejection sampling just says, well, just sample another one from the generative model and see if that works. That's a bad idea. What you should actually do is try to use the sample that you already have, and how good it is as a sample, to inform your next move. So now, what you're going to do is, you're at a particular point in space, and you're going to move to a different point in space that depends on the point you are at now. So for example, if you're in some two-dimensional space, and you're over here, you're not going to sample from the whole square. You're going to sample from a point next to it, let's say. That's your proposal. You sample according to your proposal distribution. Your proposal distribution tells you where you should sample next. So you're here in the space of all possible programs, or your theory space-- Metropolis Hastings is more general than just programs; it's the space of possible things that you're trying to sample from. You're here. You've got one sample. Now, you're sort of looking around, and you take another sample. You do that according to your proposal distribution. You jump there. And you evaluate this point. And the way that you evaluate it is, you just look: how well does this thing fit the data? And we can get into that-- but there are ways you sort of score your model according to the data.
And you say, well, this one fits the data pretty well. How well does this one fit the data? Not so great. So I should probably move over there. I should move to that point in program space. I'm going to move over here. Now I sample another point. I go over here. How well does this do? Not as great, but I might still accept it. The point is, you're going to accept and reject new points-- new executions of the program-- according to how well they predict the data, according to how well they answer the condition. If the condition is a simple true-false, that just means you have an absolute yes or no. But many of these conditions are going to be something like, well, it's good if it matches. You get some score from the likelihood and the prior. So you can score this new point in program space and either accept or reject it. And this thing of moving around in program space, sampling according to some proposal distribution, accepting or rejecting, and moving around like that, is a lot more efficient in most cases than rejection sampling. And in the limit, if you keep on doing this-- if you keep on walking around, taking samples, accepting or rejecting them depending on how well each new point in program space does-- what you'll end up with are samples from the posterior distribution that you're trying to sample from. And I should say, what is a point in program space? It just means a program that I have completely evaluated. Like, in the case of the London bombing, it would be: I have two Gaussians, and this one's center is here, and this one's center is here. I sort of walk through all the things that I can sample. I've gotten one particular run of the program. And then I try to move to somewhere else. Like, I might change the center of this Gaussian. Or I might say, well, you know what? Actually, there aren't two Gaussians-- let's run it again. Let's run it again. There were actually 10. So the way I change, the way I move around in program space, is to go to a particular point along the tree of the evaluation and say, what if I change that? What would I end up with? I re-sample, and I end up with some other program. I basically say, how good is that? Yes, no-- and I accept or reject it. As I said, I'm doing this a little bit of a disservice. But if you keep that mental image in your head-- of something bouncing around and accepting or rejecting new proposals according to how well they do compared to one another, in a sort of pairwise fashion-- you won't go far wrong. What this also tells you is that it's a little bit important where you start out. So if I start out at this particular point in program space, or in any space, and I look locally, I might accept or reject and things like that, but, actually, the really good stuff is over here. The high probability stuff is over here. But it'll take me a long time to get to that, because I started out here. Does everyone sort of understand what I mean when I say "here"? Suppose you have a square, and you're trying to sample from a probability distribution over the square, and in the corner there's much less probability. But you started in the corner for whatever reason. And now, you're trying to figure out how to get to those good samples in the center, but you can only move locally. You will eventually get to the center. If you run this long enough, you will eventually get to those good high-probability regions. But it depends a lot on where you started out.
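As a bare-bones sketch of the accept-or-reject rule inside that walk-- this is just the Metropolis idea with a symmetric proposal, not Church's actual machinery, and propose and score are hypothetical helpers standing in for the local move and the likelihood-times-prior score:

;; one step of the walk
(define (mh-step current)
  (define proposal (propose current))                 ; local move in program space
  (define ratio (/ (score proposal) (score current))) ; how much better the new point does
  (if (flip (min 1 ratio))                            ; sometimes accept even a worse point
      proposal
      current))

Notice that the walk only ever moves locally from the current point, which is exactly why the starting point matters.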
And that just means that, oftentimes, in Metropolis Hastings and MCMC and things like that, you hear about burn-in, which is just to say, we want to get rid of the initial x samples, because those samples are going to be biased. They're going to depend on where we started out. And the hope is that, after x samples, we're no longer biased. We no longer remember where we started from. We're just sort of sampling around in the space. So the way Metropolis Hastings works-- and this is the backbone of inference in Church-- is, you would write down something like mh-query, then you would write down the number of samples that you want from your posterior distribution. You would write down the lag. The lag is just to say, only keep a sample every x steps. If you want to talk about this, we can talk about it later. It's not particularly interesting. These are just two numbers. And if you make them bigger, you will get more samples, and you will get a better estimate of your posterior distribution. You write down some generative model, you write down what you want to know, and you write down what you actually know. And you do a random walk in the program evaluation space. Like Josh said, what's nice about this is that it's very, very, very, very general. This will work on any program, more or less, defined correctly. You need to make some decisions, like how many samples you want to take, what the lag is, what the burn-in is. You can do all sorts of fanciness. You can do particle filtering. You can run several chains. You can do simulated annealing. You can do lots of different things that I just said and that might not make a lot of sense yet. But the point is that this procedure can be made more or less fancy. One of the problems with it is, it takes a while. Like Josh said, there are a lot of better algorithms. If you know what your representation is, and it's something like a feedforward neural network, you probably shouldn't do Metropolis Hastings on it. There are a lot of very fast things that you could do, like gradient descent, and you don't need to wait around for this thing to happen. Let's see. So I think we have enough time to give you some examples of inference. Let's walk through some coin testing examples, a bit of intuitive physics, and a little bit of social reasoning. So suppose that I took a coin and I flipped it, and it came up heads. What do you think? Is this coin weird? No. It's OK to say no. What if I flipped it and got heads five times in a row? Do people think it's weird? Raise your hand to the degree that it's weird. Is it weird that it's five? If I flipped it 10 times and it came up heads, raise your hands to the degree that it's weird. 15 times in a row, heads? 20 times in a row, heads? OK, we more or less asymptoted somewhere between 10 and 15, which is exactly right. And the point here is something like, we have a particular prior over what we think the weight of the coin is. We're pretty sure that the coin is not biased. We're pretty sure that the coin is supposed to be equally weighted. But then we get more and more evidence, and we sort of figure out that, wait a minute, no, this might be a trick coin. This might be weird. And the point of the first example is to show you the basics of conditioning and inference and things like that using the coin example. So what we would do is, we would take in-- and again, as I said, I'm slightly going to rush this, but I'll still try to explain it-- we're going to define some observed data. Suppose our observed data is that we got five heads in a row.
Is that five? Yes, it's five. And now we're going to define something. We're going to define an inference procedure. We're going to call it samples. The way it's going to work is that it's going to give us 1,000 samples back. We said we're going to write mh-query, the number of samples, and now we're going to define a generative model. We're going to end up with the thing that we're actually interested in, under a certain condition. So what's our model for this simple world? Let's say my prior on this being a fair coin is very high. One means I'm absolutely sure it's a fair coin. Zero means I'm sure it's not a fair coin. And we're going to put in a big prior on it being a fair coin. It's going to be 0.999. And then we're going to basically say, somewhere in the beginning, the way I think the world works is that you're going to pull a new coin off the mint, and you're going to say, is it a fair coin or not? 999 out of 1,000 are fair. One is not. So we're basically saying this. We're going to say, is it a fair coin? And this is going to come up true 999 times out of 1,000, and false one time out of 1,000. Because what we're basically doing here is just flipping a coin-- we're flipping a coin with a bias of this prior. So what we have here is, this flip is going to tell us fair coin or not. It's going to come up without any knowledge, without any data, without seeing anything. Just, you took a coin off the mint. 999 times out of 1,000, you think it's going to be fair, without seeing any data. Now you're going to create a coin. The coin is going to take in some weight. It's this procedure that you can flip and actually get heads or tails. And the way this coin is going to work is that, if it's fair, it's going to have a weight of 0.5. If it's unfair-- and this is a very simple example-- it's going to have a weight of 0.95. So the fair coin comes up heads or tails equally likely. The unfair coin-- the trick coin-- comes up heads almost all the time. And again, you could define a different hypothesis. It doesn't really matter. But the point is, I defined some sort of coin. And now, I define some sort of hypothesized data. Well, the hypothesized data is just, I sample from this coin that I just made. And what I want to know is, is this a fair coin? Yes or no. The last statement is the condition: this sampled data-- this illusory data, this imagined data-- being equal to the observed data. So now, you have some sort of program, and you're trying to figure out, did this come out to be a fair coin or not? When I did this here-- if I didn't condition on anything-- then, 999 times out of 1,000, it should give me back, yes, this is a fair coin. But I've now conditioned on some data. And the data is that it came up heads five times in a row. And if you do a histogram for that, you'll find that it's still very likely to be a fair coin, because the prior is so strong. But now, we can change. We can change the data to add a few more heads here. And I think we're now more or less at 10. And it's starting to be like, well, is it a fair coin-- yes or no? Well, I'm like 60% sure that it's a fair coin now. What if I flipped it-- I don't know-- like 20 times or something like that, and I came up with that? And it's basically saying that it's 100% not a fair coin. This is false. It's basically saying, is it a fair coin? No.
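Putting together the pieces he just described, the whole example looks roughly like this in webchurch style-- a sketch, so the exact details may differ a little from the tutorial's own file:

;; 20 heads in a row
(define observed-data '(h h h h h h h h h h h h h h h h h h h h))

(define samples
  (mh-query 1000 10                        ; 1,000 samples, lag 10
    ;; generative model
    (define fair-prior 0.999)              ; strong prior that a new coin is fair
    (define fair-coin? (flip fair-prior))
    (define (make-coin weight)             ; a coin is a procedure you can flip
      (lambda () (if (flip weight) 'h 't)))
    (define coin (make-coin (if fair-coin? 0.5 0.95)))
    (define sampled-data (repeat (length observed-data) coin))
    ;; what we want to know
    fair-coin?
    ;; what we know
    (equal? sampled-data observed-data)))

(hist samples "is it a fair coin?")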
Even though the prior is strong-- even though, if I just ran my generative model without any conditions, usually it would be a fair coin-- there is no way that I would run my generative model, sample that coin flip 20 times, and have it come up heads all the time, when the alternative is a coin that almost always comes up heads. So now, you can sort of play around with this coin. This is a nice example. And by the way, Josh, some version of this-- not much more complicated than that-- was a Cognition paper a few years ago, where, basically, you gave people different sequences of coins, different sequences of numbers, and you started to see, where does it become weird? What hypothesis do they think is likely? And all that they did was give it a more interesting hypothesis space. Instead of saying it's either a fair coin or a coin that's 95% heads, they gave you a more general hypothesis space. Like, it could be a coin that comes up mostly heads, mostly tails. Maybe it's a coin that does heads, tails, heads, tails, heads, tails, heads, tails. Or you could define some other alternative procedure-- but that, you would change over here. The rest of this would stay more or less the same. That's a very simple way of getting some hypothesis testing. Let's do some very simple, intuitive physics. Let's try something like this. So in Church, you can basically animate physics forward. I guess I hadn't counted on-- when it's a full screen, it sort of does that thing. But what you can do is, you can define, basically, a two-dimensional world where you say, listen, here's a two-dimensional world. It's this big. I'm going to add some shapes. I'm going to put the shapes in random locations. I'm going to set gravity to something. And then I'm going to run it forward. What happens? AUDIENCE: Command minus. TOMER ULLMAN: Sorry? AUDIENCE: Command minus. TOMER ULLMAN: Command minus for running? OK. AUDIENCE: [INAUDIBLE] TOMER ULLMAN: Oh, of course. Thank you. Trivial. Thank you. Is that better, everybody? Yeah. OK. So I have this thing, and I'm going to hit simulate to try and see what happens. So I basically have these things. I guess, in this case, I didn't put any randomness on where they actually are. But you could easily imagine putting some randomness on where these blocks start out, where these blocks are. But the point is, this is just running physics forward. Now, what is that good for? Well, you could, for example, define a tower. A tower is just defining a bunch of blocks, one on top of the other. And now you can run it forward and see what happens. What do you think? Is this going to fall or not? Yes? No? Let's see. And you can simulate that forward. It fell. OK. Very nice. Now, what can we do with that? And as I said, I'm going to zoom through this. What we can define is basically a bunch of towers like this. Each one of them is just saying, the blocks are like that. And all I'm going to do is slightly perturb them and see if they fall down. And I'm going to do that 1,000 times for each tower. And I'm going to do that for a bunch of towers. Some of them are stable. Some of them are not stable. So you can write down some Church code which is basically: this is my world. There's some ground. The ground is just a rectangle. Here's a tower-- a stable tower. The stable tower-- all that means is that I'm creating some blocks in this particular order. Here's an almost stable tower-- it's blocks in this order. Here's an unstable tower-- it's blocks in that order.
And now, what I'm going to do is, I'm going to run this tower many times, and I'm going to count up the number of times that it actually fell. And if you do that, you'll see that the stable tower didn't fall down. This is just saying, like, did it fall down-- false, true? This one didn't fall down any of the time. This one fell down some of the time. This one fell down all the time. It's a toy example. But it's actually a toy example of a very nice and interesting paper that came out very recently and shows something deep about intuitive physics. We can ask how hard it was to implement this thing. This is an implementation of liquid physics in not so much Church as webPPL. And what they were trying to do here is to sort of say, well, this is a bunch of water. Physics is frozen right now. Imagine that this is a big glob of water. This is a cup over here. And this is some barrier. So if the water falls down on the barrier, it's going to go every which way. So let's see. If we run it, one of the questions that we can ask here-- and this has sort of been an example-- what I'm trying to show you here is that this is an active area of research. Even though it's sort of like Church-- it's 2D physics. It's simple. But people have been porting things like liquid physics into Church. That took them a little bit of time. You probably want to talk to people who know what they're doing in that. Save yourself some time and talk to people in Goodman's group. But what they did is, they ported a liquid physics implementation into Church. And they sort of said, OK, now we have some liquid physics and we can try to ask some questions. Like, suppose that this glob is going to fall down. We set it down over here and it's going to fall down. Where should we put this block in order to get as much of the liquid into that cup? That's an interesting question. It shows something about intuitive physics of liquids and things like that, more than just objects. So the way that you would do that is, even in a simple world like this, you would basically say, fine, put this thing somewhere, randomly uniform, conditioned on getting as much water into this thing as possible. And then try to figure out where this block should go. So you start out uniform, condition on as much water as possible. You'll get some posterior distribution of where to place this block. And just to show you what that looks like, let's try to-- so suppose that we actually tried to run this. So in this case, you don't want the block to go there, for example, right? Because most of the water is going to slosh over there. What if we put it over there? It's a little bit better. That's not so great. You could run it many, many different times. At this point, I hope most of you, even if you don't quite know what you're doing, you can see how you would go about writing the program to figure this out. You would write down the physics world, assume most of that is taken care of for you. All you need to do is sort of figure out where to place this block, put some uniform distribution in this area, condition on most of the water landing here, and then just sample, and figure out where this thing should be in the world. And that's pretty cool. So here's another example. That was intuitive physics. Let's move on to intuitive psychology, which Josh was sort of getting at at the end of his lecture. And here's a very, very simple question, which is something like, suppose that you see an agent-- this guy with googly eyes.
Can people see him from way down there? There's a guy with googly eyes. It doesn't matter. It's me. I'm right here. There's a banana over there. There's an apple over here. So there's the banana over there, apple over there, and I start walking over here. And now, someone asks you, why did Tomer go over there? And you say, well, I guess he wanted the banana. And you say, oh, but you don't have access to his goals. How do you know he wanted the banana? And you say, well, because he went to the banana. Well, you're just being circular. How would you actually solve a task like this? It's sort of trivial. And you could solve it through something like cues. You could say something like, well, the thing that you approach is your goal. But another thing that you could do is, you could say, well, I assume that the way Tomer works is that he has goals. I don't know what his goals are. But I assume he has goals. And I assume that he can plan to reach those goals in some sort of semi-efficient manner. And he has some beliefs about the world. And he's going to carry out some sort of planning procedure in order to get to his goals. And if Tomer wanted the banana, the action he should take is to do this. It would be very unlikely for him to do this. If he wanted the apple, he would do that, not that. So you could sort of use this planning procedure to set the knobs on your procedure. Think of it like a generative model. Your generative model is something that goes from goals or utilities and beliefs to something like actions. And you would define the goals. Let's say you don't know what my goals are, so you place some distribution over them. And then you get to see my actions. And you basically try to say, well, in program space, what would be the setting of the goals of Tomer, such that it would have produced the observed actions. And if you write down a model for that, you'll find that if I set the goal for Tomer as banana, I'll get the observed action, which is, he walked towards the banana. Similarly, for belief, if you know something like, you know there are two boxes here. You don't know what's inside them. You know it's either a banana or an apple. And you know that I really love bananas and I hate apples. And you see me walking towards this box. You can infer that, ah, he probably thought that was a banana inside, or he knew there was a banana inside. And again, if you had some sort of planning procedure, you would say, OK, it would make sense for me to set his belief to be banana, because the outcome of that, if I run the model forward with those settings, would be for him to walk in that direction. Now let me show you just one example of what that sort of model would look like. So this would be under intuitive psychology. Those of you who are interested in sort of inference over inference and, how do agents reason about other agents, or goal inference or things like that, you might want to take a look at this section. And this is sort of super simple. There's no probabilities, exactly, in the sense of, it's going to be either this goal or that goal. It's obviously something that can be modified if you want to. Does everyone more or less see what's going on here? Let me make that a bit bigger. What I tried to write down is a model in which someone went for an apple, and you're trying to figure out, why did he go for that? Yes. Sorry. It really should be over here. Now, let me start out, actually, with something like planning. Let's write down the forward model before we do inference. 
Before we do inference, let's write down how we think the world works. The thing that I said-- the way the world works is that Tomer has some goals and some beliefs. And given his goals, he'll take some action to achieve his goals. Let's write down that part. That's the forward part. If we can write down the forward part, the inference part comes for free. We just put that in an mh-query and say, what's the goal that made the observed thing happen? So what we would do is, we would write down something like, what action should Tomer take? Choose an action. It's a procedure. It's a procedure that takes in a particular goal. Here, it's a particular condition that I can satisfy. But it could be a utility. It could be anything. But it's, what action should I take, given a particular goal, given how I think the world works? That's a transition function. If I take this action, what will happen? I need to know that if I go left, from your perspective, I'll get to the banana. That's the transition function for the world. And I need some initial state. And now, I sample some action. I do an action at random. I either go left or go right. I sample it from some action prior. Suppose it's completely uniform. Define an action. It's a simple action prior. And suppose my prior is go left, go right, with equal probability. Everyone with me so far? We're trying to get a procedure that will give us an action. What action should I take? Imagine you took an action. It doesn't matter which one. Now, what action did you end up with conditioned on that action getting you to your goal? This is a rejection query. I'm trying to sample an action conditioned on that action getting me to my goal. So let's say I sample the action. I hypothesize that I go that way. Did I satisfy my goal? No. I ended up with the apple. Do it again. Run it again. Run it again. Now I sample the action "go here." Did I end up with a banana? Yes. OK. So what action did I take? I went, from your perspective, left. Return that. We've just written down a procedure for planning. And it can be made much more complex than that in a few short steps. By complex, I don't mean that it's hard for you to follow. I mean that it can take in multiple worlds, multiple steps, utilities, probabilities, things like that. And it will spit out a sequence of actions for you to go from x to y to get to your goal. And it's written as planning as inference. Now, there are many different types of planning procedures. You could write down Markov decision processes. You could write down rapidly-exploring random trees. I'm just throwing out names there for those of you who are interested in these things. There's lots of ways of doing planning. This is one particular way of doing planning. You could have done it many different ways. But the point is that we assume you can even sort of wrap this up in something. Like I, as the observer-- I as you-- don't need to know exactly how Tomer works. I just need to know that there is some procedure such that if I put into it a goal, and somehow, how the world works, it will spit out some rational action. It's preferable that I have some idea of how it works. It doesn't need to be the right one. It doesn't need to be the one that I actually use. But you need to have some sort of sense that I am planning somehow. This is one way to plan. And now, this is just showing you. I put some uniform prior on the action prior. I either go left or right. The transition function of the world is such that if you go left, you get an apple. If you go right, you get a banana. So that's from my perspective, I guess. If you do anything else, you get nothing.
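(A minimal Python stand-in for that procedure-- the toy world, the names, and the rejection sampler here are my own sketch, not the probmods Church code:)

```python
import random

def transition(state, action):
    """Toy world: from 'start', left gets the apple, right the banana."""
    return {'left': 'apple', 'right': 'banana'}[action]

def choose_action(goal_satisfied, transition, state):
    """Planning as a rejection query: sample an action from the
    uniform action prior, condition on it achieving the goal."""
    while True:
        action = random.choice(['left', 'right'])
        if goal_satisfied(transition(state, action)):
            return action

# Forward model: given the goal 'banana', the rational action pops out.
print(choose_action(lambda s: s == 'banana', transition, 'start'))  # 'right'

def infer_goal(observed_action, n=1000):
    """Wrap the planner in inference: uniform prior on goals, run the
    forward model, condition on the observed action."""
    counts = {'apple': 0, 'banana': 0}
    for _ in range(n):
        goal = random.choice(['apple', 'banana'])          # goal prior
        act = choose_action(lambda s: s == goal, transition, 'start')
        if act == observed_action:                         # condition
            counts[goal] += 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

print(infer_goal('right'))  # ~{'apple': 0.0, 'banana': 1.0}
```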
And then you sort of just say, my goal is, did I get to the apple, let's say, or did I get to the banana? I put that into the choose action and I'll end up going left. Because my goal was to get to the apple. The apple was on the left. I'm going to choose an action in order to go to the left. This whole thing is just to show you that, in fact, this works. If you sample it forward, it will give you the right action. You can now wrap up this whole thing in something that does goal inference, that doesn't know that my goal was this, that puts a uniform prior on this thing, and then runs forward many, many different samples and comes to the conclusion that it must have been the apple, because he went left. And again, this example is fully written out for you over here, as well as the belief inference. This is an example of implicature. How many of you know what implicature means, like Gricean implicature? It's the sort of thing where someone tells me, hey, are you going to the party tonight? And I say, I'm washing my hair. Or you say something like, how good of a lecturer was John? And I say, well, he was speaking English. I'm not exactly telling you he was a bad lecturer. But if he was a good one, I would say it. The fact that I didn't say-- the fact that I chose to say something else-- implies that he probably wasn't a good lecturer. And this sort of happens a lot. And the way it works is that-- and this happens a lot in language games, social games, reasoning about reasoning-- I'm the speaker. You're the listener. I have some model of you. You have some model of me. I know that you know that I would have said he was a good lecturer. I know that you know that. If I wanted to, I would have said that. And I'm not. So I know that you know that I know that. And it works out such that you realize that he's not a good lecturer. And that sounds sort of convoluted. But it's actually not that bad. And I want to show you an example of how that works. And this particular example is based on the "some not all" example. So this is the sort of thing like, I'm a TA in a class and someone asks me, how did the students do on the test? And I say, some of the students passed the test. Do you think I mean, all the students passed the test? No. Now, why not? Because some, in a sense, also means all. "All the students" is still true if some is something like a logical thing that means greater than zero or greater than one-- whatever it is. It can include all. But if it was all, I would have said all. And if it was one, I would have said one. So people are likely to infer from me saying some, that I probably mean-- if there's 100 students and I say some, they can give you a distribution over what I mean by that. And that distribution depends on the alternatives-- the alternative words I could have used, which they know I didn't. But they know I could have. There's a nice example of scalar implicature and how it would work in probmods. What I want to show you is a slightly different example, which is, again, the London bombing example. But the way it would work is something like this. So here's the background for implicature. I came up with this yesterday. I'm not quite sure it'll work. But we'll see. Imagine that things work like this. The city of London is being bombed. Again, sorry for the slightly dire things.
The city of London is being bombed, and there are three places it could be bombed. Again, it's this uniform square. It could be bombed in the blue part-- anywhere here. If it's bombed there in the blue part, I would say it was bombed outside London. That means outside London. It doesn't include these things. It's just outside London. If it was bombed anywhere in this red square-- so imagine that this square is something like-- I don't know-- zero to two, and over here, it's zero to one. Anywhere in the red square is called London. If a bomb fell there, I would say, a bomb during the blitz dropped on London. If someone asked me, where did the bomb fall, I would say, in London. But that includes this whole thing. It also includes Big Ben. Big Ben is in London. Maybe some of you can see what I'm getting at. So if it fell on Big Ben, I could say it fell on Big Ben. I can say it fell in London, because that's also true. If it fell here, I would say it fell in London. If it fell here, I would say it fell outside London. Now, there's a general, and a staff sergeant walks up to him, and he says, a bomb fell on London during the blitz. And the general says, where did it fall? And he says, it fell in London. And the general says, OK. Then he looks outside his window and he says, good god, it hit Big Ben! And he says, yes, I said it fell in London. That's very weird. We don't expect people to do that. And Gricean implicature says we shouldn't do that. But Grice said it in a way that's like, you should give the maximal amount of helpful information and not hold out other things. How would that fall out of a particular model? Well, the way it would fall out is something like this. And again, I'll show you the code for that. It's something like, there's the speaker. The speaker is the staff sergeant. He could choose one of three words-- outside London, in London, dropped on Big Ben. Let's say Ben, London, outside-- something like that. He could choose one of three. Now, the fact that he decided to say London could include Big Ben. But if it was Ben, he could have also said Ben. He didn't, which implies that it probably fell over here. So the last model that I wanted to show you was exactly that. You sort of say, listen, there's some prior. The bombs sort of fall anywhere. And there's some distance here. And let's say you start out with just random gibberish. You can say either Ben or outside or inside. It doesn't matter. Regardless of what the world actually was, this is just defining what each one of these words mean. To hit London means to be inside that small square I said. To hit Ben means to be inside that smaller square inside London. To hit outside means to hit outside. That's what they mean. It just gives you back a true or false on a particular point in a two-dimensional space. Is it true or is it false that this happened? Now, you have a speaker and a listener model. And the way that the speaker works is that he has a particular state in mind. Like, he's looking at the state of the world. The bomb fell here. And he needs to communicate something. And he's reasoning about the listener to a particular depth. What he's going to do is he's going to choose a word randomly, because our prior is random. Kind of like before, we chose an action at random, and we saw if it worked, he's going to use a word at random. And he's going to choose the word such that it's going to cause the right state in the listener. So he needs a model of the listener. What's the listener? 
The listener is someone who takes in a word and tries to figure out the state. He doesn't know what happened. Where did the bomb fall? Someone gives him a word. And he's trying to figure out the state. So he's drawing a state from the prior. And this prior is, it could be anywhere. It could be anywhere in the square. Where did it actually fall? Well, it fell here, let's say, given that I got this particular word. But this word was generated by a speaker, which I need a model of. So there's this model of the speaker understanding the listener, and a model of the listener understanding the speaker up to a particular depth. And they need to bottom out at some point. And it's not that hard. I mean, it takes some time to wrap your head around it. But it's written about eight lines of code. And that's why I said that Church-- remember that caveat I gave you of, it's not a toy language. It's under development. But it's actually been doing some pretty interesting stuff. This is one of those things. These sort of language games are really hard to write in many other models-- really hard to write. And here, it's kind of trivial. You can sort of see where to play with it. I came up with this example yesterday. And I asked Andreas, which is my go-to guy for these things, and he thought it was interesting. So you would run that. And let's say that the speaker said it hit London. What should the listener understand. Where is this distribution over it? And I've sort of just sampled from it. I've sampled from his distribution over where he thinks it fell. And I just did a few samples, but you can sort of notice the suspicious gap over here. And if I sample 100 points, you'll notice that it's going to be in London-- so between one and one. This might take it a minute. But what you'll end up seeing is that it's probably anywhere in London. If someone said to you, it fell in London, then you infer that it's anywhere in London except Big Ben. Because if it was Big Ben, you would have said Big Ben. It's not perfect. I mean, there are some samples that get there. It's actually some shifting distribution. But yeah. If you took some heat map of this thing, what you would end up with is that there's some sort of suspicious emptiness over here. In this case, there's also a bit of an emptiness over there. But in the limit, you'll get that. So we did plan B, which was to zoom through a few things very quickly. I'm sure you guys didn't fully grok the details. And that's OK. What I wanted to do with plan B was to give you a taste of what is possible and how you would go about writing models. The important things to remember here is that probabilistic programs are great tools for capturing lots of rich structure-- anything from physics to psychology to language games to grammar to vision. Church is a particularly useful language for teaching yourselves about these things. There's a lot of different models that you can play with on probmods.org. You can write down a generative model very easily to describe how you think the world works, and then you can put that in an inference engine and try to figure out what you actually saw. OK. So thank you. [APPLAUSE] |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_31_Lorenzo_Rosasco_Machine_Learning_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at OCW.mit.edu. LORENZO ROSASCO: I'm Lorenzo Rosasco. This is going to be a couple of hours plus of basic machine learning. OK. And I want to emphasize a bit, the word, "basic." Because really I tried to just stick to the essentials, or things that I would think of essentials to just start. Suppose that you have zero knowledge of machine learning and you just want to start from zero. OK. So if you already had classes in machine learning, you might find this a little bit boring or at least kind of rehearsing things that you already know. The idea of looking at machine learning these days is coming from at least two different perspectives. The first one is for those of you, probably most of that are interested to develop intelligent systems in a very broad sense. What happened in the last few years is that there's been a kind of data-driven revolution where systems that are trained rather than programmed start to be the key engines to solve tasks. And here, there are just some pictures that are probably outdated, like robotics. You know, we have Siri on our phone. We hear about self-driving cars. In all these systems, one key engine is providing data to the system to essentially try to learn how to solve the task. And so one idea of this class is to try to see what does it mean to learn? And the moment that you start to use data to solve complex tasks, then there is a natural connection with what today is called data science, which is somewhat a rapid [INAUDIBLE] renovated version of what we used to call just statistics. So basically, we start to have tons of data of all kinds. They are very easy to collect, and we are starving for knowledge and trying to extract information from these data. And as it turns out, many of the techniques that are used to develop intelligent systems are the same very technique that you can use to try to extract relevant information patterns, data, from your data. So what we want to do today is try to see a bit what's in the middle. What is the set of techniques that allows you, indeed, to go from data to knowledge or to acquiring ability to solve tasks. Machine learning is huge these days, and there are tons of possible applications. There has been theory developed in the last 20, 30 years that brought the field to a certain level of maturity from a mathematical point of view. There have been tons and tons and tons of algorithms developed. OK. So in three hours, there is no way I could give you even just a little view of what machine learning is these days. So what I did is pretty much this. I don't know if you've ever done this, but you used to do the mixtape, and you try to pick the songs that you would bring with yourself on a desert island. That's kind of the way I thought about what to put in this one [INAUDIBLE] lights that we're going to show in a minute. So basically, I thought, what are those three, four, five learning algorithms that you should know, OK, if you know nothing about machine learning. And this is more or less at least one part. Of course there are a few songs that stayed out of the compilation, but this is like one selection. OK. 
So as such, we're going to start, as I said-- whoop-- simple. And the idea is that this morning you're going to see a few algorithms. And I picked algorithms that are relatively simple from a computational point of view. So the math level is going to be pretty basic. OK. I think I'm going to use some linear algebra at some point and maybe some calculus, but that's about it. So most of the idea here is to emphasize conceptual ideas, the concepts. And then this afternoon, there are going to be, basically, labs where you sit and you just pick these kinds of algorithms and use them, so you immediately see, what does it mean? OK. So at the end of the day, you should have reasonable knowledge about whatever you're seeing this morning. So this is how the class is structured. It's divided in parts plus the lab. So the first part, what we want to do is start from probably the simplest learning algorithm you can think of, and use that as an excuse to introduce the idea of the bias-variance trade-off, which, to me, is probably the-- or at least one of the-- most fundamental concepts in statistics and machine learning, which is this idea that you're going to see in a few minutes in more detail. But it's essentially the idea that you never have enough data. OK. And the game here is not about describing the data that you have today, as much as using the data you have today as a basis of knowledge to describe data you're going to get tomorrow. So there is this inherent trade-off between what you have at your disposal and what you would like to predict. And then, essentially it turns out that you have to somewhat decide how much you want to trust the data, and how much you want to somewhat throw away, or regularize, as they say, smooth out the information in your data, because you think that it's actually an accident. It's just because you saw data with aspects today that are not really reflective of the phenomenon that produced them. But it's just because I saw 10 points rather than 100. The basic idea here is essentially the law of large numbers. When you toss a coin, you might find out that if you toss it just 10 times, it looks like it's not a fair coin, but if you go for 100, or 1,000, you start to see that it converges to 50-50. OK. So that's kind of what's going on here. So the idea is that you want to use some kind of induction principle that tells you how much you can trust the data. Moving on from this basic class of algorithms, we're going to consider so-called regularization techniques. I use regularization in a very broad sense. And here we're going to concentrate on least squares essentially because A, it's simple, and it just reduces to linear algebra. And so you don't have to know anything about convex optimization or any other kind of fancy optimization techniques. And B, because it's relatively simple to move from linear models to non-parametric non-linear models using kernels. OK. And kernels are a big field with a lot of math, but you're just going to look more at the recipe to move from simple models to complicated models. So finally, the last part, we're going to move a bit away from pure prediction. So basically these first two parts are about prediction, or what is called supervised learning. And here we're going to move a bit away from prediction and we're going to ask questions more related to, you have data, and you want to know, what are the important factors in your data? So the one key word here is interpretability.
You want to have some form of interpretability of the data at hand. You would like to know, not only how you can make good predictions, but what are the important factors. So you not only want to do good prediction, but you want to know how you make good predictions-- what is the important information to actually get good predictions. And so, in this last part we're going to take a peek into this. And as I said, the afternoon is basically going to be a practical session. It's all MATLAB. I think there is some quick-- if you have never seen MATLAB before, you can play around with it just a little bit. But it's very easy, and then you've got a few different proposals, I think, of things you can do. And you can pick, depending on what you already know and what you can try-- you can start from that and be more or less fancy. OK. So it goes without saying, stop me. I mean, the more we interact, the better it is. So the first part, as I said, the idea is to use so-called local methods as an excuse to understand it by experience. OK. So we're going to introduce the simplest algorithm you can think of, and we're going to use it to understand a much deeper concept. So first of all, let's just put down our setup. The idea is that we are-- so how many of you had a machine learning class before? All right. So, you won't be too bored. The idea is we want to do supervised learning. So in supervised learning there is an input and an output. And these inputs and outputs are somewhat related. And I'll be more precise in a minute. But the idea is that you want to learn this input-output relationship. And all you have at your disposal are sets of inputs and outputs. OK. So x here is an input, and y is the output. f is a functional relation between the input and the output. All you have in this puzzle are these couples, OK. So I give an input, and then what's the corresponding output? I give another input and I know what's the corresponding output. But I don't give you all of them. You just have n, OK. n is the number of points, and you call this a training set, because it will be the basis of knowledge on which you can try to train a machine to estimate this functional relationship. OK. And the key point here is that, on the one hand, you want to describe these data. So you want to get a functional relationship that works well on them, so that f(x1) is close to y1, f(x2) is close to y2, and so on. But more importantly, you want an f that, given a new point that was not here, will give you an output which is a good estimate of the true output corresponding to that input. OK. This is the most important thing of the setup. OK. This is the idea of so-called generalization-- if you want, prediction. You want to really do inference. You don't want to do descriptive statistics. You really want to do inferential statistics. So this is just a very, very simple example, but just to start to have something in mind. Suppose that you have-- well, it's just like a toy version of the face recognition system we have on our phones. You know that when you take a picture, you start-- AUDIENCE: Sorry. LORENZO ROSASCO: They really weren't talking. You have something like this. You have a little square appearing around a face sometimes. It means that basically the system is actually going inside the image and recognizing faces. OK. So the idea is a bit more complicated than this. But a toy version of this algorithm is, you have an image like this. OK. The image you think of as a matrix of numbers.
Now this is color, but imagine it's black and white, OK. Then it would just contain a number, which is the pixel value, the light intensity of that pixel. And you just have this array. And then, if you want, you can brutalize it and just unroll the matrix into a long vector. OK. That gives one vector. So p here would be what? The number of? Just the number of pixels. OK. So I take this image and I unroll it. I take another image and I unroll it. And I take n images. And you see, some images here do contain faces. Some of the images do not contain faces. OK. And I here use color to code them. And now what I have is that images are my inputs, OK, are the x's. So here-- full disclosure, I never use the little arrow above letters to denote vectors. So hopefully it will be clear from the context. When it's really useful I use upper or lower indices. Anyway. So this is the data matrix. Rows are inputs, and columns are so-called features or variables-- the entries of each vector. OK. And I have n rows and p columns. Associated to this, I have my output vector. And what is the output vector? Well in this case, it's just a simple binary vector. And the idea here is, if there is a face, I put 1. If there is not a face, I put minus 1. OK. So this is the way I turn, like, an abstract question, recognize faces in images, into some data structure that in a minute we're going to elaborate to try to actually answer the question, whether there is a face in an image or not. OK. So this first step, it's kind of obvious in this case, but it's actually a tricky step. OK. It's the part that I'm not going to give you any hint about. It's kind of an art. You have data and you have-- at the very beginning you have to turn them into some kind of manageable data structure. OK. Then you can elaborate in multiple ways. But the very first step is you deciding-- for example, here we decided to unroll all these numbers into vectors. Does this sound like a good idea or a bad idea? One thing that you're doing is that the pixel here and the pixel here are probably related. And in this case there is some structure in the image. And so when you take this pixel 136, and you unroll it, it comes here. So they're not close. OK. Now here it turns out that if you think about it-- you'll see in a minute. For those of you who remember, if you just took Euclidean distance, you take products of numbers and you sum them up. That's invariant to the position of the individual pixels. So that's OK. OK. But yet again, there is this intuition that, well, maybe here I'm losing too much geometric information about the context of the image. And indeed, this kind of works in practice, but if you want to get better results you have to do the fancy stuff that Andrei was talking about today-- looking locally, and trying to keep more geometric information. OK. So I'm not going to talk about that kind of stuff. To this day, this is a lot of engineering, and there are some good ways to learn it. But we're going to try to just stick to simple representations. OK. So how you build representations is not going to be part of what I'm going to talk about. So imagine that either you stick to this super-simple representation, or some friends of yours come in and put a box here in the middle, where you put this array of numbers and you extract another vector, much fancier than this, that contains some better representation of an image. OK. But then at the end of the day, my job starts when you give me a vector representation that I can trust.
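(Concretely, the unrolling step is one line of code; the shapes here are made up for illustration:)

```python
import numpy as np

n, h, w = 100, 200, 200                      # 100 images of 200x200 pixels
images = np.random.rand(n, h, w)             # stand-in for real image data
labels = np.random.choice([-1, 1], size=n)   # 1 if face, -1 if no face

X = images.reshape(n, -1)   # each row: one image unrolled into p = h*w numbers
print(X.shape)              # (100, 40000): n rows, p columns (features)
```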
And I can basically say that if two vectors seem similar, they should have the same label. And that's the basic idea. OK. All right. So a little game here is, OK, imagine that these are just the two-pixel version of the images I showed you before. You have some boxes, some circles. And then I give you this one triangle. It's very original. Andrei showed you this yesterday. And the question is, what's the color of that? OK. Unless you haven't slept a minute, you're going to say it's orange. But the question is, why do you think it's orange? AUDIENCE: [INAUDIBLE] LORENZO ROSASCO: Say it again? AUDIENCE: It's surrounded by oranges. LORENZO ROSASCO: It's surrounded by oranges. OK. And she said, it's close to oranges. So it turns out that this is actually the simplest algorithm you can think of. OK. You check who you have close to you, and if it's orange, you say orange. And if it's blue, you say blue. OK. But we already made an assumption here, which is hidden in the question-- the assumption about nearby things. So we are basically saying that our vectorial representation is such that, if two things are close-- so I do have a distance, and if two things are close, then they might have the same semantic content. OK. Which might be true or not. For example, if you take this thing I showed you here, we cannot just draw it, right? We cannot just take 200 times 200 vectors and just look at them and say, yeah, you know, a visual inspection. You have to believe that this distance will be fine. And so the discussion that we just had about what is a good representation is going to kick in. OK. But the assumption you make-- in this case visually it's very easy, it's low dimension-- is that nearby things have similar labels. One thing that I forgot to tell you in the previous slides, but it's key, is exactly this observation that in machine learning we typically move away from situations like this one, where you can do visual inspection and you have low dimensionality, to kind of a situation like the one I just showed you a minute before, where you have images. And if you have to think of each of these circles as an image, you won't be able to draw it, because it's typically going to be a several-hundred- or even tens-of-thousands-dimensional vector. OK. So the game is kind of different. Can we still do this kind of stuff? Can we just say that close things should have the same semantic content? That's another question we're going to try to answer. OK. But I just want to do a bit of inception. This is a big deal, OK, going from low dimension to very high dimensions. All right. But let's stick for a minute to the idea that nearby things should have the same label, and just write down the algorithm-- it's one line. It's the kind of case where it's harder to write it down than to code it up or just explain what it is. It's super simple. What you do is, you have data points, Xi. So Xi is the training set, the input data in the training set. X-bar is what I call X-new before. It's a new point. What you do is that you search. This just says, look for the index of the closest point. That's what you did before. OK. So here, I-prime is the index of the point Xi closest to X-bar. Once you find it, go in your dataset and find the label of that point. And then assign that label to the new point. Does that make sense? Everybody's happy? Not super-complicated. Fair enough. How does it work? So let me see if I can do this. This is extremely fancy code. Let's see. All right. So what did I do? Let me do it a bit smaller.
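(For reference, the whole rule really is about one line-- a plain numpy sketch, not the MATLAB demo used in the lecture:)

```python
import numpy as np

def nearest_neighbor(X_train, y_train, x_new):
    """Return the label of the training point closest to x_new."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distance to every Xi
    i_prime = np.argmin(dists)                       # index of the closest
    return y_train[i_prime]

# Tiny sanity check in two dimensions.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([-1, 1, 1])
print(nearest_neighbor(X, y, np.array([0.2, 0.1])))  # -1
```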
So this is just simple two-dimensional datasets. I take 40 points. The dataset looks like this. The dataset is the one on the left. OK. And what I do, I take 40 points. And to make it a bit more complex, I flip some of the labels. OK. So you basically say that the two datasets-- this is called the two moons dataset, or something like this. And what I did is that some of the labels in this sea, I changed color. I changed the label. OK. So I made the problem a bit harder. And here is what fortunately you don't have in practice. OK. Here we're cheating. We're doing just the simulations. We're looking at the future. We assume that because we can generate this data, we can look at the future and check how we're going to do in future data. So you can think of this as a future data that typically you don't have. So here you're a normal human being. Here you're playing god and looking at the future. OK. Because we just want to do a little simulation. So based on that, we can just go here and put 1, train, and then test and plot. So what you see here is the so-called decision boundary. OK. What I did is exactly that one line of code you saw before. OK. And what I did is, in this case I can draw it, because it's low dimensional. And basically what I do is that I just put in the regions where I think I should put orange, and the region where it think I should put blue. OK. And here you can kind of see what's going on. These are actually very good on the data, right? How many mistakes do you make on the new dataset? Sorry, on the training set? Zero. It's perfect. OK. Is that a good idea? Well, when you look at it here, it doesn't look that good. OK. There is this whole region of points, for example, that are going to be predicted to be orange, but they're actually blue. Of course if you want to have zero errors in the training set, there's nothing else you can do, right? Because you see, you have this orange point here. You have these two orange points here. And you want to go and follow them. So there's nothing you can do. So this is the first observation. The second observation is, the curve, if you look close enough, it's piecewise linear. It's like a sequence of linear pieces stuck together. If we just try to do a little game and generate some new data-- OK, so imagine again, I'm playing god now. I generate the new dataset that it should look like. So take another peek at this. OK. Oop. So now I generate them. I plot them. I train. And now let's test. OK. If you remember the decision curves you've seen before, what do you notice here? AUDIENCE: they're different LORENZO ROSASCO: They're very different. OK. For example, the one before, if you remember, we noticed it was going all the way down here to follow those couple of points. But here you don't have those couple of points. OK. So now, is that a good thing or a bad thing? Well the point here is that because you have so few points, the moment you start to just feed the data, this will happen. OK. You have something that changes all the time. It's very unstable. That's a key word, OK. You have something that you change the data just a little bit, and it changes completely. That sounds like a bad idea. OK. If I want to make a prediction, if I keep on getting slightly different data and I change my mind completely, that's probably not a good way to make a prediction about anything. OK. And this is happening all the time here. And it's exactly because our algorithm is in some sense is greedy. 
You just try to get perfect performance on the training set without worrying much about the future. Let's do this just once more. OK. And we keep on going. It's going to change all the time, all the time. Of course-- I don't know how much I can push this because it's not super-duper fast. But let's try. Let's say 18 by 30. So what I did now is just that I augmented the number of points in my training set. It was 20 or 30, I don't remember. Now it make it 100. So now you should see-- OK. So this is one solution. We want to play the same game. We just want to generate other datasets of the same. So maybe now it might be that I took them all. I don't remember how many there are. No, I didn't take them all. So, what do you see now? We are doing exactly the same thing. OK. And is this something that you can absolutely not to do in practice, because you cannot just generate datasets. But here what you see is that I just augmented the number of training set points. And what you see is now the solution does change, but not as much. OK. And you can kind of start to see that there is something going on a bit like this here. OK. So this one actually looks pretty bad. Let's try to do it once more. OK. So again, it does change a lot, but not as much as before. And you roughly see that this guy says that, here it should be orange and here should be blue. OK. So that's kind of what you expect. The more points you get, the better your solution would get. And if I put hear all the possible points, what you will start to see is that the closest point to any point here will be a blue point. OK. So it will be perfect. So if I ask you if this is a good algorithm or not, what would you say? AUDIENCE: It's overfitting the data. LORENZO ROSASCO: It's kind of a overfitting the data. But it is not always overfitting the data. If the data are good, it's a good idea to fit them. OK. But in some sense, this algorithm doesn't have a way to prevent itself to fall in love with the data when there are very few. And if you have very few data points, you start to just wiggle around, become extremely unstable, change your mind all the time. If the data are enough, it stabilizes, and in some senses, this setting, we're fitting the data, or as she's saying, overfitting the data. It's actually not a bad thing. OK. So this is what's going on here. AUDIENCE: What do you mean by overfitting? LORENZO ROSASCO: Fitting a bit too much. So if you look here. So here, if you look what you're doing here, you're always fitting the data OK. But here you're doing nothing else. And so if you have few data points, fitting the data is fine. Sorry, if you have many data points, fitting the data is just fine. If you have few data points, by fitting them you, in some sense, overfit in the sense that when you look at new data points, you have done a bit too much. OK. What you saw before, that you get something that is very good, because it perfectly fits that, but it's overfitting with respect to the future. Whereas here, the fitting on the left-hand side kind of reflects, not too badly the fitting on the right-hand side. OK. So the idea of overfitting and stability that came out in this discussion are key. OK. If you want everything we're going to do in the next three hours, understand how you can prevent overfitting and build a good way to stabilize your algorithms. OK. So let's go back here. This is going to be quick, because if I ask you, what is this? What would you say? 
AUDIENCE: [INAUDIBLE] [LAUGHING] LORENZO ROSASCO: So the idea is that, when you have a situation like this, you're still pretty much able to say what's the right answer. And what you're going to do is that you're going to move away from just saying, what's the closest point, and you just look at a few more points. You just don't look at one. OK. You look at, how many? boh? "boh" is very useful Italian word. It means, I don't know. So these algorithm-- it's called the k nearest neighbor algorithm, it's probably the second simplest algorithm you can think of. It's kind of the same as before. The notation here is a bit boring, but it's basically saying, take the points. Give them new points. Check the distance with everybody. Sort it and take the first k. OK. If it's a classification problem, it's probably a good idea to take an odd number for k, so that you can then just have voting. And basically everybody votes. Each vote counts one. And somebody says blue, somebody says orange, and you make a decision. OK. Fair enough. Well how does this work? You can kind of imagine. So what we have to do-- so for example here we have this guy. OK. Now let's just put k-- well, let's make this a bit smaller. So we do 40. Generate, plot, train. [INAUDIBLE] test. Plot. OK. Well we got a bit lucky, OK. This is actually a good dataset, because in some sense there are no, what you might call outliers. There are no orange points that really go and sit in the blue. So I just want to show you a bit about the dramatic effect of this. So I'm going to just try to redo this one so that we get the more-- yeah, this should do. OK. So this is nearest neighbor. This is the solution you get. It's not too horrible. But, for example, you see that it starts following this guy. OK. Now, what you can do is that you can just go in and say, four. Well, four's a bad idea. Five. You'd retrain them the same. And all of a sudden it just ignores this guy. Because the moment that you put more in, well, you just realize that he's surrounded by blue guys, so it's probably just, his vote just counts one against four. OK. And you can keep on going. And the idea here is that the more you make this big, the more your solution is going to be, what? Well you say, it's going to be good, but it's actually not true. Because if you start to put k too big, at some point all you're doing is counting how many points you have in class one, counting how many points you have in class two, and always say the same thing. OK. So I'm going to put here, 20. What you start to see is that you start to obtain a decision boundary, which is simpler, and simpler and simpler. OK. It looks kind of linear here. What you will see is that, suppose that now I regenerate the data. And you remember how much it changed before when I was using nearest neighbor with just k equal to 1. So of course here, you know, it's probabilistic. OK. So of course I'm going to get a dataset like the one I just showed you minutes ago, and I had it as fast as possible. Because if I pick 10, one is going to look like that and nine are going to look like this. OK. And when they look like this, you see, they kind of start to have this kind of line, like a decision boundary with some twists. But it's very simple. OK. And if at some point, if I put k big enough-- that is, the number of all points, it won't change any more. OK. It will just be essentially dividing the sets in two equal parts. So does that makes sense? So would it make sense to vote to make different votes? 
Essentially, the idea is, if the point is closest, his vote should count more than if a point is more far away? Yes, absolutely. Let's say here we're making the simplest thing in the world, the second simplest thing in the world, the third simplest thing in the world. It is doing that. OK. And you can see that you can go pretty far with this. I mean, it's simple, but these are actually algorithms that are used sometimes. And what you do is that, if you just look at this-- again, these I don't want to explain too much. If you've seen it before, it's simple. Otherwise it doesn't really matter. But the basic idea here is that each vote is going to be between 0-- so, you see here I put the distance between the new point and all the other points on top of an exponential. So the number I get is not 1, but it is between 0 and 1. If the two points are close, and the limits supposedly are the same, it becomes a 0, and it counts exactly one. If they're very far away, these would be, say, infinity and then we'd be close to 0. So the closest you are, the more you count. If you want, you can read it like this. You're sitting on a new point, and you put a zooming window. Yeah, like a zooming window of a certain size. And you basically check that everything which is inside this window will be closed. And the more you go farther away-- so the window is like this. And you deform the space so that basically what you say is, things that are far away, they're going to count less. And if I move sigma here, I'm somewhat making my visual field, if you want, larger or smaller, around this one new point. It's just a physical interpretation of what this is doing. There are 15 other ways of looking at what the Gaussian is doing. Voting, changing the weight of the vote is another one. OK. Why the Gaussian here? Well, because. Just because. You can use many, many others. You can use, for example, a hat window. And this is part of your prior knowledge, how much you want to weight. If you are in this kind of low dimensional situation, you might have good ways to just look inside the data and decide almost like doing by a visual inspection. Otherwise you have to trust some more broad principles. And it's again back to the problem of learning the representation and deciding how to measure distance, which are two phases of the same story. OK. And the other thing you see is that, if you start to do these games, you might actually add more parameters. OK. Because we start from nearest neighbor, which is completely parameter-free, but it was very unstable. We added k. We allow ourselves to go from simple to complex, from stability to overfitting. But we introduced a new parameter. And so that's not an algorithm any more. It's a half algorithm. A true algorithm is a parameter-free algorithm where I tell you how you choose everything. OK. So if they just give you something, say, yeah, there's k, well, how do you choose it? OK. It's not something you can use. And here I'm adding sigma. And again, you have to decide how you use it. OK. And so that's what we want to ask in a minute. So before doing that, just a side remark is-- we've been looking at vector data. OK. And we were basically measuring distance through just the Euclidean norm, OK, just the usual one, or this version like the Gaussian kernel that somewhat amplifies distances. What if you have strings, for example, or graphs? OK. Your data turns out to be strings and you want to compare them? Say even if they're binary strings, there's no linear structure. 
You cannot just sum them up. The Euclidean distance doesn't really make a lot of sense. But what you can do is that, as long as you can define a distance-- and say this one would be the simplest one, just the Hamming distance. You just check entries, and if they're different, you count one. If they're the same, you count zero. OK. The moment you can define a distance on your data, then you can use this kind of technique. So this technique is pretty flexible in that sense, that whenever you can give-- you don't need a vectorial representation, you just need a way to measure, say, similarity or distances between things, and then you can use this method. OK. So here I just mentioned this, and that's what most of this class is going to be about-- vector data. But this is one point where, the moment you have a K-- you can think of this K sometimes as a similarity. OK. Similarity is a kind of concept that is dual to distances. So if the similarity is big, it's good. If the distance is small, it's good. OK. And so here, if you have a way to build the K or a distance, then you're good to go. And we're not going to really talk about it, but there's a whole industry about how you build this kind of stuff. Say we have strings. Maybe I want to say that I should not only look at the entry of a string, but also the nearby entry when I make the score for that specific entry. So maybe I shifted a value of the string a little bit. It's not right here. It's in the next position over, so that should still count a bit. So I want to do a soft version of this. OK. Or maybe I have graphs, and I want to compare graphs. And I want to say that if two graphs are close, then I want them to have the same label. OK. How do you do that? The next big question is-- we introduced three parameters. They look really nice, because they kind of allowed us to get more flexible solutions to the problem by choosing, for example, k or the sigma in the Gaussian. We can go from overfitting to stability. But then of course we have to choose the parameter, and we have to find good ways to choose them. And so there are a bunch of questions. So the first one is, well, is there an optimal value at all? OK. Does it exist? But if it does exist, I can go try to estimate it in some way. If it doesn't, well it does not even make sense. I just throw a random number. I just say, k equals 4. Why? Just because. OK. So what do you think? It exists or not? What does it depend on? Because that's the next question. What does it depend on? Can we compute it? OK. So let's try to guess one minute before we go and check how we do this. OK. OK. I have to choose it. How do I choose it? What does it depend on? AUDIENCE: Size of this. LORENZO ROSASCO: One thing is the size of the dataset. Because what we saw is that a small k seems a good idea when you have a lot of data, but it seems like a bad idea when you have few. OK. So it should depend. It should be something that scales with n, the number of points, and probably also the training set itself. But we want something that works for all datasets, say, in expectation. So cardinality of the training set is going to be a main factor. What else? AUDIENCE: The smoothness of the boundary. LORENZO ROSASCO: The what? AUDIENCE: The smoothness. LORENZO ROSASCO: The smoothness of the boundary. Yeah. So what he's saying is, if my problem looks like this, or if my problem looks like this, it looks like k should be different. In this case I can take an arbitrarily high k-- sorry, small k, I guess, or 1.
It doesn't matter, because whatever you do, you pretty much get the good thing. But if you start doing something like this, then you want k small enough, because otherwise you just start to blur everything. And this is exactly what he's saying-- whether your problem is complicated or easy. OK. And at the same time, this is related to the fact of how much noise you might have in the data, OK, how much flipping you might have in your data. If the problem is hard, then you expect to need a different k. OK. So it depends on the cardinality of the data, and how complicated the problem is. How complicated is the boundary? How much noise do I have? OK. So it turns out that one thing you can ask is, can we prove it? OK. Can we prove a theorem that says that there is an optimal k, and it really does depend on these quantities? And it turns out that you can. Of course, as always, to make a theory or to make assumptions, you have to work within a model. And the model we want to work on is the following. You're basically saying, this is the k nearest neighbor solution. So big k here is the number of neighbors, and this has a hat because it depends on the data. And what I say here is that I'm just going to look at squared loss error, just because it's easy. And I'm going to look at the regression problem, not just classification. And what you do here is that you take expectation over all possible input-output pairs. So basically you say, when I try to do math, I want to see what's ideal. And ideally, I want a solution that does well on future points. OK. So how do I do that? I take the average error over all possible points in the future, x and y. So this is the meaning of this first expectation. Make sense? Yes? No? So if I fix y and x, this is the error on a specific couple, input and output. I give you the input. I do f_k(x) and then I check if it's close or not to y. But what I want to do if I want to be theoretical is to say, OK, what I would really like to be small is this error over all possible points. So I take the expectation, not the one on the training set, the one in the future. And I take expectation so that if points are more likely to be sampled, they will count more than points that are less likely to be sampled. OK. AUDIENCE: What was Es? LORENZO ROSASCO: We haven't got to that one yet. OK. So Exy is what I just said. What is Es? It's the expectation over the training set. Why do we need that? Well because if we don't put that expectation, I'm basically telling you what's the good k for this one training set here. Then I give you another training set and I get another one, which in some sense is good, but it's also bad, because we would like to have a take-home message that would hold for all training sets. And this is the simplest. You say, for the average training set, this is how I should choose k. That's what we want to do. OK. So the first expectation is to measure error with respect to the future. The second expectation is to say, I want to deal with the fact that I have several potential training sets appearing. OK. So in the next couple of slides, this red dot means that there are computations. OK. And so I want to do them quickly. And the important thing of this bit is, it's an exercise. OK. So this is an exercise of stats zero. OK. So we don't want to spend time doing that. The important thing is going to be the conceptual parts. I'm going to go a bit quickly through it.
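(Written out in symbols-- this is a reconstruction from the spoken description, not the actual slide:

\[
\mathcal{E}(k) \;=\; \mathbb{E}_{S}\,\mathbb{E}_{x,y}\!\left[\big(y - \hat{f}_k(x)\big)^2\right],
\qquad
\hat{f}_k(x) \;=\; \frac{1}{k}\sum_{i \in N_k(x)} y_i,
\]

where N_k(x) denotes the indices of the k training inputs closest to x, the inner expectation runs over a future input-output pair, and the outer one over the training set S.)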
So you start from this, and you would like to understand if there exists-- so this is the quantity that you would like to make small, ideally. You will never have access to this, but ideally, in the optimal scenario, you want k to make this small. OK. Now the problem is that you want to essentially mathematically study this minimization problem, but it's not easy, because, how do you do this? OK. The dependence of this function on k is complicated. It's that equation we had before, right? So you can't just take the derivative and set it equal to zero. Let's keep on going. So where we are at is: this is the quantity I would like to make small. I would like to choose k so that I can make this small. I want to study this from a mathematical point of view. But I cannot just use what you're doing in calculus, which is taking a derivative and setting it equal to zero, because the dependence of this on k, which is my variable, is complicated. OK. So we go a bit of a roundabout way, which turns out to be pretty universal. And this is what we are going to do. First of all, we assume a model for our data. And this is just for the sake of simplicity. OK. I can use a much more general model. But this is the model. I'm going to say that my y's are just some fixed function f-star plus some noise. OK. And the noise is zero mean and variance sigma squared for all entries. OK. This is the simplest model. It's a Gaussian regression model. So one thing I'm doing, and this is like a trick and you can really forget it, but it just makes life much easier, is that I take the expectation over x, y and condition here. OK. The reason why you do this is just to make the math a bit easier. Because basically now, if you put this expectation out, and you look just at these quantities, you're looking at everything for fixed x. And these just become a real number, OK, not a function anymore. So you can use normal calculus. You have a real-valued function and you can just use the usual stuff. OK. Again, I'm going to go a bit quickly over this because it doesn't really matter. So this is ingredient one. This is observation two. Observation three is that you need to introduce an object between the solution you get in practice and this ideal function. What is this? It's what is called the expectation of my algorithm. What you do is that-- in my algorithm what I do here is that I put Y-i, OK, just the labels of my training set. And the labels are noisy. But this is an ideal object where you put the true function itself, and you just average the values of the true function. Why do I use this? Because I want to get something which is in between this f-star and this f-hat. So if you put k big enough-- so if you have enough points, this is going to be-- sorry, if you take k small enough-- so this is closer to f-star than my f-hat, OK, because it sees no noisy data. And what I want to do-- oops. What I want to do is that I want to plug it in the middle and split this error in two. And this is what I do. OK. If you do this, you can check that you have a square here. You get two terms. One simplifies, because of this assumption on the noise, and you get these two terms. OK. And the important thing is these two terms are-- one is the comparison between my algorithm and its expectation. So that's exactly what we called the variance. OK. And one is the comparison between the value of the true function here, and the value of this other function. Sorry, this should be-- oh yeah.
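To keep the objects straight, here is my reconstruction of the three ingredients just introduced (again, the notation is an assumption of mine; N_k(x) denotes the indices of the k training points nearest to x):

```latex
% Model: noisy observations of a fixed target function f_*
y_i = f_*(x_i) + \varepsilon_i, \qquad \mathbb{E}[\varepsilon_i] = 0, \quad \operatorname{Var}(\varepsilon_i) = \sigma^2
% The k-NN estimator averages the noisy labels; its idealized, noiseless
% twin ("the expectation of the algorithm") averages the true function values:
\hat{f}_k(x) = \frac{1}{k}\sum_{i \in N_k(x)} y_i, \qquad
\bar{f}_k(x) = \frac{1}{k}\sum_{i \in N_k(x)} f_*(x_i)
```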
This is the expectation, which is my ideal version of my algorithm, the one that has access to the noiseless labels. OK. It's what you call the bias. It's basically because, instead of using the exact value of the function, you blur it a bit by averaging out. OK. You see here, instead of using the value of the function, you average out a few nearby values. So you're making it a bit dirtier. The question now is, how do these two quantities depend on k? How does this quantity depend on k and how does that quantity depend on k. OK. And then by putting these together, we'll see that we have a certain behavior of this, and a certain behavior of this. And then balancing these out, we'll get what the optimal value looks like. And this is going to be all useless practically-- so these are going to be interesting from a conceptual perspective. We're going to learn something, but we'll still have to do something practical, because nothing of this you can measure in practice. OK. So the next question would be, now that we know that it exists and it depends on this stuff, how can we actually approximate it in practice? And cross-validation is going to pop out. OK. But this is the theory that would help in proving a theorem that shows that cross-validation is a good idea, in a precise sense. The take-home message is, by making this model and using this as an intermediate object, you split the error in two, and you start to be able to study it. And what you get is basically the following. This term, by basically using-- so we assume that the data-- I didn't say that, but that's important. We assume that the data are independent of each other. OK. And by using that, you get these results right away, essentially using the fact that the variance of a sum of independent variables is the sum of the variances. You get these results in one line. OK. And basically what this shows is that, if k gets big-- so variance is another way of talking about stability. OK. So if you have a big variance, things will vary a lot. It will be unstable. So what you see here is exactly what we observed in the plot before. If k was big, things were not changing as much. If k was small, things were changing a lot. OK. And this is the one equation that shows you that. OK. And if you just looked at that, it would just tell you, bigger is better. Big with respect to what? To the noise. OK. If there is a lot of noise, I should make it bigger. If there's less noise, I can make it smaller. But the point that we saw before is that the problem of making k large was that we were forgetting about the problem. We were just getting something that was very stable but could be potentially very bad, if my function was not that simple. OK. This is a bit harder to study mathematically. OK. This is a calculation that I show you because you can do it yourself in like 20 minutes, or less. This one takes a bit more. But you can get a hunch of how it looks. And the basic idea is what we already said. If k is small, and the points are close enough, instead of f-star of x, we are thinking of f-star of the Xi, and the Xi are close by. OK. Now if we start to make k bigger, we start to blur that prediction by looking at many nearby points. But here there is no noise. OK. So that sounds like a bad idea. So we expect the error in that case to be either increasing, or at least flat with respect to k. So when we take k larger, we're blurring this prediction, and potentially making it far away from the true one. OK. And you can make this statement precise.
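Summing up the two terms just discussed, the split is the familiar bias-variance decomposition. This is a sketch of my own, conditioning on x and on the training inputs so that the variance computation is the promised one-liner:

```latex
\mathbb{E}\!\left[\big(\hat{f}_k(x) - f_*(x)\big)^2 \,\middle|\, x\right]
  = \underbrace{\frac{\sigma^2}{k}}_{\text{variance, shrinks as } k \text{ grows}}
  + \underbrace{\big(\bar{f}_k(x) - f_*(x)\big)^2}_{\text{bias}^2\text{, grows as } k \text{ grows}}
```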
You can prove it. And if you prove it, what you get is basically that you have-- what happens? You have a linear dependence. So the error here is linearly increasing or polynomially increasing-- in fact I don't remember-- with respect to k. OK. So the reason why I'm showing you this, skipping all these details, is just to give you a feeling of the kind of computation that answers the question of whether there is an optimal value and what it depends on. And then at this point, once you get this, you start to see this kind of plot. And typically here I put them the wrong way. But here you basically say, I have this one function I wanted to study, which is the sum of two functions. I have this, and I have this. OK. And now to study the minimum, I'm basically going to sum them up and see what's the optimal value that optimizes the sum. And the k that optimizes this is exactly the optimal k. And you see that the optimal k will behave as we expected. OK. So here, one ingredient is missing. And it's just missing because I didn't put it in, which is the number of points. OK. It's just because I didn't renormalize things. OK. It should be a 1 over n here. It's just that I didn't renormalize. OK. But you noticed it, and it's good, because it's true. There should be a 1 over n there. But the rest is what we expected. OK. In some sense what we expect is that if my problem is complicated, I need a smaller k. If there is a lot of noise, I need a bigger k. And depending on the number of points, which would be in the numerator here, I can make k bigger or smaller. OK. This plot is fundamental because it shows some property which is inherent in the problem. And the theorem somewhat grounds the intuition I've been repeating over and over, which is this intuition that you cannot trust the data too much. And there is an optimal amount of trust you can have in your data, based on certain assumptions. OK. And in our case, the assumptions were this kind of model. So the little calculation I showed you quickly grounds this intuition in a mathematical argument. OK. All right. So we spent quite a bit of time on this. In some sense, from a conceptual point of view, this is a critical idea. OK. Because it's behind pretty much everything. This idea of how much you can or cannot trust the data. Of course here, as we said, this has been informative, hopefully. But you cannot really choose this k, because you would need to know the noise, but especially to know how to estimate this in order to minimize this quantity. So in practice what you can show is, you can use what is called cross-validation. And in effect, cross-validation is one of a few techniques you can use. And the idea is that you don't have access [AUDIO OUT] but you can show that if you take a bunch of data points, you split them in two, you use half for the training as you've always done, and you use the other half as a proxy for this future data. Then by minimizing over k-- taking the k that minimizes the error on this so-called holdout set-- you can prove it's as good as if you could have access to this. OK. And it's actually very easy to prove. You can show that if you just split in two, and you minimize the error on the second half-- you do what is called holdout cross-validation-- it's as good as if you'd had access to this. OK. So it's optimal in a way. Now, the problem with this is that we are only looking at the error in expectation.
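A minimal sketch of holdout selection of k, in Python. This is my own toy example, not code from the lecture; the noisy-sine data, the seed, and all names are invented for illustration:

```python
# Choose k for 1-D k-NN regression by minimizing error on a holdout set.
import math
import random

def knn_regress(x, data, k):
    # Average the labels of the k points nearest to x.
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

random.seed(0)
# Toy data: y = sin(3x) plus Gaussian noise, x uniform on [0, 3].
points = [(x, math.sin(3 * x) + random.gauss(0, 0.3))
          for x in (random.uniform(0, 3) for _ in range(200))]
train, holdout = points[:100], points[100:]

def holdout_error(k):
    # Mean squared error on the holdout set, used as a proxy for the future.
    return sum((knn_regress(x, train, k) - y) ** 2
               for x, y in holdout) / len(holdout)

best_k = min(range(1, 51), key=holdout_error)
print("selected k:", best_k)
```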
And what you can check is that if you look at higher-order statistics, say the variance of your estimators and so on and so forth, what you might get is that by splitting in two, [AUDIO OUT] big is fine. In practice the difference is small, but you might find that the way you split matters. You might have bad luck and just split in a certain way. And so there is a whole zoology of ways of splitting. And the basic one is, say, split-- this is, for example, the simplest. OK. Split in a bunch of groups. OK. k-fold or v-fold cross-validation. Take one group out at a time. OK. And do the same trick. You know, you train here and calculate the error here for different k's. Then you do the same here, do the same here, do the same here. Sum the errors up, renormalizing, and then just choose the k that minimizes this new form of error. And if the dataset is small, small, small, then typically this set will become very small. And in the limit, it becomes one point, the leave-one-out error. OK. What you do is that you literally leave one out, train on the rest, get the error for all the values of k in this case. Put it back in, take another one out, and repeat the procedure. Now the question that I had 10, 15 minutes ago was, how do you choose v? OK. Shall I make this two? So I just do one split like this? Or shall I make it n, so I do leave one out? And as far as I know there is not a lot of theory that would support an answer to this question. And what I know is mostly what you can expect intuitively, which is, if you have a lot of data points-- what does it mean, a lot? I don't know. If you have two million, 10,000, I don't know. If you have a big dataset, typically splitting in two, or maybe doing just random splits, is stable enough. What does it mean? That you try, and you look at how much it moves. Whereas if you have, say-- you know, I don't know if it even exists anymore, but a few years ago there were microarray applications where you would have 20, 30 inputs, and you have 20,000 dimensions. And then in that case, you really don't do much splitting. If you have 20, for example, you try leave-one-out and it's the best you can do. And it's already very unstable and sucks. OK. So in this case, there is work to be done. I mean, as far as I know, that's the state of things. OK. So we introduced a class of very simple algorithms. They seem to be pretty reasonable. They seem to allow us, provided that we have a way to measure distances or similarity, to go from simple to complex. And we have some kind of theory that tells us what is the optimal value of a parameter, and a kind of practical procedure to actually choose it in practice. OK. Are we done? Is that all? Do we need to do anything else? What's missing here? One thing that is missing here is that most of the intuitions we developed so far are really related to low dimension. OK. And here, very quickly, if you just do a little exercise where you try to say, how big is a cube that covers 1% of the volume of a bigger cube of unit length? OK. So the big cube is volume 1. The length of its edge is just 1. And I ask you, how big is this, if it has to cover 1% of the volume? It's easy to check that this is just going to be a dth root, where d is the dimension of the cube. And this is the shape of the dth root. OK. So if you're in low dimension, basically, 1% is intuitively small within the big cube.
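To make the exercise concrete before continuing: the sub-cube's edge s satisfies s^d = 0.01, so s = 0.01^(1/d). A quick numeric check (the numbers below are mine, not from the slides):

```python
# Edge length of a sub-cube holding 1% of a unit cube's volume: s = 0.01**(1/d)
for d in (1, 2, 3, 10, 100, 1000):
    print(d, round(0.01 ** (1 / d), 3))
# d=1: 0.01, d=2: 0.1, d=3: 0.215, d=10: 0.631, d=100: 0.955, d=1000: 0.995
```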
But as soon as you go to higher dimensions, what you see is that the length of the edge of the little cube that has to cover 1% of the volume becomes very close to 1, almost immediately. It's this curve going up. OK. What does it mean? That if you say, our intuition is, well, 1%. It's a pretty small volume. If I just took the neighbors in 1%, they're pretty close, so they should have the same label. Well, in dimension 10, it's everything. OK. So our intuition-- now you can say that probably there is something wrong with my way of thinking of volume, sure. But the problem is that we have to rethink a bit how you think of dimensions and similarity in high dimension, because things that are obvious in low dimension start to be very complicated. OK. And the basic idea is that this neighbor technique just looks at what's happening in one region. But what you hope to do is that if your function actually has some kind of global properties-- so, say for example, a sine is the simplest example of something which is global, because the value here and the value here are very much related. And then it goes up and it's the same. And then it goes down. So if you know something like this, the idea is that you can borrow strength from points which are far away. In some sense the function has some similar properties. And so you want to go from a local estimation to some form of global estimation. OK. And instead of making a decision based only on the neighbors of the points, you might want to use points which are potentially far away. OK. And this seems to be like a good idea in high dimensions, where the neighboring points might not give enough information. And that's kind of what's called the curse of dimensionality. OK. So what I want to do next-- we can take a break here-- is discussing least squares and kernel least squares. OK. But what we're going to do is that we're going to take a linear model of our data, and then we are going to try to see how you can estimate and learn. And we're going to look at a bit of the computation and a bit of the statistical ideas underlying this model. And then we're going to play around in a very simple way to extend from a linear model to a non-linear model and actually make it non-parametric. I'll tell you what non-parametric means. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_32_Cognition_in_Infancy_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LIZ SPELKE: I want to talk about-- you asked about development of knowledge within infancy-- and I want to talk about one case where we've seen an interesting developmental change. One that I don't think we really fully understand. I'm sure we don't fully understand it. I'm not sure we understand it at all. But it seems to be there. And it has to do with effects of both, maybe, inertial properties of object motion and also effects of gravity on object motion. Let me just tell you the result first. These are studies that were done by In-Kyeong Kim a long time ago. But recently enough, that video was part of our toolkit, which wasn't always the case, and involved showing babies videotapes of events in which an object held by a hand was placed on an inclined plane, released, and started to move. And in one study, babies either saw a plane that was inclined downward, the hand released it, and it rolled downward with the natural acceleration, or the plane was inclined upward, the hand released it, and it rolled upward, decelerating. OK? We studied this in adults as well, and for adults, adults reported that it looked like this hand set the ball in motion. In fact, to control the motion better, there was a sling shot apparatus that actually set it in motion. But it underwent what looked like a natural deceleration from the point at which the hand released it and as it rose up. Half the babies were bored with this event, and half were bored with that. And then, at test, we switched the direction of the orientation of the ramp so that if babies had been seeing something move downward, now they were seeing events in which it moved upward. And in one of those two events, it underwent the natural acceleration that adults would expect, which is to say, a change in the motion that the infants had previously seen. So these guys saw this thing rolling downward and accelerating. And here in the natural event, they're seeing something that's being propelled upward, and it's decelerating. OK? In this event, they saw the same motion pattern that they saw before, so if they saw speeding up here, they saw speeding up here, contrary to effects of gravity on object motion. OK. So the question was, if the babies were sensitive to effects of gravity, they should look longer when the thing accelerates, you know, speeds up as it's moving upward than they do when it slows down as it's moving upward. On the other hand, if they're not sensitive to gravity, and they're just representing these events as speeding up or slowing down, they might show the opposite pattern. OK? And that's actually what we found in both conditions of that study. At five months of age, babies showed the pattern opposite to what adults would expect, as if they thought an object that speeds up in the beginning is going to continue speeding up irrespective of the orientation of that plane, with respect to gravity, OK? So failure at five months, success at seven months. At seven months, we get a reversal, and they flip in their looking times here. Well, we thought, maybe the 5-month-olds will succeed at a simpler task. 
Suppose we start out just showing them a flat surface and constant speed, so we're not engendering any expectation that this object is going to speed up over time or slow down over time. They're just seeing a constant motion, and then they get tested with events in which the object is placed either at the top or at the bottom of this ramp. It's released, and it speeds up in both cases. In that case, will they view this motion as more natural and look longer at that one? Or, you might ask, would they show the opposite pattern [AUDIO OUT] if these events are difficult for them, show the opposite pattern? What they actually showed was absolutely no expectation, the 5-month-olds, in that case. OK? They seemed utterly uncommitted as to whether it was more natural for the thing to be moving downward than to be moving upward after seeing it move on this flat surface. Then, at seven months, kids responded as adults would. Then we thought, maybe there's just a problem with video. Maybe kids don't understand that vertical in a video corresponds to vertical in the real world. So we just ran a control experiment to see if that was true, by familiarizing kids with downward motion in the real world and then presenting those same two test events. And now they showed the pattern that adults would show. That's not showing any knowledge here, it's just saying, yeah, they can distinguish downward from upward, and they can relate real events to video. But they don't seem to have any prior expectation that objects will move downward when they're rolling on an inclined plane. And all these things seem to develop between five and seven months, which I think provides us with an interesting window for asking what's developing here. One possibility is what's developing is something very local. But I don't think so, because there's another change also occurring between five and seven months in what looks like a very similar situation. These are experiments that were conducted in Renée Baillargeon's lab, where she did these very simple studies where you'd have a single object in an array, and then a hand would come in holding another object and place it on that array. And in that top study, she compares infants' reactions when the hand places the object on top of the box, releases it, and it stays there, versus places the object on the side of the box, releases it, and it stays there. OK? Now, if you're a 3-month-old infant, you are equally happy with those two events. You do not look differentially at them. But by five months, infants do. They look longer when the object seems to be stably supported by the side of the box, than when it's stably supported by the top of the box. But then she goes on and asks, do infants make distinctions among objects that are stably supported from below? As, by the way, they were in all those studies with the rolling on the ramps, right? Do they make the kind of distinctions that we would make based on the mass of the object, the physical force on the object, and so forth? And to get at that, she did a second study where she has a box, and she either places it in the middle of the top box or way off to the side where we would expect it to fall, OK, not to be stably supported. 7-month-olds get that right, 5-month-olds do not. So across these different situations, it looks like we have this developmental change in what infants know. I mean, I think these studies raise a lot of questions that they don't answer about what infants are learning over this time period. 
I'm excited about that because I think they also give us a method for addressing those questions. OK. And Tomer has been working on chasing that [AUDIO OUT], which would be great. OK. So I've been asked already by a bunch of you, what happens at the very beginning of visual experience? I do have some slides on that. I do want us to take a break, but let me go through them very quickly. There's been a little bit done with newborn infants, where they've been studied under conditions where we are confident that they're able to see what they're being presented with. It looks as if some of these-- there's evidence for these abilities in newborns, but most of the things I told you about have not been tested in newborn infants and would be really hard to test in them. But fortunately, humans aren't the only animals that have to be able to find the objects in a scene and track them through time. Other animals do that as well. And many animals seem to succeed at representing objects under the conditions where infants succeed. The big problem we have with animal models is that in many cases, the animals are way more competent than the infants. And they succeed in cases where infants would fail. OK? But at least where they succeed where infants succeed, we can ask, what would happen with controlled rearing? And can we at least get an existence proof of a mechanism of this sort-- that the abilities that we see in infants can be performed by some nervous system on first encounters with visible objects and the types of events that it's now being asked to reason about? So this has been done with controlled-rearing experiments. I'm going to talk about just a few, show you the results of just a few experiments that were done on chicks. Chicks are being used a lot lately because they grow up pretty quickly, they're easy to raise, and they show this innate behavior toward objects. That gives us a nice indicator, which is in some ways a little bit like preferential looking, though opposite in sign. They imprint if you show them an object repeatedly and the chick has been isolated, so there's no other chicks or hens around in its environment. The object is the only moving object they see. They will imprint to it, treat it like another-- behave as they would with their mom, were she there. And in particular, if you then take the chick and put them in a novel environment where they're a little bit stressed, they will tend to approach that object. So this has the same logic as looking longer at a novel thing. You have a selective approach to a familiar thing, and you can now run, on chicks, the kinds of experiments that have been run on human infants. So here is one imprinted chick to a-- these are experiments that have actually been done a while ago, in some cases they're much more recent, imprint a chick to a triangle, and then present them with a triangle whose center is missing, either because it's occluded or because there's a gap there. Who's mom in this case? The one with the occluded center, not the one with the gap at the center. OK? Consistent with findings with infants in one way, these chicks are perceiving occlusion, but they're doing better than the infants. This works when the objects move behind the occluder. If you move objects behind the occluder, you get all the same kind of motion effects that I was talking about with infants. But it also works if the object is stationary. So the chicks are better than the infants in that case. What about object permanence?
Here's a task that babies can't solve until they're 12 months old-- well, eight months in this case, 12 months in this case-- that chicks solve the first time they're presented with an imprinted object that moves out of view. I should have said that on the previous slide. The chicks never saw occlusion until the imprinting test. Here, they're imprinted to mom on the first two days of life, but there's no other surfaces in the environment, so they never see her occluded by another surface. Maybe they see her occluded by their own wing or something, but not by other objects in the environment. OK. There's two screens there. A chick is restrained in a Plexiglas box and sees mom disappear behind one of the screens. They will go and search behind that screen for her. Only at 12 months do human infants solve the following problem. Mom has disappeared behind and been found behind this screen five times in a row. Now she goes there, where will chicks search? A baby, until 12 months, will search behind the first place where she was hidden, where they found her in the past. A chick is more like a 12-month-old baby, shows the more mature pattern. He goes where mom actually is. AUDIENCE: Are these images or actual moms? LIZ SPELKE: These are actual moms. This is imprinting to either a ping pong ball in some of these studies or a cylinder in other studies, which dangles on a wire during the imprinting period so you can see that it moves. It moves here, and it kind of dangles and moves behind one screen or another screen. And then the chick is released, and the question is where will the chick go? Yeah. AUDIENCE: Are these other chicks? LIZ SPELKE: I think they imprint them for two days post hatching. So they're hatched in an incubator. In the killer study I'm going to tell you about, the incubator's in the dark. They spend two days in-- I think they spend a day in the dark getting used to just walking around. And then starts the visual experience, and it's controlled, so no occlusion. But you do see this one object that they get to imprint to, and then they later go toward the object. OK? So an existence proof that such a capacity could exist doesn't tell us that it does exist in a young infant, but it could. Here's the one that was done recently, that's my favorite study in this series on solidity. OK, here's a study where the chick is hatched in the dark and spends most of the day for the first three days of its life in the dark. But during a certain period each day, it's put in a Plexiglas box with black walls, a black floor, black ceiling. And through the Plexiglas, there is a single object that dangles that they imprint to. OK? Now, they can't touch the object, so they never get evidence whether this object is solid or not. They can't peck at it, right? They can peck, but they can only peck at this black surface that they can't see or at this transparent surface that they also can't see. So they might learn there are objects in the environment, but they're not going to have any visual characteristics like this guy does. OK? All right. So then after this experience, the chick stays in this box, and in that box, sees a series of events involving this object. In the first set of experiments, the object moves behind one screen and then is revealed there, or moves behind the other screen and then is revealed there. In this particular study, the chick does not get to go out to get to that object. They did a series of studies before this one where they did, but in this study, they don't.
Then they see a second set of events where all they see is mom starts moving toward the space between the two screens. Then there, a screen comes down, so they can't see what happens next. And then when the screen comes up, she is no longer visible, but she then emerges behind either one screen or the other, basically teaching the chicks, in effect, mom can go behind each of these two screens, OK, either of these two screens. And then comes the critical test. In the critical test, mom starts moving toward the midpoint of the two screens. The screen comes down, and when the screen-- sorry, the big occluder-- vision is blocked, and then when that curtain raises again and the two screens are again visible, they've been rotated backward. And across a series of studies, they vary the size of mom and the degree of rotation of the screen. And the question is, will the chick go to the side where the screen's rotation is consistent with a solid object? And they do. I want to argue from that that knowledge of solidity-- well, knowledge in some sense-- representations that accord with solidity is innate in chicks. And what I mean by innate is simply that it's present and functional on first encounters with objects that exhibit that property. This chick has not had the opportunity to peck at these objects, or observe them bumping into each other, or anything like that. And the first time they see this degree of rotation, they make inferences that are consistent with that principle. This doesn't tell us that human infants do it. To be convinced of that, I'd want to see evidence for this ability in an animal that's a much better model for human object cognition. Or I'd want to see more evidence from the chicks to convince me that, actually, the chick is a really good model of human object cognition. But I do think it should encourage us to start thinking about how nervous systems could be wired to exhibit this property in the absence of specific learning and experiences with it. Here are some questions that we could talk about in the Q&A later. I can give you the one-liner on it. On the issue of compositionality, the question is, do babies have a laundry list of rules about how objects are going to behave, or are they building some kind of unitary model of the world that accords with certain general properties? I think the evidence, starting at least at eight months of age-- lower than that, we don't know-- but starting at that age, I think the evidence favors the second possibility. It comes primarily from these beautiful studies that were conducted by Susan Carey and her students and collaborators where they presented infants with an event that violated cohesion. So this could be a sand pile that's poured onto a stage, or a cookie that's broken into two pieces before those pieces are put in boxes, or a block that falls apart when it's hit by another block. And then, she asks, having seen that this object violated this property, do they expect it to accord with the other properties? And the answer is no, they don't. OK? So for example, if you've seen, there's-- in that causality study where an object moves behind a screen, and then another thing starts to move-- if instead of starting to move, it falls apart-- it violates cohesion, falls apart-- infants no longer expect the first object to hit it. OK? Maybe it fell apart on its own, OK? They don't assume that it's going to behave like an object in other respects. 
They also don't assume that sand piles will move continuously, that they won't pass through surfaces, that a cookie that is broken in two-- if you have two cookies in one box and one cookie in the other, the babies crawl to the two. But if you have one large cookie, and you break it in two and put it in one box, they're neutral between those two options. OK? So it looks like the system is acting like an interconnected whole. What the babies learn about one aspect of an object's behavior bears on the inferences they make about other aspects of its behavior. The final thing I want to end with that I think makes this point is very recent work that was conducted by Lisa Feigenson-- that I think Laura Schulz may talk about as well. Josh mentioned it last Friday, but didn't really describe it-- that I think suggests that at least at the end of infancy-- and again, we don't know what's happening earlier-- infants' understanding of objects isn't only forming an interconnected whole, but it centers on abstract notions about the causes of object motions. So these are studies that picked up on some old findings from a variety of labs. Here's one that Baillargeon had worked on, and I did studies on it as well, where infants see an object that's moving in an array that has two barriers, but the barriers are mostly hidden by a screen, and the object moves so that it's fully hidden by the screen. The question is, where will it come to rest? Looking time studies say babies expect it to come to rest at the first barrier in its path. They look longer if they find it when the screen is raised behind the second barrier, as if it passed through that surface in its path, a solidity violation. Here's another situation that Baillargeon and others have studied, showing that by three months of age, infants expect objects to fall in the absence of support. If you push a truck along a block, under conditions where it either stops when it gets to the end of the block, or continues off the block and doesn't fall, infants look longer in the latter case. And what Feigenson asked is, suppose we don't let infants look as long as they want. Suppose we limit their looking time to just a couple of seconds, so they've looked equally at these two outcomes. What is their internal state? Are they just bewildered in the case where the object did something impossible? Or are they seeking to understand what's happening in the world? Is this a learning signal for them? So she did two experiments-- actually, more than two-- but asked two questions with experiments. One question was, suppose after showing this event, or this event, or this, or this one, after showing an event where an object behaves naturally versus apparently unnaturally, you now try to teach infants, in effect, some new property of the object. So you pick the object up and squeak it. And the question is, do the babies learn that the object makes that sound? And what she finds is that the infants learn much more consistently about the object whose previous behavior was unpredicted than about the object whose previous behavior was predictable. OK. I think this is both good news and bad news about all the looking time methods I've been telling you about. The good news is that when you let babies look for as long as they want, it really does look like looking time is tracking what they're seeking to learn, their exploration and learning.
The bad news is that when you restrict looking time to just a very short amount of time, there are many, many things that could be going on. They could be attending a lot, or they could be attending a little. It's a very crude measure that's not telling us that. But here's a richer measure suggesting differential learning in these two cases. The other study, I think, is even cooler. In the other studies, after showing infants a violation event, she handed infants the object, and allowed them to explore it, and looked to see what they would do. And what she found is that they do different things depending on the nature of violation. So when there was a solidity violation, they take the object, and they bang it on the tray in front of them. OK? In the case of a support violation, they take the object, and they release it. OK? So if they're specifically oriented at 11 months, and we don't know what happened earlier, they're specifically oriented to expected properties of the object that could be relevant to understanding the apparent violation that they saw. To summarize, it looks like there is a system that is growing over the course of infancy, as it's clear from all the questions that we can continue to discuss. There's a lot we don't know about this system. But at no point in development do infants seem to be perceiving just a two-dimensional, disorganized world of sensory experiences. At no point do they seem oriented to events going on in the visual field as opposed to the real world when you test them in these situations where you're asking them questions about properties of the world. They seem to be oriented to properties of the world itself. And to start out already, as early as we can test them, in a few cases, like the center-occluded objects, that means newborns. To start out already with a system which, although it's radically different from ours, we see that most of the things that we know about, most of the information we can get about objects from a scene, they do not get. Nevertheless, we're seeing core skeletal abilities that we continue to use throughout life and that seem to be present to serve as a basis for learning. There have been lots of studies in which infants have been presented with people or other self-propelled objects that have used the same logic as the studies for looking at naive physics to get at something like naive psychology in infancy, what they know about object motion. Most of them are on babies that are older than the babies from the object studies. Those studies ran mostly up to about four months of age. These studies are starting later than that, but they give us a starting point. And here's one slide that tries to capture just about everything that I think we know about what 6- to 12-month-old infants, how they represent agents. First of all, they represent people's actions on objects as directed to goals. So this is a basic study that was conducted-- a whole series of studies, one of many-- conducted by Amanda Woodward, who was the advisor for Jessica Sommerville, who will be here giving a talk on Thursday afternoon. They're very simple studies in which she presents babies with two objects side by side, and then they see a hand reach out and grasp one of those two objects. And the question is, how do babies represent that action? Do they represent it as a motion to a position on the left, or do they represent it as an action with the goal of obtaining the ball? 
And to distinguish those two possibilities, she then reverses the positions of the two objects, presents the hand in alternation, taking a new trajectory to the old object versus the old trajectory to a new object. And the babies look longer when the hand goes to the new object, suggesting not that they can't represent trajectories, but that they care more about-- bigger news is the agent's goal. When the goal changes, that's a bigger change for the infant than when the motion simply changes to a new path and a new endpoint. That's at as young as five months of age. Then there's been studies showing that infants' goal attribution depends on what is visible from the perspective of the agent. So if two objects are present, and the agent consistently reaches for one of them, babies in some sense, in some poorly-understood sense, represent that agent as having a preference for the object that they've chosen to go for over the other object. But if the object they didn't go for is occluded, and there's no evidence that that agent ever saw it, they don't make that preference, inference. OK? Third, infants represent agents as acting efficiently. And this is true whenever they're given evidence for self-propulsion, whether the object that's moving in a self-propelled manner has the features of an agent or not. So with these classic studies by Gergely and his collaborators that continue to the present day, but started, now, I guess 20 years ago, they present infants with two balls that engage in self-propelled motion and even a little kind of interaction. And then one of the balls jumps over a barrier to get to the other ball. And across a series of familiarization trials, the barrier varies in height. The jump is appropriately adapted to it. And the question is, what do infants infer this agent will do when the barrier is taken away? Will it engage in one of its familiar patterns of motion, or will it engage in a new pattern of motion that's more efficient to get to the object? And the finding is, they expect that new motion and look longer at the more superficially familiar but inefficient indirect action. Here are my favorite studies. This is one that Shimon talked about yesterday afternoon. He talked about one version of this. The original study was conducted by Sabina Pauen, and Rebecca Saxe and Susan Carey did interesting extensions on it. It uses the simplest imaginable method. They present babies first with two objects, stationary, side by side, one with animate features, a face and sort of a fuzzy body, tail-like body, the other, an inanimate ball. They're both stationary, and the objects were chosen to be about equally interesting. The babies look about half the time at each one. Then, they see a series of events in which these two objects are stuck together and undergo this very irregular pattern of motion. This was actually some kind of parlor trick toy that was sold for a while. They got them started on this study. There's a mechanism inside the ball that's actually propelling the two objects around. But the question is, which of these two objects does the baby attribute the motion to? And to find that out, they subsequently separated the two objects again, and ask again, which one will the infant look at more? And after seeing this motion, the infant looks more at the one that has the animate features. Although both objects underwent the same motion, they attribute that motion to this guy, not the other guy. 
It follows then that they're perceiving this-- they're representing this guy as causing the other guy's motion. Right? The other guy is not seen as causing its own motion. The motion is being caused by this guy. This is at seven months of age. Now, even stronger evidence that infants infer causal agents come from these beautiful studies that Rebecca Saxe, and Susan Carey, and Josh Tenenbaum conducted 10 years ago now, that went back to this efficient situation where you see an object efficiently going over barriers of variable heights to get to a new position on the stage. The one thing they added is that the object is manifestly inanimate. It's a beanbag. And the kid had a chance to play with it before the study began. They can feel that this is not an animate object. So if this isn't an animate object, and infants are actively trying to explain the motions of objects that they see, they're going to need another kind of explanation. And what Saxe, Tenenbaum, and Carey showed is that infants infer that there is an agent, off-screen, on this side of the screen, that set that object in motion. And they show that by their relative looking time to an event where a hand comes in on that side of the screen, which is consistent with that causal attribution, versus it comes in on the other side, which is inconsistent with it. And really, a pretty manipulation. If in fact babies are making causal attributions here, then you ought to be able to screen them off. If you make a simple change to the method, you show evidence that actually, this object is animate. If it is animate, then you don't need to infer another cause. So they ran that study as well and showed that when you change from an inanimate object to an animate one, you no longer get this effect. OK? So it really looks like they're seeking to explain these events and doing so in accord with the principle that agents can cause not only their own motion, but also can make changes in the world, cause motions and other changes in objects. And a final study that shows this, I told you that if a truck goes behind a screen toward a box, and then the box subsequently collapses, babies do not infer that the truck hit the box. OK? Collapsing takes this outside the domain of physical reasoning for those young babies. OK? But suppose instead of a truck going behind that-- this is work by Susan Carey as well. Sorry, I should have put it on here, Muentener and Carey. If instead of a truck, a hand goes behind that screen, now they infer that the hand did contact the object. OK? So hands can make things move, and they not only can make things move, they can make things do all sorts of things that objects otherwise won't do, like fall apart, OK? All right. So one problem, as I said, with all these studies is that they're all older infants, many of them much older infants like 8- and 10-month-olds, just about all of them 6 months old or older. And when you test younger infants, you get failures on some of these tasks. So these goal attribution tasks work with 5-month-old infants, but they fail with 3-month-old infants. 3-month-old infants seem uncommitted as to where this hand is going to go once the two objects exchange locations. And that raises the question-- to me, the most interesting question-- what kinds of representations of agents, if any, does a younger infant have before they're able to act on the world by manipulating things, which starts around five months of age? 
And I'll tell you just quickly about two kinds of studies that I think get at this-- imperfectly, but they're getting there. One is, again, going back to studies of controlled-reared animals. Chicks, again, in particular. There have been imprinting studies done where chicks are raised in the dark, and all they get to see are video screens in which you get two objects engaging in this simple causal billiard ball type event. One object starts to move, contacts a second object, and at the point of contact, it stops moving, and the other object starts to move. OK? And they see that repeatedly. And now, OK, I told you that if chicks are raised in isolation, but there's a moving object there, they'll imprint to it. I was treating the imprinting object as just an example of an object. But of course, the imprinting object is mom, and she's an agent. So it should be a self-propelled object, really, right? So given a choice between these two, do they selectively imprint to one over the other? And the finding is that they do. The two objects have different features, I think, different colors, maybe different shapes as well. If you now present one of them at one end of their cage and the other at the other end, they'll go to object A over object B as if they saw that as the object that set the other object in motion. But now you could ask, is that because they see A as causal, as causing its own motion? Or is it for other superficial reasons like A moved first, or A was moving at the time of the collision, and B was stationary? So they tested for all of these things one by one, but this is their killer experiment, the test for all of them at once. They present exactly the same events. The only difference is you don't see A initially at rest and starting to move. OK? You never get to see that. There are screens on the two ends of the display, so all you see is that A enters the display already in motion and contacts B. So you have no evidence as to whether A is self-propelled or not, but you see the two objects interacting otherwise in the same way. That makes the effect completely go away. I think this is logically a little like the studies with infants where you show that the thing is animate or not, and it affects whether they expect that there's an agent there or not. It suggests that the chicks are representing A as causing its own motion. They're representing it as causing B's motions, so A is a better object of imprinting than B is. And all of this can be abolished: if A isn't seen to have the first causal property, they don't infer the second. So it's, again, an existence proof kind of argument saying, you can get this kind of system working in an animal that hasn't had prior visual experiences in which they've seen objects in motion or in which they themselves have been-- they haven't had any other encounters with objects, so they haven't been able to move things around. So that's one way of trying to get at early representations. Here's another way that I hope Sommerville will talk about on Thursday night, because she pioneered this method with Amanda Woodward and Amy Needham. You can give a 3-month-old infant, who otherwise wouldn't be reaching for things for another two months, the ability to pick up objects by equipping them with mittens that have Velcro on them and presenting them with Velcro objects. So now they can make things move. And when you do that, you see interesting changes in their representations of these events.
So an infant who has played with an object while wearing a sticky mitten, who subsequently looks at events in which another person wearing that mitten reaches for that object or for another object, now they look like a 5-month-old and represent that reaching as goal-directed. But on the other hand, if they saw those same events without the mittens, they failed. So that's saying that the infant's own action experience can elicit these action representations. And it raises, I think, all sorts of interesting questions about, do infants have to learn one by one what the properties of agents are? Or is it possible that once these representations of goal-directed actions are elicited, we'll see other knowledge of agents already present? So let me just give you one finding that suggests we might get the second. This is a study that Amy Skerry performed rather recently. She was interested in this representation of efficient action before babies are able to reach for and pick up objects themselves. Do they already represent the actions of agents as directed to goals efficiently? Did they already expect that agents will move efficiently to achieve their goals? So to get at that, she gave infants, 3-month-old infants, sticky mittens experience with an object where they got to pick up the object and then showed them events in one-- split them into two different conditions, and in both conditions, they see events in which a person moves on an indirect path to an object. In one of the series of events, there's a barrier in the way, so that motion is the best way to get there, probably the only obvious way to get there. In the other case, the barrier is in the display, but it's out of the way, so this is not an efficient action. And then at test, the barrier is gone in both conditions, and the person either moves efficiently or moves inefficiently. Now, in this control condition, the baby does not expect efficient action. But in the condition where they previously had the sticky mittens experience, which presumably told them that this is actually the person's goal, that they're attempting to get there, once they have that, what seems to follow immediately from that is this efficiency, this principle of efficient action, right, that you'll move to the goal on the most direct path possible, even though you've never seen that direct path. And even though when the infant was playing with the object with the mittens on and picking up objects for the first time in his life, there were no barriers present. They never had to do anything indirect. They could always get that object directly. Yet they're expecting direct action only when they've seen efficient action in another agent. OK. So in summary, it looks like these abilities are all there quite early. They're unlearned, at least some of them are unlearned, in chicks. And they can be elicited before-- at least some of them can be elicited before reaching-- develops in young infants. But infants-- and this is important, I think, for what Alia's going to be talking about-- infants' understanding of agents is limited. It's radically limited, just like their understanding of inanimate objects is. And here are a few, I think, really interesting limits. Although infants are sensitive to what's perceptually accessible to an agent, they don't seem to represent agents as seeing objects. What do I mean by that? Well, here's a experiment with exactly the structure of the Woodward reaching experiment. Two objects present, and a person acts with respect to one of them. 
But instead of reaching for one object, she looks at one of the two objects. And now, you ask, after you exchange the two objects' positions, do babies find that she's doing something new if she looks at the other object? Or do they find she's doing something new if she looks in a different direction than she looked in before? OK? At 12 months, infants view looking as object-directed, by this measure. Younger than 12 months, they do not. They don't seem to view the orientation of the person as a look that's directed to one of these two objects, by that measure. Now, the age at which they succeed is also the first age at which infants start to show this really interesting pattern in their communication with other people, which Mike Tomasello has written about a lot. It's the first age at which they'll start to alternately look at objects, and look at another person, and attempt to engage their attention to the object, by checking back and forth between looks at the person and looks at the object, or by pointing to the object, or by following another person's point to an object. All of that comes in at the end of the first year. It's not there earlier. Younger infants, younger than about 12 months of age, don't even seem to expect that if a person reaches for one of two objects, they'll tend to reach for the object that they were looking at. OK? We thought for sure babies would succeed at that task early. They don't reliably succeed at it until 12 or even 14 months of age. And finally, babies attribute first-order but not second-order goals to agents. So if they see an agent pull on a rake and then reach for an object, they perceive the agent's goal as the rake, not the object that they're reaching for. So very limited representations. Also, as far as we know, none of these are unique to humans. I think this raises all sorts of questions that we might want to talk about in the Q&A later. What's the relationship between infants' representations of agents and their representations of the things that agents act on? Clearly, these should be related in some interesting ways, although agents can do things that objects can't do, like move on their own, and formulate goals, and act on things that are visually accessible to them. Agents also are objects, right? And we're subject to all the constraints on objects. We can't walk through walls. If we want to make something move, we have to contact it. And babies are sensitive to those constraints. And I think this raises all sorts of questions that to date, research on infants hasn't really answered. Is there some hierarchy of representations where you've got all objects, and then you've got these especially talented objects that are agents, right? Or I think one way Tomer has put it is maybe we're all agents, but objects are just really bad agents, right, that can't do very much. That's one possibility. It's also possible that these are just separate systems at the beginning, and they have to get linked together over time. I think these are all answerable questions. I said that babies don't see looking as directed to objects, but they do from birth respond in a pro-social way to looking that's directed to them. So an infant will look longer at someone whose eyes are directed to them than someone whose eyes are looking away. That's true for human infants. It's also true for infant monkeys. If a monkey is presented with this display, and they're over here, they'll look longer than if they're over here. Right?
When they're here, it looks like that guy is looking at them. They also, human and monkey infants engage in eye-to-eye contact with adults from birth. They also, human infants and monkey infants tend to imitate the gestures of other people, interestingly, only when those people are looking at them, not when they close their eyes. Then they'll imitate them as you saw in those beautiful films that Winrich showed from the Ferrari group, attentive to the person and then trying to reproduce the person's action. This has been shown with baby chimpanzees and baby monkeys as well as with human newborn infants. And finally, I said infants don't follow gaze to objects. That's true, but it's also true that if they see a person who's looking directly at them, and then the person's gaze shifts to the side, they'll continue to look at the person, but their attention will undergo a momentary shift in the direction of the shift of the other person's gaze. OK? So this has been shown at two months of age and also with newborn infants. It's been shown with photographs of real faces and also with schematic faces. The way to show that attention is shifting is to present infants with an event that could never happen in the real world. You have an image of a face, the eyes shift to the side, either left or right, and then the face disappears, and a probe appears either on the left or on the right. They'll get to the probe faster if it appears on the side to which the person shifted their gaze. And nice control studies show that it's not just any kind of low level motion in that direction that gets them there. So this could be a sign that, in infancy, direct gaze engages something like a state of engagement with another person, in which evidence about the other person's state of attention-- possibly also emotion, if the questionable literature on empathy emerging early is right-- can be automatically spread from one person to a social partner. So the hypothesis is that infants are finding other potential social partners from the beginning of life, by looking at things like gaze direction, maybe also infant-directed speech, as a signal that someone else is engaging with them, by interpreting patterns of imitation as a communicative signal that somebody is tracking what they do, and acting in kind with them, and responding to shifts of attention by shifting their own mental states in the same direction. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_21_Josh_Tenenbaum_Computational_Cognitive_Science_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOSH TENENBAUM: I'm going to be talking about computational cognitive science. In the brains, minds, and machines landscape, this is connecting the minds and the machines part. And I really want to try to emphasize both some conceptual themes and some technical themes that are complementary to a lot of what you've seen for the first week or so of the class. That's going to include ideas of generative models and ideas of probabilistic programs, which we'll see a little bit here and a lot more in the tutorial in the afternoon. And on the cognitive side, maybe we could sum it up by calling it common sense. Since this is meant to be a broad introduction-- and I'm going to try to cover from some very basic fundamental things that people in this field were doing maybe 10 or 15 years ago up until the state of the art current research-- I want to try to give that whole broad sweep. And I also want to try to give a bit of a sort of philosophical introduction at the beginning to set this in context with the other things you're seeing in the summer school. I think it's fair to say that there are two different notions of intelligence that are both important and are both interesting to members of this center in the summer school. The two different notions are what I think you could call classifying, recognizing patterns in data, and what you could call explaining, understanding, modeling the world. So, again, there's the notion of classification, pattern recognition, finding patterns in data and maybe patterns that connect data to some task you're trying to solve. And then there's this idea of intelligence as explaining, understanding, building a model of the world that you can use to plan on and solve problems with. I'm going to emphasize here notions of explanation, because I think they are absolutely central to intelligence, certainly in any sense that we mean when we talk about humans. And because they get kind of underemphasized in a lot of recent work in machine learning, AI, neural networks, and so on. Like, most of the techniques that you've seen so far in other parts of the class and will continue to see, I think it's fair to say they sort of fall under the broad idea of trying to classify and recognize patterns in data. And there's good reasons why there's been a lot of attention on these recently, particularly coming from the more brain side. Because it's much easier when you go and look in the brain to understand how neural circuits do things like classifying and recognizing patterns. And it's also, I think with at least certain kinds of current technology, much easier to get machines to do this, right? All the excitement in deep neural networks is all about this, right? But what I want to try to convince you of here, and illustrate with a lot of different kinds of examples, is how both of these kinds of approaches are probably necessary, essential to understanding the mind. I won't really bother to try to convince you that the pattern recognition approach is essential, because I take that for granted. But both are essential and, also, that they essentially need each other.
I'll try to illustrate a couple of ways in which they really each solve the problems that the other one needs-- so ways in which ideas like deep neural networks for doing really fast pattern recognition can help to make the sort of explaining understanding view of intelligence much quicker and maybe much lower energy, but also ways in which the sort of explaining, understanding view of intelligence can make the pattern recognition view much richer, much more flexible. What do we really mean? What's the difference between classification and explanation? Or what makes a good explanation? So we're talking about intelligence as trying to explain your experience in the world, basically, to build a model that is in some sense a kind of actionable causal model. And there's a bunch of virtues here, these bullet points under explanation. There's a bunch of things we could say about what makes a good explanation of the world or a good model. And I won't say too much abstractly. I'll mostly try to illustrate this over the morning. But like any kind of model, whether it's sort of more pattern recognition classification style or these more explanatory type models, ideas of compactness, unification, are important, right? You want to explain a lot with a little. OK? There's a term-- if anybody has read David Deutsch's book The Beginning Of Infinity, he talks about this view, in a certain form, of good explanations as being hard to vary, non-arbitrary. OK. That's sort of in common with any way of describing or explaining the world. But some key features of the models we're going to talk about-- one is that they're generative. So what we mean by generative is that they generate the world, right? In some sense, their output is the world, your experience. They're trying to explain the stuff you observe by positing some hidden, unobservable, but really important, causal actionable deep stuff. They don't model a task. That's really important. Because, like, if you're used to something like, you know, end to end training of a deep neural network for classification where there's an objective function and a task, and the task is to map from things you experience and observe in the world to how you should behave, that's sort of the opposite view, right? These are things whose output is not behavior on a task, but whose output is the world you see. Because what they're trying to do is produce or generate explanations. And that means they have to come into contact. They have to basically explain stuff you see. OK. Now, these models are not just generative in this sense, but they're causal. And, again, I'm using these terms intuitively. I'll get more precise later on. But what I mean by that is the hidden or latent variables that generate the stuff you observe are, in some form, trying to get at the actual causal mechanisms in the world-- the things that, if you were then to go act on the world, you could intervene on and move around and succeed in changing the world the way you want. Because that's the point of having one of these rich models is so that you can use it to act intelligently, right? And, again, this is a contrast with an approach that's trying to find and classify patterns that are useful for performing some particular task to detect: oh, when I see this, I should do this. When I see this, I should do that, right? That's good for one task. But these are meant to be good for an endless array of tasks.
Not any task, but, in some important sense, a kind of unbounded set of tasks where given a goal which is different from your model of the world-- you have your goal. You have your model of the world. And then you use that model to plan some sequence of actions to achieve your goal. And you change the goal, you get a different plan. But the model is the invariant, right? And it's invariant, because it captures what's really going on causally. And then maybe the most important, but hardest to really get a handle on, theme-- although, again, we'll try to do this by the end of today-- is that they're compositional in some way. They consist of parts which have independent meaning or which have some notion of meaning, and then ways of hooking those together to form larger wholes. And that gives a kind of flexibility or extensibility that is fundamental, important to intelligence-- the ability to not just, say, learn from little data, but to be able to take what you've learned in some tasks and use it instantly, immediately, on tasks you've never had any training for. It's, I think, really only with this kind of model building view of intelligence that you can do that. I'll give one other motivating example-- just because it will appear in different forms throughout the talk-- just of the difference between classification and explanation as ways of thinking about the world, by thinking about, in particular, planets and just the orbits of objects in the solar system. That could include objects, basically, on any one planet, like ours. But think about the problem of describing the motions of the planets around the sun. Well, there's some phenomena. You can make observations. You could observe them in various ways. Go back to the early stages of modern science when the data by which the phenomena were represented-- you know, things like just measurements of those light spots in the sky, over nights, over years. So here are two ways to capture the regularities in the data. You could think about Kepler's laws or Newton's laws. So just to remind you, these are Kepler's laws. And these are Newton's laws. I won't really go through the details. Probably, all of you know these or have some familiarity. The key thing is that Kepler's laws are laws about patterns of motion in space and time. They specify the shape of the orbits, the shape of the path that the planets trace out in the solar system. Not in the sky, but in the actual 3D world-- the idea that the orbits of the planets are ellipses with the sun at one focus. And then they give some other mathematical regularities that describe, in a sense, how fast the planets go around the sun as a function of the size of the orbit and the fact that they kind of go faster at some places and slower at other places in the orbit, right? OK. But in a very important sense, they don't explain why they do these things, right? These are patterns which, if I were to give you a set of data, a path, and I said, is this a possible planet or not-- maybe there's an undiscovered planet. And this is possibly that, or maybe this is some other thing like a comet. And you could use this to classify and say, yeah, that's a planet, not a comet, right? And, you know, you could use them to predict, right? If you've observed a planet over some periods of time in the sky, then you could use Kepler's laws to basically fit an ellipse and figure out where it's going to be later on. That's great. But they don't explain. In contrast, Newton's laws work like this, right?
Again, there's several different kinds of laws. There's, classically, Newton's laws of motion. These ideas about inertia and F equals ma and every action produces an equal and opposite reaction, again, don't say anything about planets. But they really say everything about force. They talk about how forces work and how forces interact and combine and compose-- compositional-- to produce motion or, in particular, to produce the change of motion. That's acceleration or the second derivative of position. And then there's this other law, the law of gravitational force, so the universal gravitation, which specifies in particular how you get one particular force-- the one we call gravity-- as a function of the masses of the two bodies and the squared distance between them and some unknown constant, right? And the idea is you put these things together and you get Kepler's laws. You can derive the fact that the planets have to go that way from the combination of these laws of motion and the law of gravitational force. So there's a sense in which the explanation is deeper and that you can derive the patterns from the explanation. But it's a lot more than that. Because these laws don't just explain the motions of the planets around the sun, but a huge number of other things. Like, for example, they don't just explain the orbits of the planets, but also other things in the solar system. Like, you can use them to describe comets. You can use them to describe the moon going around the planets. And you can use them to explain why the moon goes around the Earth and not around the sun in that sense, right? You can use them to explain not just the motions of the really big things in the solar system, but the really little things like, you know, this, and to explain why when I drop this or when Newton famously did or didn't drop an apple or had an apple drop on his head, right? That, superficially, seems to be a very different pattern, right? It's something going down in your current frame of reference. But the very same laws describe exactly that and explain why the moon goes around the Earth, but the bottle or the apple goes down in my current experience of the world. In terms of things like causal and actionable ideas, they explain how you could get a man to the moon and back again or how you could build a rocket to escape the gravitational field to not only get off the ground the way we're all on the ground, but to get off or out of orbiting around and get to orbiting some other thing, right? And it's all about compositionality as well as causality. In order to escape the Earth's gravitational field or get to the moon and back again, there's a lot of things you have to do. But one of the key things you have to do is generate some significant force to oppose-- be stronger than-- gravity. And, you know, Newton really didn't know how to do that. But some years later, people figured out, you know, by chemistry and other things-- explosions, rockets-- how to do some other kind of physics which could generate a force that was powerful enough for an object the size of a rocket to go against gravity and to get to where you need to be and then to get back. So the idea of a causal model, which in this case is the one based on forces, and compositionality-- the ability to take the general laws of forces, laws about one particular kind of force that's generated by this mysterious thing called mass, some other kinds of forces generated by exploding chemicals-- put those all together is hugely powerful.
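To make that derivation concrete, here is a minimal sketch-- my own illustration, not code from the course-- showing how Kepler's elliptical orbits simply fall out of Newton's laws when you integrate them numerically. The units, constants, initial conditions, and step size are all arbitrary illustrative assumptions (G times M is set to 1):

```python
# Minimal sketch: integrate Newton's F = ma with inverse-square gravity
# and watch a Kepler ellipse emerge. All constants here are assumptions
# chosen for illustration, not physical values.

GM = 1.0    # gravitational parameter G*M of the "sun" (assumed units)
dt = 1e-3   # integration time step

# Initial state chosen to give a bound, non-circular (eccentric) orbit.
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8

def accel(x, y):
    """Newton's law of universal gravitation: acceleration toward the
    origin with magnitude GM / r^2 (the planet's own mass cancels)."""
    r2 = x * x + y * y
    r = r2 ** 0.5
    return -GM * x / (r2 * r), -GM * y / (r2 * r)

radii = []
ax, ay = accel(x, y)
for _ in range(100_000):
    # Leapfrog (velocity Verlet) integration of the equations of motion.
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    radii.append((x * x + y * y) ** 0.5)

# The distance to the sun oscillates between two fixed extremes: the
# planet retraces a closed ellipse with the sun at one focus --
# Kepler's first law, derived rather than stipulated.
print("perihelion ~", min(radii), " aphelion ~", max(radii))
```

The compositional point is visible in the code: accel is just one force. Add a second term for a rocket's thrust and the same integration loop predicts escape trajectories instead of orbits.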
And, of course, this as an expression of human intelligence-- you know, the moon shot is a classic metaphor. Demis used it in his talk. And I think if we really want to understand the way intelligence works in the human mind and brain that could lead to this, you have to go back to the roots of intelligence. You've heard me say this before. And I'm going to do this more by today. We want to go back to the roots of intelligence in even very young children where you already see all of this happening, right? OK. So that's the big picture. I'll just point you. If you want to learn more about the history of this idea, a really nice thing to read is this book by Kenneth Craik. He was an English scientist, sort of a contemporary of Turing, who also died tragically early, although from different tragic causes. He was, you know, one of the first people to start thinking about this topic of brains, minds, and machines, cybernetics type ideas, using math to describe how the brain works, how the mind might work in a brain. As you see when you read this quote, he didn't even really know what a computer was. Because it was pre-Turing, right? But he wrote this wonderful book, very short book. And I'll just quote here from one of the chapters. The book was called The Nature Of Explanation. And it was sort of both a philosophical study of that and how explanation works in science, like some of the ideas I was just going through, but also really arguing in very common sense and compelling ways why this is a key idea for understanding how the mind and the brain works. And he wasn't just talking about humans. Well, you know, these ideas have their greatest expression in some form, their most powerful expression, in the human mind. They're also important ones for understanding other intelligent brains. So he says here, "one of the most fundamental properties of thought is its power of predicting events. It enables us, for instance, to design bridges with a sufficient factor of safety instead of building them haphazard and waiting to see whether they collapse. If the organism carries a small-scale model of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it." So he's just really summing up this is what intelligence is about-- building a model of the world that you can manipulate and plan on and improve, think about, reason about, all that. And then he makes this very nice analogy, a kind of cognitive technology analogy. "Most of the greatest advances of modern technology have been instruments which extended the scope of our sense organs, our brains, or our limbs-- such are telescopes and microscopes, wireless, calculating machines, typewriters, motor cars, ships, and airplanes." Right? He's writing in 1943, or that's when the book was published, writing a little before that. Right? He didn't even have the word computer. Or back then, computer meant something different-- people who did calculations, basically. But same idea-- that's what he's talking about. He's talking about a computer, though he doesn't yet have the language quite to describe it.
"Is it not possible, therefore, that our brains themselves utilize comparable mechanisms to achieve the same ends and that these mechanisms can parallel phenomena in the external world as a calculating machine can parallel with development of strains in a bridge?" And what he's saying is that the brain is this amazing kind of calculating machine that, in some form, can parallel the development of forces in all sorts of different systems in the world and not only forces. And, again, he doesn't have the vocabulary in English or the math really to describe it formally. That's, you know, why this is such an exciting time to be doing all these things we're doing is because now we're really starting to have the vocabulary and the technology to make good on this idea. OK. So that's it for the big picture philosophical introduction. Now, I'll try to get more concrete with the questions that have motivated not only me, but many cognitive scientists. Like, why are we thinking about these issues of explanation? And what are our concrete handles? Like, let's give a couple of examples of ways we can study intelligence in this form. And I like to say that the big question of our field-- it's big enough that it can fold in most, if not all, of our big questions-- is this one. How does the mind get so much out of so little, right? So across cognition wherever you look, our minds are building these rich models of the world that go way beyond the data of our senses. That's this extension of our sense organs that Craik was talking about there, right? From data that is altogether way too sparse, noisy, ambiguous in all sorts of ways, we build models that allow us to go beyond our experience, to plan effectively. How do we do it? And you could add-- and I do want to go in this direction. Because it is part of how when we relate the mind to the brain or these more explanatory models to more sort of pattern classification models, we also have to ask not only how do you get such a rich model of the world from so little data, but how do you do it so quickly? How do you do it so flexibly? How do you do it with such little energy, right? Metabolic energy is an incredible constraint on computation in the mind and brain. So just to give some examples-- again, these are ones that will keep coming up here. They've come up in our work. But they're key ones that allow us to take the perspective that you're seeing today and bring it into contact with the other perspectives you're seeing in the summer school. So let's look at visual scene perception. This is just a snapshot of images I got searching on Google Images for, I think, object detection, right? And we've seen a lot of examples of these kinds of things. You can go to the iCub and see its trainable object detectors. We'll see more of this when Amnon, the Mobileye guy, comes and tells us about really cool things they've done to do object detection for self-driving cars. You saw a lot of this kind of thing in robotics before. OK. So what's the basic sort of idea, the state of the art, in a lot of higher level computer vision? It's getting a system that learns to put boxes around regions of an image that contains some object of interest that you can label with the word, like person or pedestrian or car or horse, or various parts of things. Like, you might not just put a box around the bicycle, but you might put a box around the wheel, handlebar, seat, and so on. OK. And in some sense, you know, this is starting to get at some aspect of computer vision, right? 
Several people have quoted from David Marr who said, you know, vision is figuring out what is where from images, right? But Marr meant something that goes way beyond this, way beyond putting boxes in images with single-word labels. And I think you just have to, you know, look around you to see that your brain's ability to reconstruct the world, the whole three-dimensional world with all the objects and surfaces in it, goes so far beyond putting a few boxes around some parts of the image, right? Even put aside the fact that when you actually do this in real time on a real system, you know, the mistakes and the gaps are just glaring, right? But even if you could do this, even if you could put a box around all the things that we could easily label, you look around the world. You see so many objects and surfaces out there, all actionable. This is what I mean when I talk about causality, right? Think about, you know, if somebody told me that there was some treasure hidden behind the chair that has Timothy Goldsmith's name on it, I know I could go around looking for the chair. I think I saw it over there, right? And I know exactly what I'd have to do. I'd have to go there, lift up the thing, right? That's just one of the many plans I could make given what I see in this world. If I didn't know that that was Timothy Goldsmith's chair, somewhere over there there's the Lily chair, right? OK. So I know that there's chairs here. There's little name tags on them. I could go around, make my way through looking at the tags, and find the one that says Lily and then, again, know what I have to do to go look for the treasure buried under it, right? That's just one of, really, this endless number of tasks that you can do with the model of the world around you that you've built from visual perception. And we don't need to get into a debate of, you know-- here, we can do this in a few minutes if you want-- about the difference between, like say for example, what Jim DiCarlo might call core object recognition or the kind of stuff that Winrich is studying where, you know, you show a monkey just a single object against maybe a cluttered background or a single face for 100 or 200 milliseconds, and you ask a very important question. What can you get in 100 milliseconds in that kind of limited scene? That's a very important question. But the convergence of visual neuroscience on that problem has enabled us to really understand a lot about the circuits that drive the first initial paths of some aspects of high-level vision, right? But that is really only getting at the classification or pattern detection part of the problem. And the other part of the problem, figuring out the stuff in the world that causes what you see, that is really the actionable part of things to guide your actions in the world. We really are still quite far from understanding that at least with those kinds of methods. Just to give a few examples-- some of my favorite kind of hard object detection examples, but ones that show that your brain is really doing this kind of thing even from a single image. You know, it doesn't just require a lot of extensive exploration. So let's do some person detecting problems here. So here's a few images. And let's just start with the one in the upper left. You tell me. Here, I'll point with this, so you can see it on the screen. How many people are in this upper left image? Just tell me. AUDIENCE: Three. AUDIENCE: About 18. JOSH TENENBAUM: About 18? OK. OK. Yeah. That's a good answer, yeah.
There are somewhere between 20 and 30 or something. Yeah. That was even more precise than I was expecting. OK. Now, I don't know. This would be a good project if somebody is still looking for a project. If you take the best person detector that you can find out there or that you can build from however much training data you find labeled on the web, how many of those people is it going to detect? You know, my guess is, at best, it's going to detect just five or six-- just the bicyclists in the front row. Does that seem fair to say? Even that will be a challenge, right? Whereas, not only do you have no trouble detecting the bicyclists in the front row, but all the other ones back there, too, even though for many of them all you can see is like a little bit of their face or neck or sometimes even just that funny helmet that bicyclists wear. But your ability to make sense of that depends on understanding a lot of causal stuff in the world-- the three-dimensional structure of the world, the three-dimensional structure of bodies in the world, some of the behaviors that bicyclists tend to engage in, and so on. Or take the scene in the upper right there, how many people are in that scene? AUDIENCE: 350. JOSH TENENBAUM: 350. Maybe a couple of hundred or something. Yeah, I guess. Were you counting all this time? No. That was a good estimate. AUDIENCE: No. JOSH TENENBAUM: Yeah, OK. The scene in the lower left, how many people are there? AUDIENCE: 100? JOSH TENENBAUM: 100-something, yeah. The scene in the lower right? AUDIENCE: Zero. JOSH TENENBAUM: Zero. Was anybody tempted to say two? Were you tempted to say two as a joke or seriously? Both are valid responses. AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: Yeah. OK. So, again, how do we solve all those problems, including, for that one on the bottom-- maybe it takes a second or so-- knowing that, you know, there's actually zero there. You know, it's the hats, the graduation hats, that are the cues to people in the other scenes. But here, again, because we know something about physics and the fact that people need to breathe-- or just tend to not bury themselves all the way up to the tippy top of their head, unless it's like some kind of Samuel Beckett play or something, Graduation Endgame-- then, you know, there's almost certainly nobody in that scene. OK. Now, all of those problems, again, are really way beyond what current computer vision can do and really wants to do. But I mean, I think, you know, the aspect of scene understanding that really taps into this notion of intelligence, of explaining and modeling the causal structure of the world, should be able to do all that. Because we can, right? But here's a problem which is one that motivates us on the vision side that's somewhere in between these sort of ridiculously hard by current standards problems and one that, you know, people can do now. This is a kind of problem that I've been trying to put out there for the computer vision community to think about in a serious way. Because it's a big challenge, but it's not ridiculously hard. OK. So here, this is a scene of an airplane full of computer vision researchers, in fact, going to last year's CVPR conference. And, again, how many people are in the scene? AUDIENCE: 20? JOSH TENENBAUM: 20, 50? Yeah, something like that. Again, you know, more than 10, less than 500, right? You could count. Well, you can count, actually. Let's try that. So, you know, just do this mentally along with me. Just touch, in your mind, all the people.
You know, 1, 2, 3, 4-- well, it's too hard to do it with the mouse. Da, da, da, da, da-- you know, at some point it gets a little bit hard to see exactly how many people are standing in the back by the restroom. OK. But it's amazing how much you can, with just the slightest little bit of effort, pick out all the people even though most of them are barely visible. And it's not only that. It's not just that you can pick them out. While you only see a very small part of their bodies, you know where all the rest of their body is to some degree of being able to predict and act if you needed to, right? So to sort of probe this, here's a kind of little experiment we can do. So let's take this guy here. See, you've just got his head. And though you see his head, think about where the rest of his body is. And in particular, think about where his right hand is in the scene. You can't see his right hand. But in some sense, you know where it is. I'll move the cursor. And you just hum when I get to where you think his right hand is if you could see, like if everything was transparent. AUDIENCE: Yeah. AUDIENCE: Yeah. JOSH TENENBAUM: OK. Somewhere around there. All right, how about let's take this guy. You can see his scalp only and maybe a bit of his shoulder. Think about his left big toe. OK? Think about that. And just hum when I get to where his left big toe is. AUDIENCE: Yeah. AUDIENCE: Yeah. JOSH TENENBAUM: Somewhere, yeah. All right, so you can see we did an instant experiment. You don't even need Mechanical Turk. It's like recording from neurons, only you're each being a neuron. And you're humming instead of spiking. But it's amazing how much you can learn about your brain just by doing things like that. You've got a whole probability distribution right there, right? And that's a meaningful distribution. You weren't just hallucinating, right? You were using a model, a causal model, of how bodies work and how other three-dimensional structures work to solve that problem. OK. This isn't just about bodies, right? Our ability to detect objects, like to detect all the books on my bookshelf there-- again, most of which are barely visible, just a few pixels, a small part of each book, or the glasses in this tabletop scene there, right? I don't really know any other way you can do this. Like, any standard machine learning-based book detector is not going to detect most of those books. Any standard glass detector is not going to detect most of those glasses. And yet you can do it. And I don't think there's any alternative to saying that in some sense, as we'll talk more about it in a little bit, you're kind of inverting the graphics process. In computer science now, we call it graphics. We maybe used to call it optics. But the way light bounces off the surfaces of objects in the world and comes into your eye, that's a causal process that your visual system is in some way able to invert, to model and go from the observable to the unobservable stuff, just like Newton was doing with astronomical data. [A toy sketch of this inverse-graphics idea appears below.] OK. Enough on vision for now, sort of. Let's go from actually just perceiving this stuff out there in the world to forming concepts and generalizing. So a problem that I've studied a lot, that a lot of us have studied a lot in this field, is the problem of learning concepts and, in particular, one very particular kind of concept, which is object kinds, like categories of objects, things we could label with a word.
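Here is the promised sketch of inverse graphics as analysis-by-synthesis-- my own toy illustration, not code from the course. Everything in it is an assumption for illustration: the "scene" is just a position and a width, and render() stands in for a real 3D graphics engine. The inference move is the real point: run the causal model forward and search for the hidden scene that best explains the observed image.

```python
import random

# Toy inverse graphics: recover an unobservable scene from an observed
# "image" by inverting a forward rendering process. The 1D scene and
# renderer here are stand-in assumptions; real systems put an actual
# graphics engine in place of render().

def render(scene):
    """Forward causal model, scene -> image: a bright box of a given
    width at a given position on a 20-pixel strip."""
    pos, width = scene
    return [1.0 if pos <= i < pos + width else 0.0 for i in range(20)]

def fit(image, observed):
    """Scoring function: how well a rendered hypothesis explains the
    data (negative squared pixel error, a likelihood surrogate)."""
    return -sum((a - b) ** 2 for a, b in zip(image, observed))

observed = render((7, 5))   # the world, seen only through its image

# Analysis by synthesis: propose scenes, render each one, and keep the
# hypothesis whose synthesis best matches the observation.
best, best_score = None, float("-inf")
for _ in range(5_000):
    hypothesis = (random.randrange(20), random.randrange(1, 10))
    s = fit(render(hypothesis), observed)
    if s > best_score:
        best, best_score = hypothesis, s

print("inferred scene:", best)   # recovers (7, 5)
```

Occlusion is why this framing matters for the examples above: a person's hidden right hand constrains the image only through the causal model of bodies, so searching over body hypotheses is what lets you "see" it.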
This kind of concept learning is one of the very most obvious forms of interesting learning that you see in young children, part of learning language. But it's not just about language. And the striking thing when you look at, say, a child learning words-- just in particular let's say, words that label kinds of objects, like chair or horse or bottle, ball-- is how little labeled data, or how little task-relevant data, is required. A lot of other data is probably used in some way, right? And, again, this is a theme you've heard from a number of the other speakers. But just to give you some of my favorite examples of how we can learn object concepts from just one or a few examples, well, here's an example from some experimental stimuli we use where we just made up a whole little world of objects. And in this world, I can teach you a new name, let's say tufa, and give you a few examples. And, again, you can now go through. We can try this as a little experiment here and just say, you know, yes or no. For each of these objects, is it a tufa? So how about this, yes or no? AUDIENCE: Yes. JOSH TENENBAUM: Here? AUDIENCE: No. JOSH TENENBAUM: Here? AUDIENCE: No. JOSH TENENBAUM: Here? AUDIENCE: No. JOSH TENENBAUM: Here? AUDIENCE: No. JOSH TENENBAUM: Here? AUDIENCE: Yes. JOSH TENENBAUM: Here? AUDIENCE: No. JOSH TENENBAUM: Here? AUDIENCE: Yes. No. No. No. No. Yes. JOSH TENENBAUM: Yeah. OK. So first of all, how long did it take you for each one? I mean, it basically didn't take you any longer than it takes in one of Winrich's experiments to get the spike seeing the face. So you learned this concept, and now you can just use it right away. It's far less than a second of actual visual processing. And there was a little bit of a latency. This one's a little more uncertain here, right? And you saw that in that it took you maybe almost twice as long to make that decision. OK. That's the kind of thing we'd like to be able to explain. And that means how can you get a whole concept? It's a whole new kind of thing. You don't really know much about it. Maybe you know it's some kind of weird plant on this weird thing. But you've got a whole new concept and a whole entry into a whole, probably, system of concepts. Again, several notions of being quick-- sample complexity, as we say, just one or a few examples, but also the speed-- the speed in which you formed that concept and the speed in which you're able to deploy it in now recognizing and detecting things. Just to give one other real world example, so it's not just we make things up-- but, for example, here's an object. Just how many know what this thing is? Raise your hand if you do. How many people don't know what this thing is? OK. Good. So this is a piece of rock climbing equipment. It's called a cam. I won't tell you anything more than that. Well, maybe I'll tell you one thing, because it's kind of useful. Well, I mean, you may or may not even need to-- yeah. This strap here is not technically part of the piece of equipment. But it doesn't really matter. OK. So anyway, I've given you one example of this new kind of thing for most of you. And now, you can look at a complex scene like this climber's equipment rack. And tell me, are there any cams in this scene? AUDIENCE: Yes. JOSH TENENBAUM: Where are they? AUDIENCE: On top. JOSH TENENBAUM: Yeah. The top. Like here? AUDIENCE: No. Next to there. JOSH TENENBAUM: Here. Yeah. Right, exactly. How about this scene, any? AUDIENCE: No. AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: There's none of that-- well, there's a couple.
Anyone see the ones over up in the upper right up here? AUDIENCE: Yeah. JOSH TENENBAUM: Yeah. They're hard to see. They're really dark and shaded, right? But when I draw your attention to it, and then you're like, oh yeah. I see that, right? So part of why I give these examples is they show how the several examples I've been giving, like the object concept learning thing, interact with the vision, right? I think your ability to solve tasks like this rests on your ability to form this abstract concept of this physical object. And notice all these ones, they're different colors. The physical details of the objects are different. It's only a category of object that's preserved. But your ability to recognize these things in the real world depends on, also, the ability to recognize them in very different viewpoints under very different lighting conditions. And if we want to explain how you can do this-- again, to go back to composability and compositionality-- we need to understand how you can put together the kind of causal model of how scenes are formed, that vision is inverting-- this inverse graphics thing-- with the causal model of something about how object concepts work, and compose them together to be able to learn a new concept of an object that you can also recognize new instances of the kind of thing in new viewpoints and under different lighting conditions than the really wonderfully perfect example I gave you here with nice lighting and a nice viewpoint. We can push this to quite an extreme. Like, in that scene in the upper right, do you see any cams there? AUDIENCE: Yeah. JOSH TENENBAUM: Yeah. How many are there? AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: Quite a lot, yeah, and, like, all occluded and cluttered. Yeah. Amazing that you can do this. And as we'll see in a little bit, what we do with our object concepts-- and these are other ways to show this notion of a generative model-- we don't just classify things. But we can use them for all sorts of other tasks, right? We can use them to generate or imagine new instances. We can parse an object out into parts. This is another novel, but real object-- the Segway personal thing. Which, again, probably all of you know this, right? How many people have seen those Segways before, right? OK. But you all probably remember the first time you saw one on the street. And whoa, that's really cool. What's that new thing? And then somebody tells you, and now you know, right? But it's partly related to your ability to parse out the parts. If somebody says, oh, my Segway has a flat tire, you kind of know what that means and what you could do, at least in principle, to fix it, right? You can take different kinds of things in some category like vehicles and imagine ways of combining the parts to make yet other new either real or fanciful vehicles, like that C to the lower right there. These are all things you do from very little data from these object concepts. Moving on and then both back to some examples you saw Tomer and I talk about on the first day in our brief introduction and what we'll get to more by the end of today, examples like these. So Tomer already showed you the scene of the red and the blue ball chasing each other around. I won't rehearse that example. I'll show you another scene that is more famous. OK. Well, so for the people who haven't seen it, you can never watch it too many times. Again, like that one, it's just some shapes moving around.
It was done in the 1940s, that golden age for cognitive science as well as many other things. And much lower technology of animation, it's like stop-action animation on a table top. But just like the scene on the left which is done with computer animation, just from the motion of a few shapes in this two-dimensional world, you get so much. First of all, you get physics. Let's watch it again. It looks like there's a collision. It's just objects, shapes moving. But it looks like one thing is banging into another. And it looks like they're characters, right? It looks like the big one is kind of bullying the other one. It's sort of backed him up against the wall scaring them off, right? Do you guys see that? The other one was hiding. Now, this one goes in to go after him. It starts to get a little scary, right? Cue the scary music if it was a silent movie. Doo, doo, doo, doo, doo, OK. You can watch the end of it on YouTube if you want. It's quite famous. So I won't show you the end of it. But in case you're getting nervous, don't worry. It ends happily, at least for two of the three characters. From some combination of all your experiences in your life and whatever evolution genetics gave you before you came out into the world, you've built up a model that allows you to understand this. And then it's a separate, but very interesting, question and harder one. How do you get to that point, right? The question of the development of the kind of commonsense knowledge that allows you to parse out just the motion into both forces, you know, one thing hitting another thing, and then the whole mental state structure and the sort of social who's good and who's bad on there-- I mean, because, again, most people when they see this and think about it a little bit see some of the characters as good and others as bad. How that knowledge develops is extremely interesting. We're going to see a lot more of the experiments, how we study this kind of thing in young children, next week. And we'll talk more about the learning next week. We'll see how much of that I get to. What I want to talk about here is sort of general issues of how the knowledge works, how you deploy it, how you make the inferences with the knowledge, and a little bit about learning. Maybe we'll see if we have time for that at the end. But there'll be more of that next week. I think it's important to understand what the models are, these generative models that you're building of the world, before you actually study learning. I think there's a danger if you study learning. Without having the right target of learning, you might be-- to take a classic analogy-- trying to get to the moon by climbing trees. How about this? Just to give one example that is familiar, because we saw this wonderful talk by Demis-- and I think many people had seen the DeepMind work. And I hope everybody here saw Demis' talk. This is just a couple of slides from their Nature paper, where, again, they had this deep Q-network, which is I think a great example of trying to see how far you can go with this pattern recognition idea, right? In a sense, what this network does, if you remember, is it has a bunch of sort of convolutional layers and fully connected layers. But it's mapping. It's learning a feedforward mapping from images to joystick action. So it's a perfect example of trying to solve interesting problems of intelligence. I think that the problems of video gaming AI are really cool ones.
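As a point of reference, here is a minimal sketch of that feedforward image-to-action mapping-- my reconstruction, not the course's or DeepMind's code. The layer sizes follow my reading of the Nature paper's architecture (four stacked 84x84 frames in, one Q-value per action out) and should be treated as assumptions; training, roughly, regresses targets of the form r + gamma * max over a' of Q(s', a'):

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Feedforward mapping from stacked game frames to one Q-value per
    joystick action: pure pattern recognition, with no explicit model
    of the game's objects, physics, or goals."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # Q(s, a) for each action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: a batch of 4 stacked 84x84 grayscale frames in [0, 1]
        return self.net(frames)

q = DQN(n_actions=18)                # 18 = full Atari joystick action set
state = torch.rand(1, 4, 84, 84)     # a fake "screen", just to check shapes
greedy_action = q(state).argmax(dim=1)
print(greedy_action)
```

Note what is absent from the sketch: nothing in it names platforms, igloos, fish, or temperature. Everything the system knows has to be distilled into those weights from reward-labeled experience, which is one way to see why so many hours of play are needed.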
With this pattern classification, they're basically trying to find patterns of pixels in Atari video games that are diagnostic of whether you should move your joystick this way or that way or press the button this way or that way, right? And they showed that that can give very competitive performance with humans when you give it enough training data and with clever training algorithms, right? But I think there's also an important sense in which what this is doing is quite different from what humans are doing when they're learning to play one of these games. And, you know, Demis, I think, is quite aware of this. He made some of these points in his talk and, informally, afterwards, right? There's all sorts of things that a person brings to the problem of learning an Atari video game, just like your question of what do you bring to learning this. But I think from a cognitive point of view, the real problem of intelligence is to understand how learning works with the knowledge that you have and how you actually build up that knowledge. I think that at least the current DeepMind system, the one that was published a few months ago, is not really getting at that question. It's trying to see how much you can do without really a causal model of the world. But as I think Demis showed in his talk, that's a direction, among many others, that I think they realized they need to go in. A nice way to illustrate this is just to look at one particular video game. This is a game called Frostbite. It's one of the ones down here on this chart, which the DeepMind system did particularly poorly on in terms of getting only about 6% performance relative to humans. But I think it's interesting and informative. And it really gets to the heart of all of the things we're talking about here. To contrast how the DeepMind system as well as other attempts to do sort of powerful scalable deep reinforcement learning, I'll show you another more recent result from a different group in a second. Contrast how those systems learn to play this video game with how a human child might learn to play a game, like that kid over there who's watching his older brother play a game, right? So the DeepMind system, you know, gets about 1,000 hours of game play experience, right? And then it chops that up in various interesting ways with the replay that Demis talked about, right? But when we talk about getting so much from so little, the basic data is about 1,000 hours of experience. But I would venture that a kid learns a lot more from a lot less, right? The way a kid actually learns to play a video game is not by trial and error for 1,000 hours, right? I mean, it might be a little bit of trial and error themselves. But, often, it might be just watching someone else play and say, wow, that's awesome. I'd like to do that. Can I play? My turn. My turn-- and wrestling for the joystick and then seeing what you can do. And it only takes a minute, really, to figure out if this game is fun, interesting, if it's something you want to do, and to sort of get the basic hang of things, at least of what you should try to do. That's not to say you'd be able to do it. So I mean, unless you saw me give a talk, has anybody played this game before? OK. So perfect example-- let's watch a minute of this game and see if you can figure out what's going on. Think about how you learn to play this game, right? Imagine you're watching somebody else play.
This is a video of not the DeepMind system, but of an expert human game player, a really good human playing this, like that kid's older brother. [VIDEO PLAYBACK] [END PLAYBACK] OK. Maybe you've got the idea. So, again, only people who haven't seen before, so how does this game work? So probably everybody noticed, and it's maybe so obvious you didn't even mention it, but every time he hits a platform, there's a beep, right? And the platform turns blue. Did everybody notice that? Right. So it only takes like one or two of those, maybe even just one. Like, beep, beep, woop, woop, and you get that right away. That's an important causal thing. And it just happened that this guy is so good, and he starts right away. So he goes, ba, ba, ba, ba, ba, and he's doing it about once a second. And so there's an illusory correlation-- it looks like hitting the platforms is what makes the temperature counter go down. And the same part of your brain that figures out the actually important and true causal thing going on, the first thing I mentioned, figures out this other thing, which is just a slight illusion. But if you started playing it yourself, you would quickly notice that that wasn't true, right? Because you'd start off there. Maybe you would have thought of that for a minute. But then you'd start off playing. And very quickly, you'd see you're sitting there trying to decide what to do. Because you're not as expert as this person. And the temperature's going down anyway. So, again, you would figure that out very quickly. What else is going on in this game? AUDIENCE: He has to build an igloo. JOSH TENENBAUM: He has to build an igloo, yeah. How does he build an igloo? AUDIENCE: Just by [INAUDIBLE]. JOSH TENENBAUM: Right. Every time he hits one of those platforms, a brick comes into play. And then what, when you say he has to build an igloo? AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: Yeah. And then what happens? AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: What, sir? AUDIENCE: [INAUDIBLE] JOSH TENENBAUM: Right. He goes in. The level ends, he gets some score for it. What about these things? What are these, those little dust on the screen? AUDIENCE: Avoid them. JOSH TENENBAUM: Avoid them. Yeah. How do you know? AUDIENCE: He doesn't actually [INAUDIBLE]. AUDIENCE: We haven't seen an example. JOSH TENENBAUM: Yeah. Well, an example of what? We don't know what's going to happen if he hits one. AUDIENCE: We assume [INAUDIBLE]. JOSH TENENBAUM: But somehow, we assume-- well, it's just an assumption. I think we very reasonably infer that something bad will happen if he hits them. Now, do you remember some of the other objects that we saw on the second screen? There were these fish, yeah. What happens if he hits those? AUDIENCE: He gets more points. JOSH TENENBAUM: He gets points, yeah. And he went out of his way to actually get them. OK. So you basically figured it out, right? It only took you really literally just a minute of watching this game to figure out a lot. Now, if you actually went to go and play it after a minute of experience, you wouldn't be that good, right? It turns out that it's hard to coordinate all these moves. But you would be kind of excited and frustrated, which is the experience of a good video game, right? Anybody remember the Flappy Bird phenomenon? AUDIENCE: Yeah. JOSH TENENBAUM: Right. This was this, like, sensation, this game that was like the stupidest game. I mean, it seemed like it should be trivial, and yet it was really hard. But, again, you just watch it for a second, you know exactly what you're supposed to do.
You think you can do it, but it's just hard to get the rhythms down for most people. And certainly, this game is a little bit hard to time the rhythms. But what you do when you play this game is you get, from one minute, you build that whole model of the world, the causal relations, the goals, the subgoals. And you can formulate clearly what are the right kinds of plans. But to actually implement them in real time without getting killed is a little bit harder. And you could say that, you know, when the child is learning to walk there's a similar kind of thing going on, except usually without the danger of getting killed, just the danger of falling over a little bit. OK. Contrast that learning dynamic-- which, again, I'm just describing anecdotally. One of the things we'd like to do actually as one of our center activities and it's a possible project for students, either in our center or some of you guys if you're interested-- it's a big possible project-- is to actually measure this, like actually study what do people learn from just a minute or two of very, very quick learning experience with these kinds of games, whether they're adults like us who've played other games or even young children who've never played a video game before. But I think what we will find is the kind of learning dynamic that I'm describing. It will be tricky to measure it. But I'm sure we can. And it'll be very different from the kind of learning dynamics that you get from these deep reinforcement networks. Here, this is an example of their learning curves which comes not from the DeepMind paper, but from some slightly more recent work from Pieter Abbeel's group which basically builds on the same architecture, but shows how to improve the exploration part of it in order to improve dramatically on some games, including this Frostbite game. So this is the learning curve for this game you just saw. The black dashed line is the DeepMind system from the Nature paper. And they will tell you that their current system is much better. So I don't know how much better. But, anyway, just to be fair, right? And, again, I'm essentially criticizing these approaches saying, from a human point of view, they're very different from humans. That's not to take away from the really impressive engineering in AI, machine learning accomplishments that these systems are doing. I think they are really interesting. They're really valuable. They have scientific value as well as engineering value. I just want to draw the contrast between what they're doing and some other really important scientific and engineering questions that are the ones that we're trying to talk about here. So the DeepMind system is the black dashed line. And then the red and blue curves are two different versions of the system from Pieter Abbeel's group, which is basically the same architecture, but it just explores a little bit better. And you can see that the x-axis is the amount of experience. It's in training epochs. But I think, if I understand correctly, it's roughly proportional to like hours of gameplay experience. So 100 is like 100 hours. At the end, the DeepQ network in the Nature paper trained up for 1,000. And that's showing there the asymptote. That's the horizontal dashed line. And then this line here is what it does after about 100 iterations. And you can see it's basically asymptoted in that after 10 times as much, there's a time lapse here, right? 10 times as much, it gets up to about there. OK. And impressively, Abbeel's group system does much better.
After only 100 hours, it's already twice as good as that system. But, again, contrast this with humans, both what a human would do and also where the human knowledge is, right? I mean, the human game player that you saw in here, by the time he's finished the first screen, is already like up here, so after about a minute of play. Now, again, you wouldn't be able to be that good after a minute. But essentially, the difference between these systems is that the DeepQ network never gets past the first screen even with 1,000 hours. And this other one gets past the first screen in 100 hours, kind of gets to about the second screen. It's sort of midway through the second screen. In this domain, scientifically, it's really interesting to think about not what happens when you've had 1,000 hours of experience with no prior knowledge, because humans just don't do that on this or really any other task that we can study experimentally. But you can study what humans do in the first minute, which is just this blip like right here. I think if we could get the right learning curve, you know, what you'd see is that humans are going like this. And they may asymptote well before any of these systems do. But the interesting human learning part is what's going on in the first minute, more or less or the first hour, with all of the knowledge that you bring to this task as well as how did you build up all that knowledge. So you want to talk about learning to learn and multiple task learning, so that's all there, too. I'm just saying in this one game that's what you can study I think, or that's where the heart of the matter is of human intelligence in this setting. And I think we should study that. So, you know, what I've been trying to do here for the last hour is motivate the kinds of things we should study if we want to understand the aspect of intelligence that we could call explaining, understanding, the heart of building causal models of the world. We can do it. But we have to do it a little bit differently. In a flash-- that's the first problem I started with-- how do we learn a generalizable concept from just one example? How can we discover causal relations from just a single observed event, like that, you know, jumping on the block and the beep and so on, which sometimes can go wrong like any other perceptual process? You can have illusions. You can see an accident that isn't quite right. And then you move your head, and you see something different. Or you go into the game, and you realize that it's not just touching blocks that makes the temperature go down, but it's just time. How do we see forces, physics, and see inside of other minds even if they're just a few shapes moving around in two dimensions? How do we learn to play games and act in a whole new world in just under a minute, right? And then there's all the problems of language, which I'm not going to go into, like understanding what we're saying and what you're reading here-- also, versions of these problems. And our goal in our field is to understand this in engineering terms, to have a computational framework that explains how this is even possible and, in particular, then how people do it. OK. Now, you know, in some sense cognitive scientists and researchers, we're not the first people to work on this. Philosophers have talked about this kind of thing for thousands of years in the Western tradition.
It's a version of the problem of induction, the problem of how do you know the sun is going to rise tomorrow or just generalizing from experience. And for as long as people have studied this problem, the answer has always been clear in some form that, again, it has to be about the knowledge that you bring to the situation that gives you the constraints that allows you to fill in from this very sparse data. But, again, if you're dissatisfied with that as the answer, of course, you should be. That's not really the answer. That just raises the real problems, right? And these are the problems that I want to try to address in the more substantive part of the morning, which is these questions here. So how do you actually use knowledge to guide learning from sparse data? What form does it take? How can we describe the knowledge? And how can we explain how it's learned? How is that knowledge itself constructed from other kinds of experiences you have combined with whatever, you know, your genes have set up for you? And I'm going to be talking about this approach. And you know, again, really think of this as the introduction to the whole day. Because you're going to see a couple of hours from me and then also from Tomer more hands on in the afternoon. This is our approach. You can give it different kinds of names. I guess I called it generative models, because that's what Tommy likes to call it in CBMM. And that's fine. Like any other approach, you know, there's no one word that captures what it's about. But these are the key ideas that we're going to be talking about. We're going to talk a lot about generative models in a probabilistic sense. So what it means to have a generative model is to be able to describe the joint distribution in some form on your observable data with some kind of latent variables, right? And then you can do probabilistic inference or Bayesian inference, which means conditioning on some of the outputs of that generative model and making inferences about the latent structure, the hidden variables, as well as the other things. But crucially, there's lots of probabilistic models, but these ones have very particular kinds of structures, right? So the probabilities are not just defined in statisticians' terms. But they're defined on some kind of interestingly structured representation that can actually capture the causal and compositional things we're talking about, that can capture the causal structure of the world in a composable way that can support the kind of flexibility of learning and planning that we're talking about. So a key part of how you do this sort of work is to understand how to build probabilistic models and do inference over various kinds of richly structured symbolic representations. And this is the sort of thing which is a fairly new technical advance, right? If you look in the history of AI as well as in cognitive science, there's been a lot of back and forth between people emphasizing these two big ideas, the ideas of statistics and symbols if you like, right? And there's a long history of people sort of saying one of these is going to explain everything and the other one is not going to explain very much or isn't even real, right? For example, some of the debates in language and cognitive science between Chomsky and the people who came before him and the people who came after him had this character, right?
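Backing up to the generative-models passage above for a moment: here is a minimal sketch, in plain Python, of what "conditioning on the outputs of a generative model" and inferring a latent variable looks like. The two-hypothesis coin setup and all of the numbers are invented for illustration; they are not from the lecture.

prior = {"fair": 0.5, "trick": 0.5}          # latent variable: which concept?
p_heads = {"fair": 0.5, "trick": 0.9}        # likelihood under each concept

def posterior(observations):
    # Bayes' rule by enumeration: condition the generative model on data
    # and infer the hidden variable that could have generated it.
    scores = {}
    for concept in prior:
        like = 1.0
        for obs in observations:
            like *= p_heads[concept] if obs == "H" else 1 - p_heads[concept]
        scores[concept] = prior[concept] * like
    z = sum(scores.values())                 # normalizing constant
    return {c: s / z for c, s in scores.items()}

print(posterior(["H", "H", "H"]))            # three heads already favor "trick"

The point of the sketch is only the shape of the computation: a joint distribution over latents and observables, run forward to generate, run backward (by conditioning) to infer.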
Or some of the debates in AI in the first wave of neural networks, people like Minsky, for example, and then some of the neural network people like Jay McClelland initially-- I mean, I'm mixing up chronology there. I'm sorry. But you know, you see this every time whether it's in the '60s or the '80s or now. You know, there's a discourse in our field, which is a really interesting one. I think, ultimately, we have to go beyond it. And what's so exciting is that we are starting to go beyond it. But there's been this discourse of people really saying, you know, the heart of human intelligence is some kind of rich symbolic structures. Oh, and there's some other people who said something about statistics. But that's like trivial or uninteresting or never going to amount to anything. And then some other people often responding to those first people-- it's very much of a back and forth debate. It gets very acrimonious and emotional saying, you know, no, those symbols are magical, mysterious things, completely ridiculous, totally useless, never worked. It's really all about statistics. And somehow something kind of maybe like symbols will emerge from those. And I think we as a field are learning that neither of those extreme views is going to get us anywhere really quite honestly and that we have to understand-- among other things. It's not the only thing we have to understand. But a big thing we have to understand and are starting to understand is how to do probabilistic inference over richly structured symbolic objects. And that means both using interesting symbolic structures to define the priors for probabilistic inference, but also-- and this moves more into the third topic-- being able to think about learning interesting symbolic representations as a kind of probabilistic inference. And to do that, we need to combine statistics and symbols with some kind of notion of what's sometimes called hierarchical probabilistic models. Or it's a certain kind of recursive generative model where you don't just have a generative model that has some latent variables which then generate your observable experience, but where you have hierarchies of these things-- so generative models for generative models or priors on priors. If you've heard of hierarchical Bayes or hierarchical models in statistics, it's a version of the idea. But it's sort of a more general version of that idea where the hypothesis space and priors for Bayesian inference that, you know, you see in the simplest version of Bayes' rule, are not considered to be just some fixed thing that you write down and wire up and that's it. But rather, they themselves could be generated by some higher level or more abstract probabilistic model, a hypothesis space of hypothesis spaces, or priors on priors, or a generative model for generative models. And, again, there's a long history of that idea. So, for example, some really interesting early work on grammar induction in the 1960s introduced something called grammar grammar, where it used a formal grammar to give a hypothesis space for grammars of languages, right? But, again, what we're understanding how to do is to combine this notion of a kind of recursive abstraction with statistics and symbols. And you put all those things together, and you get a really powerful tool kit for thinking about intelligence. There's one other version of this big picture which you'll hear about both in the morning and in the afternoon, which is this idea of probabilistic programs.
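To make "priors on priors" concrete, here is a minimal Python sketch of a generative model for generative models; it also previews the probabilistic-programs idea just named, since sample_model is itself a program whose output is another (probabilistic) program. The coin-family setup is invented purely for illustration.

import random

def sample_model():
    # Level 2: sample a hypothesis space -- what kinds of coins exist at all.
    family = random.choice(["mostly heads", "mostly tails", "anything"])
    # Level 1: sample a specific generative model (a coin bias) from that family.
    if family == "mostly heads":
        bias = random.uniform(0.7, 1.0)
    elif family == "mostly tails":
        bias = random.uniform(0.0, 0.3)
    else:
        bias = random.uniform(0.0, 1.0)
    # The returned function is itself a little generative program for data.
    def coin(n):
        return ["H" if random.random() < bias else "T" for _ in range(n)]
    return family, bias, coin

family, bias, coin = sample_model()
print(family, round(bias, 2), coin(10))      # Level 0: observable data

Inference runs the other way: flips constrain a coin's bias, and the biases of many coins constrain which family you are in, which is one hedged way to read "learning a prior from experience."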
So when I would give a kind of tutorial introduction about five years ago-- oops, sorry-- I would say this. But one of the really exciting recent developments in the last few years is in a sense a kind of unified language that puts all these things together. So we can have a lot fewer words on the slide and just say, oh, it's all a big probabilistic program. I mean, that's way simplifying and leaving out a lot of important stuff. But for the language of probabilistic programs that you're going to see in little bits in my talks and much more in the tutorial later on, part of why it's a powerful language, or really the main reason, is that it just gives a unifying language and set of tools for all of these things, including probabilistic models defined over all sorts of interesting symbolic structures. In fact, any computable model, any probabilistic model defined on any representation that's computable can be expressed as a probabilistic program. It's where Turing universal computation meets probability. And everything about hierarchical models, generative models for generative models, or priors on priors, hypothesis spaces of hypothesis spaces, can be very naturally expressed in terms of probabilistic programs, where basically you have programs that generate other programs. So if your model is a program and it's a probabilistic generative model-- so it's a probabilistic program-- and you want to put down a generative model for generative models that can make learning into inference recursively up in higher levels of abstraction, you just add a little bit more to the probabilistic program. And so it's both a very beautiful and an extremely useful model building tool kit. Now, there's a few other ideas that go along with these things which I won't talk about. The content of what I'm going to try to do for the rest of the morning and what you'll see for the afternoon is just to give you various examples and ways to do things with the ideas on these slides. Now, there's some other stuff which we won't say that much about. Although, I think Tomer, who just walked in-- hey-- you will talk a little about MCMC, right? And we'll say a little bit about item four, because it goes back to these questions I started off with also that are very pressing. And they're really interesting ones for where neural networks meet up with generative models. You know, just how can we do inference and learning so fast and not just from few examples-- that's what this stuff is about-- but just very quickly in terms of time? So we will say a little bit about that. But all of these, every item, component of this approach, is a whole research area in and of itself. There are people who spend their entire career these days focusing on how to make item four work and other people who focus on how to use these kinds of rich probabilistic models to guide planning and decision making, or how to relate them to the brain. Any one of these you could spend more than a career on. But what's exciting to us is that with a bunch of smart people working on these and kind of developing common languages to link up these questions, I think we really are poised to make progress in my lifetime and even more in yours. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_1_Larry_Abbott_Mind_in_the_Fly_Brain.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LARRY ABBOTT: So I put this slide up so you could walk in and say, I came to this course to learn about high level cognition machines that do amazing things. Surely this guy's not going to talk about a fly. But I am going to talk about a fly. And I will try in the beginning to explain to you why. And hopefully, by the end, we'll see. I won't declare victory at all, and you can tell me if it applies or whatever. The reason I'm talking about the fly is this quote, really. I'm going to talk about a part of the fly brain called the mushroom body. And mushroom bodies are in all sorts of insects. And I kind of like this quote. We'd more likely say the mushroom body is the soul of the fly. But what I want to point out here is this part of the quote. Flies are not as intelligent as you, but their intelligence, much of it, comes from this small part of the brain called the mushroom body. If you have free will, they have free will. But unlike you, we can here point to a part of the brain and say, that's where it is. And I'll try to convince you of that as we go on. And so that's why I'm talking about a fly. You can really say there's this small part of the brain called the mushroom body-- I'll show it to you in a second-- where maybe not uniquely, but certainly is a point at which intelligent behavior arises, in which something like free will, whatever makes different flies do different things on different occasions arises. Now, not only that, again, compared to what you might know in a mammal, this is a region of the brain where all the cell types are known. There's genetic control over all the cell types. There is a very good optical level anatomy. And very soon, there'll be an EM level anatomy. So it's a region of the brain that has all of that. And so I thought it would be a good example of where the limits of knowledge of neuroscience are really extended out. You have to put up with the fact that it's only a fly. Sure, it's kind of a stupid animal. But I'll show you some behaviors and what this thing can do, and we'll see how it will go. And maybe just some machine learning. So these are the people involved. This is a talk in which the fraction of the work that I did is not zero, but it's very, very small. And so much of it is done in collaboration with Richard Axel and members of his lab, of whom all of these-- you can see the names. Ann is a theory student who worked with me. But an awful lot of the work is done at Janelia in a collaboration with Jerry Rubin's group, and in particular, Yoshi Aso did a huge amount of work here. So I feel just fortunate to be able to kind of correct the commas on the paper. That was my role in this project. So that's the people. OK, so what is the mushroom body all about? This is a diagram of the olfactory system. I should have said at the beginning, not only is it flies, it's olfaction. Two of the most-- can you pick a more boring sense and a more boring creature? So anyway, let's give it a try. So flies have receptors along their antenna. They don't have a nose, but that's where the olfactory receptors are.
I'll show you in a schematic a little bit later. There are neurons that receive the odors, then send a signal to this structure. This is called the antennal lobe. And at that point, it gets relayed from this set of neurons to a set of neurons called projection neurons. Those go up here, and they send their signal to the mushroom body. And the mushroom body is this kind of L-shaped thing here that I'll describe in more detail. And then they also send axons to another region of the brain called the lateral horn. I'll come back to that. I think I'll come back to that. Maybe I should say it now so I don't forget. So as you'll see, the mushroom body is going to be responsible for learned behaviors in the fly. And the lateral horn is responsible for innate behaviors. You'll see at the end of the talk, actually, evidence of that. So there's a division which actually occurs in your brain too of the olfactory pathway into an innate pathway and a more flexible learned pathway. So this is a diagram. This is, again, from the Janelia work of real pictures of the stuff. It's overlaid on a fly brain. You don't see the periphery here, but these are the antennal lobes. So this is this relay station in the fly brain. Here is one of the projection neurons you see here. This is the mushroom body. Again, this L-shaped thing. And then it goes backwards. So that purple are the cell bodies of the mushroom body. And here you can see it, again, going to the lateral horn. This is the optic part of the fly brain; we're doing olfaction. This is obviously a schematic of the stages of olfaction in the fly. These are supposed to be the receptors, so let me start with them. They're called olfactory receptor neurons. There are about 1,000 of them. They come in around 50 types. And here the types have been drawn in colors. And what a type means is a cell that expresses a single receptor molecule. So it will bind to a set of odors, whatever that particular molecule does. And so all of these red guys are virtually identical in their responses. All of the green guys are identical, et cetera. And there are 50 types. And I'll show you in a second, they form about a 30-dimensional representation of olfactory space. Not as high as in your nose. Not nearly as high as in a mouse's nose. But that's what you get. OK, as I mentioned, these project to this structure, which is the antennal lobe in my little diagram. And they have the property that all of the cells of a certain type, in other words expressing a particular receptor, project to the same site, which is called a glomerulus. So you can see all the red guys go to the red one. All the purple guys to the purple one, et cetera. So this is an incredibly precise wiring getting these 50 olfactory signals from the 50 types to a point in space, or a region in space. And that's the point at which the next cells-- so this is obviously the input layer. I'm sort of over here, giving you computer language, if you want, for all this. So at this point, you have the projection neurons pick up the signal. There are about 200 of them, again, in the exact same 50 types, because there are a few projection neurons for each of these different glomeruli. And they send the signal onto the mushroom body, and as I mentioned, the lateral horn, although we won't talk about that a whole lot until the end. And this is a one-to-one connection. So every projection neuron-- let's say there are red type projection neurons that just pick up the red signal, send it onward.
There are purple type guys-- they're not really called this-- but they accept these 50 signals, maintain them as separate pathways. So what's this thing doing from a sort of computer science point of view? Obviously, it's pooling. So these 1,000 cells are pooling their resources into 50 glomeruli. So you're averaging and you're reducing noise. And there's also a normalization process that goes on here. There are lateral connections here that try to even out the responses so that-- let's say at a fixed concentration, one odor that causes a lot of responses in the receptors and another odor that gives much less response kind of get equaled out here, so that the strong odor doesn't overwhelm the weaker odor. OK, so that's this stage. And I thought I'd show you some of these responses. So here are the 50. This is not all data. This is data plus extrapolation. But the data comes from a beautiful study of Hallem and Carlson. These would be the 50 ORN types, so one of each type. And here are 110 odors that were tested. And the responses in firing rate color kind of look like this. You can see they've been graded here. The responses get stronger as you move from left to right. That's just the way they ordered them. And you can see they're quite uneven. So here are kind of weak responding odors and here are much stronger responding odors. So that's what's coming in. Now, if you look at the PN level-- now, this is not data. This is a model. It's a model really due to Rachel Wilson and members of her lab, but also constructed by Sean Luo and Ann Kennedy in my group. And you can see the argument. So basically, what's happened is these inputs come in and have gone through a model that reproduces what we think the PNs are doing. PNs have not been tested with this whole panel of odors. But you can see the normalization effect. You notice that the activity is spread much more equally across these odors than these odors. And that's reflected in the fact that if you measure by various ways the dimension of this representation, you get about 30. And here it goes up a little bit to 35 because of this kind of equalization effect, and also some decorrelation effect that goes on. So there is that. So what I've described here is sort of the front end of this olfactory system. And it is completely stereotyped. It's a precise wiring. I've described it to you. It's the same in every fly. If you look at two neurons of the same type, they look virtually identical. So this is a hard-wired system. And you would not say there's any free will or intelligence in this system. It's just getting the signal in. And you'll see a little bit more of that later. OK, so what about the next level? The next level is the mushroom body. So these yellow things are the mushroom body neurons. They're called Kenyon cells. There are about 2,000 of them. They come in only seven types. So already, we sense something's happening here. There's something changing about the representation. The representation is getting much higher dimensional. It's something like 1,000 dimensional. So there's a projection out to a high dimensional representation. And this is where the free will comes in. And in anatomical terms, the reason it does is because-- I'll try to persuade you with the data-- that this acts exactly like a random, high dimensional, hidden layer in a machine learning system. So this guy is suddenly a new beast. Within one synapse, the system's gone from completely stereotyped to, you know, crazy. Completely random.
I would say there's lots of evidence that it's different in every fly, that every one of these neurons is different. And you've completely given up the stereotypy. So now, how do you get back to sense? Because you've built this beautiful olfactory representation here, and it's as if you've thrown it out. You've just gone crazy. And so now, I put the box around here just to remind us, this is a different beast all of a sudden. And it's a very unusual beast in the fly brain. I'll come back to that. But now you have output. So these yellow neurons, as you'll see, do not leave the mushroom body. They don't send any signal out. They're completely intrinsic to the mushroom body. But there are neurons called mushroom body output neurons that do send the signal out. And again, now it's a new ballgame. First of all, look at the numbers. You've gone from 2,000 neurons to 34 neurons of 21 types. You've got about a 20-dimensional representation. There's been a collapse of the representation. So I would argue you can just see right away from this slide that this is an olfactory representation. This is an olfactory representation cleaned up a bit. This is a crazy, random olfactory representation. This is not an olfactory representation. The dimension is lower than what you started with, so there's no way you can represent the full thing. This is already, somehow, making a decision about olfaction. It's well on the way to a behavior. And again, the great thing about the fly here is that you get there very quickly. If you went to Jim's talk today, I'm sure he talked to you about the long pathway in the visual systems of monkeys, in which these stages, these more complicated stages, take up a good fraction of your brain. And then it's very difficult to see where this transition is to decisions and things like that. Here, the transition from orderly input representation, sort of retinal-like, to IT-like, if you want, in the visual system occurs in one synapse. And then the return to a decision, a behavior, in another synapse. It's very quick. And I would think of that in computer science terms as a readout layer. As you'll see, it's actually a layered system, but it's the readout. OK, and this system here, it goes back to being completely stereotyped. There are very few neurons per type, if you notice. There are almost as many cell types as there are neurons. And they're the same in every animal. So you've gone from stereotypy at the input stage, a wild and crazy random thing in the middle, and then back to stereotypic to get to the output. Which of course, you have to do, right? Your motor neurons have to go to the right muscles. You can't randomly wire your motor neurons. And thinking occurs between those. Same thing with your retina. It has to be wired to give you the basic visual signal. But between those two extremes, that's where we do our thinking. And as I say, you can see that here, but it's in this one layer, OK? All right, and the key is going to be exactly as in a machine learning system. As you'll see, the key to the whole system is the plasticity and modulation that occurs at that set of connections. There's no evidence that these connections, these connections, and these connections are very plastic. They may be modulated a little bit, but the business end of this thing, just as in many machine learning networks, is that the readout unit's being adjusted. And I will come back to that. All right, so here, the mushroom body, it started out as it was in the fly.
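The whole architecture described so far -- an orderly roughly 50-channel input, a roughly 2,000-unit random sparse expansion, and a small adjustable readout -- fits in a few lines of numpy. This is a sketch using the counts from the talk; the weights, the threshold, and the normalization constants are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_pn, n_kc, n_out = 50, 2000, 20

def pn_stage(orn):
    # Toy divisive normalization: summed activity scales every channel down,
    # so a strong odor stops swamping a weak one (constants made up; the real
    # PN model from Wilson's lab has a similar divisive form).
    drive = orn ** 1.5
    return 165.0 * drive / (drive + 12.0 ** 1.5 + (0.05 * orn.sum()) ** 1.5)

# Random sparse wiring: each Kenyon cell grabs about 7 of the 50 glomeruli.
W = np.zeros((n_kc, n_pn))
for row in W:
    row[rng.choice(n_pn, size=7, replace=False)] = 1.0

readout = rng.normal(0.0, 0.1, size=(n_out, n_kc))   # the plastic output synapses

def mushroom_body(orn):
    pn = pn_stage(orn)
    drive = W @ pn
    kc = np.maximum(drive - np.percentile(drive, 95), 0.0)  # sparse, high-D code
    return readout @ kc                                     # low-D "decision" layer

odor = rng.gamma(2.0, 10.0, size=n_pn)       # made-up ORN rates for one odor
print(mushroom_body(odor).shape)             # (20,)

Only the readout matrix would be adjusted by learning in this picture, which is the point the talk keeps returning to.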
And as it turns, you'll see why it's called the mushroom body. Yeah, now it looks like a mushroom. So these are the cell bodies. They receive-- you can't really see very well here, but they receive their input right under the mushroom. And then they send axons down. And these axons form the lobes. So this whole thing is made out of Kenyon cells. That's the Kenyon cells all together forming this structure. How many cells? A couple of thousand. OK, now here you can see one of the projection neurons. Here's where it gets its input from the antenna lobe, goes up to the mushroom body, goes over to the lateral horn. And here you can see-- it's sort of hard to distinguish that neuropil from the cell bodies here, but here you can see that sort of under this layer of cell bodies, it's making its connections. And what I want to stress here is this idea that the projection neurons occur with very few cells per cell type. Now, these cell types-- I guess I'm going to get ahead of myself a little bit. But through work at Janelia Farm in particular, there have been these intersectional strategies for expressing various markers in these cells. And they've been supremely successful. So typically, when you get a cell type in this business, it's often two cells, one on each side of the fly. They're perfect mirror images of each other. And they're identical in all flies. So that's what you mean by a cell type. And in much of the fly, there are very few of them per-- this is per side. There will always be an even number. And you can see, there are 50 types, a couple hundred cells. That's part of the specific wiring. Now, if you look at the Kenyon cells, so here they are, there are, as I mentioned, about a couple of thousand of them. And there are up to 600 of them per type. It's much more like what we think of as cortex. We don't think of the cortex as having millions and millions of cell types. Maybe thousands, but there are many, many cells per cell type. And that occurs here. Very small number of cell types relative to the other things. And here's one of them. It's superimposed. So these Kenyon cells, they have their cell body here. They make their connections. So they get the input from the projection neuron, send an axon down, which in some cases splits. And there are five lobes here. There is an alpha lobe or an alpha prime lobe here, a beta lobe or a beta prime lobe here. And then some of them send a single axon down to a gamma lobe. You will see that a little bit more. Then that's it. That's how the mushroom body's built. And if you notice, they do not send anything out of the mushroom body. So the first thing I want to ask, then, is what happens at this junction between the orderly world of the fly, characterized by these PNs, and the wild and random world of the fly, characterized by these Kenyon cells? Here's where they meet in this calyx of the mushroom body. So the experiment that I was involved in the data analysis of came from Richard's lab and was done in the following way. First, a single Kenyon cell-- so here you can see all these cell bodies of Kenyon cells. There are zillions of them up there, thousands of them up there. But one of them has been-- the GFP in one of them has been activated, photoactivated. So you can see this single Kenyon cell comes down. Here it's making connections to get the olfactory input from the projection neurons. And then the axon's going to go down through the floor into the other parts that I showed you.
So the trick in this thing-- you can't see very well, but I think I maybe made a circle around one. You can't see it very well, but the terminals of this guy, the postsynaptic terminals, are like claws. They're called claws. And they grab hold of one of the terminals of the projection neurons and make a synapse. So that's how they work. This guy has about seven of these claws, so there are very few connections per Kenyon cell. The trick was for Sophie to inject the dye right into the claw here, which is a very tightly sealed little microglomerulus. And that dye is taken up by a projection neuron, the one and only one projection neuron that has a terminal there. And here you can see the axon of that projection neuron as it makes terminals in other parts of this calyx and makes connections with other Kenyon cells. So there it is. So that's not the important part. The important part is you can trace back this projection neuron to the antenna lobe and see where it got its input. And now, because the antenna lobe is a totally stereotyped, structured thing, you can now read out. If you know the antennal lobe, you will know that this input is of a certain type. It's from a certain set of receptors. So you know right away that this guy is getting input from receptor number three, or whoever sends projections to that thing. Furthermore, you can repeat this with other terminals of that cell, get a whole lot of projections, and find, essentially, all of the inputs-- sometimes not all, but most of the inputs-- that go to this Kenyon cell and figure out what they are. So in other words, the result of this, without doing EM or all that, is a connectome. It's the connection matrix between the glomeruli, or if you want, these olfactory channels. And there are 50 up around here, plus some hot and cold and some other stuff. But basically, the 50 glomeruli are at the top. And 200 Kenyon cells that were measured going down the side. Not all 2,000 Kenyon cells were measured. These are not measured from the same animal. But you basically get this connectivity matrix. A red little square here means that this connection was found for this Kenyon cell. And a yellow one means a double connection. There were actually two connections between that Kenyon cell and that glomerulus. So there's the matrix. So then my job at this point was to say, well, what's the structure of this matrix? And that's a trickier problem. I mean, you look at it by eye, you say, well, it looks random. It just looks like a bunch of dots. But what you have to remember, and I think this is a really important thing to remember in connectomes, is connectomes don't come labeled, all right? So this matrix is arranged in the following way. This is alphabetical, which probably is not of fundamental neuroscience significance. And this is the order in which the cells were measured, which is also probably not of neuroscience significance. So the question is, is there any way to permute the rows and columns of this matrix to get a structure? That's the question you have to answer here. And just let me show you an example of that. So here's a matrix that I've shrunk the size a bit, but it's exactly the same kind of matrix. In fact, it probably looks to you pretty much like the data. It doesn't have the colors, but other than that. So here's a data matrix. But this one I made up. And it turns out, of course, I knew the trick that if you re-sort, if you permute the rows and columns, it looks like this.
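That re-sorting demonstration is easy to reproduce. Here is a sketch in numpy, with a made-up block matrix standing in for the data: scrambled rows and columns look just like the real matrix did, and sorting rows and columns so identical ones sit together makes the blocks reappear (up to the order of the blocks).

import numpy as np

rng = np.random.default_rng(0)
# A matrix with obvious structure: 4 groups of 4 identical rows and columns.
block = np.kron(np.eye(4, dtype=int), np.ones((4, 4), dtype=int))
# Hide it with random relabelings -- which is how a connectome arrives:
# glomeruli in alphabetical order, cells in the order they were measured.
scrambled = block[rng.permutation(16)][:, rng.permutation(16)]
# Un-hide it by sorting rows and columns lexicographically.
rows = np.lexsort(scrambled.T[::-1])
cols = np.lexsort(scrambled[::-1])
print(scrambled)                  # looks like random dots
print(scrambled[rows][:, cols])   # the blocks come back

For the real matrix, of course, no permutation like this works, which is the evidence for randomness the talk describes next.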
So just because that looks random does not at all mean there's no structure there. So you have to do a lot of analysis to convince yourself that there's no structure. So one of the first things you could do-- random doesn't mean uniform. So one thing you can do is just sum down the columns here and ask, how many connections does each of the glomeruli make? And it's not uniform. It's quite uneven. Here's the histogram. But really, the question we ask is, is there something more to it? For example, if a Kenyon cell gets one of these inputs, is it more likely to also get one of those inputs? Are there any correlations here? And that's really the question. And we did a whole lot of analysis. And the answer's no. I'm not going to take you through it, but all the tests we could possibly do are completely consistent with just randomly selecting from this probability distribution, independently-- IID, or whatever it's called. OK, so there are other papers, an earlier paper and a later paper, that essentially come to the same conclusion. What's interesting about the Murthy, Fiete, and Laurent paper is they actually provide some evidence that, in fact, it's different in different animals. This doesn't prove that because this is already taken from different animals. I'm not going to present that evidence. But there is evidence that this is different in different animals. So this looks like a random structure. Now, it's interesting, you guys, why seven connections? Seven seems awfully small to us cortico-centric people. And so why seven? Well, you can do the following little exercise. You can say, suppose that the Kenyon cells only had one connection. Then how many duplicate Kenyon cells would there be? Well, there are only 50 possible types of input, right? There are only 50 types of input a Kenyon cell can grab. So if you only have one connection and you're making 2,000 cells, you're going to get tons of repeats, hundreds of thousands of pairs that are identical. So you would not spread out. Now, you can do this calculation for two connections, three connections, four connections. And it goes down. And if you look at the line where you'd only expect one pair to be the same, the mushroom body-- in fact, if you average, it's between six and seven. It's right in there. The mushroom body is right at the point where you convince yourself that most of the time every cell will be different. And then why go any further? Some of you may know something about the cerebellum. These are like granule cells of the cerebellum. Granule cells in the cerebellum typically have four or five inputs [INAUDIBLE]. They're small cells with claws with very few inputs. And their axons form parallel fibers and then get connected to Purkinje cells. This system is the same, if you notice. The parallel fibers are forming the trunk and that L-shaped region in the mushroom body. So these are like granule cells. OK, so where are we? So we've got to get a signal out of this thing or it's completely useless, right? So we've got this random signal into this beast. So what do the output neurons look like? There's an output neuron, one of them. And what you might notice is it's going to a very compact region right at the head of this alpha-- I don't know if this is an alpha or an alpha prime. But it's going to one or the other of those lobes. So it's very restricted in its dendrites. And then off it goes carrying the signal wherever it's going.
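Backing up to the why-seven exercise for a moment: the back-of-the-envelope version is a few lines. This sketch assumes each Kenyon cell draws its inputs uniformly, without replacement, from the 50 types; with the uneven measured connection probabilities the crossover moves up, toward the six-to-seven the talk quotes.

from math import comb

n_types, n_cells = 50, 2000
pairs = comb(n_cells, 2)                      # about 2 million cell pairs

for k in range(1, 9):
    # Chance that two cells drew the identical set of k glomeruli (uniform
    # case), times the number of pairs = expected number of duplicate pairs.
    expected = pairs / comb(n_types, k)
    print(f"{k} inputs per cell -> {expected:.3f} expected identical pairs")

# Expected duplicates drop below one pair around k = 5 under the uniform
# assumption; the uneven real distribution pushes that to about 6-7 inputs,
# which is where the fly actually sits.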
So in fact, I tried to argue earlier that the output neurons in a mushroom body have gone back to this other mode. Very few cells per type. Practically as many types as output cells. And a very small number of cells. And if you took a picture of this cell in another animal, it would look exactly the same. All right, so it was known before this Janelia work, that if you take the mushroom body lobe-- so this is this L-shaped structure at the bottom, or it's really at the front of the mushroom body-- and you peel off the gamma lobe, the gamma would sit there but it would kind of block your view. So it's been peeled off here. So you have this alpha beta lobe. That's one set of axons that have bifurcated. And they come in sort of here and then bifurcate. You have the alpha prime beta prime lobe bifurcating. And then you have this third gamma lobe. That divides up each of these into five sections. They're numbered like this, but there are five of them, OK? Alpha 1, 2, 3 and beta 1, 2. So each of these guys gets divided into five compartments. And then there's an extra compartment right here called the peduncle, where-- here's the mushroom head. Here's the stalk. And then you get this. And right at the base of the stalk, there's another one. And so what the Janelia collaboration figured out, by genetically targeting these cells very precisely, is summarized by this picture. So this shows different types of these output neurons in different colors. And what you can see is that they are respecting the compartments. That you have basically one type of output neuron going to each compartment without overlap. And there really is-- there's now EM level data, and they really don't overlap at all. Here's kind of what it looks like in an anatomical diagram. Here are the Kenyon cells. Here's the calyx where they get their input. Here's this L-shaped structure. And these output neurons respect each other's territory. Here are the 16 compartments where they do that. And then they're very well organized in another way. You notice these colors here. These colors refer to the transmitter of the output neuron. So all the glutamate guys are over here. All the GABA guys are down here. The cholinergic guys are over here. Now, again, you get this extreme order returning to the system. Here's a theorist's version of this. Here are the compartments, the 16 compartments, 5 per lobe, plus the peduncle, which kind of belongs to the alpha beta lobe. Here they are, the different compartments. And then here are the output cells assigned to them. They're not necessarily one cell per blob here. Sometimes there are a few cells. But basically, those are the cell types. And as I mentioned, they respect-- they only go to one compartment each. And then those are the transmitters, which in this diagram, they don't cluster nicely. But in the other diagram they do. Now, you can ask, why bother to do this? Because there are axons going down. The parallel fibers that the Kenyon cells make, they go that way. So all of these guys have access to exactly the same input. So what would it matter if this guy decided to send a branch over and pick up the axon over there instead of over there? It would make no difference at all. So at this point, you would sort of wonder, why are they respecting these compartments so faithfully? And that's answered in this slide. So these are the output neurons, as you can see, kind of tiling the thing in these compartments.
And this is a set of dopamine neurons, which were also genetically isolated in this way and labeled, that target these compartments. And you notice the perfect alignment. So the reason these guys are compartmentalized is so they can be individually modulated by dopamine. And you can see that here. So the dopamine neurons come, again, in slightly more numbers of types. But they align and exactly innervate these compartments without overlap. So the reason the beta 2 guy's in here is so it can be innervated by these particular dopamine neurons. The dopamine neurons are divided into two classes. And again, if we go to the anatomical-- oh, I should mention. If you notice, there were some missing compartments there, but some of the dopamine neurons go to two, so everybody gets covered. If you go back to this anatomical diagram, what you see is everybody over here gets modulated by these, what are called, PAM dopamine neurons. And they're associated with reward. So when good stuff happens, you hammer this part of the mushroom body. When bad stuff happens, you hammer this part of the mushroom body with a different set of what are called PPL1 dopamine neurons. So again, this beautiful structure. All right, so let me finish elaborating this for you. This is the basic structure. Again, I didn't put it on at first, but some of these guys actually contact two compartments. So it's not quite true what I said. But basically, that's the output stream from the mushroom body. And then there is a layered system put on. These are the connections, but I kind of depicted it down here more schematically. What you have in this output system is a one layer system down here, a two layer system, a three layer system, and a four layer system. So the output is actually a four layer network, feedforward network. There's no recurrence up to this point. And all of the action occurs on the alpha beta lobe. The alpha beta lobe is responsible for long-term memories. You could think of this as the most sophisticated lobe. Gamma lobe is more for short-term memories, has a simpler readout. Alpha prime beta prime lobe is, to me, kind of God knows what. But probably somebody knows. Anyway, but it's, again, a simpler output system. So it's just a beautiful system. In part, I'm just telling you about it because it's beautiful. OK, so that's the thing. And then these outputs go to various regions. If you don't know the fly brain, you don't care. But what's interesting is now the loop closes. So the regions that receive output from the mushroom body also provide input to the dopamine neurons. So when the mushroom body acts, the dopamine neurons know about it. And when the dopamine neurons react, the mushroom body knows about it. So you have this closed system which finally loops together. And the dopamine system is a reporter of behavior. So it tells the mushroom body what the fly's doing. Or also internal state, how the fly's feeling. I'll show you that in a second. And then these are going to be, obviously, some sort of learned or modulating responses, modulated by this system. You remember that these cannot have any intrinsic meaning, because they've gone through a random stage here. They cannot be assigned meaning without some sort of learning or instruction. So these are learned outputs. And so that's the system. OK, so what does this system do? One of the nice things that's happened in parallel with this anatomical advance that I've been describing is a behavioral advance in understanding what the mushroom body is for.
I'll start with the classic picture. Mushroom body has been studied for a long, long time. That quote was from 1850. And it's mostly been studied as a classical conditioning system, memory system. You train a fly to be afraid of an odor or to be attracted to an odor through a classical conditioning experiment. And here's a nice, recent version of that that's quite instructive. So in this experiment, what you do is you put one odor in the end of a chamber, a very small chamber that holds a fly. One odor comes in one end. One odor comes in the other end. You pump it out in the middle. And then you track the fly. Flies pace back and forth. And so the fly paces back and forth. But frequently, if it doesn't like odor B, it might come to this central region, say, oh, that's odor B, turn around and go back. And so what you do is count electronically how many times the fly crosses these boundaries. And you can get a measure of its preference for being in the A end or the B end. And this is experiments done in Gero Miesenboeck's laboratory. Now, what you can do then is-- in the first set of experiments that I'll show you, they just look at the innate preference of the fly for an odor, without any training. That's due to the lateral horn, as you'll see. But then you can associate one of the odors, for example, with an electric shock. And presumably, the fly is going to then associate that odor with danger and avoid it. So here's the data. No, first I guess I built a little model. So it's been long suspected how this could work. This is quite easy. You have-- there are the Kenyon cells. Here's a mushroom body, output neuron. Here's a dopamine neuron. So an odor comes along-- that's the conditioned stimulus-- activates some Kenyon cells. Then the unconditioned stimulus comes along, the shock. That activates the dopamine neuron. And where you have activity plus dopamine, for example, you strengthen the synapses. There's evidence that it actually might work by weakening the synapses, but for this diagram, I strengthen the synapses, OK? Then, later on, when the odor comes along, it activates the same set. Now you have these strengthened synapses. You activate the mushroom body output neuron and you send an alarm signal. So that's just classical conditioning with this system. And here are the data showing it works. So first of all, this is the innate preference. What's interesting-- the reason I included this later experiment is because they looked at the innate preference as well as the learned preference. So this is just showing you that this is the distance between the PN activity for these odors. So this is a measure of the discriminability. And they sort of argue that these odors which have a zero preference maybe can't be distinguished by the fly. You don't know that, but any rate, zero means they're equally likely to go to both ends. So these odors, they don't care. But when the odors are quite different, they can have a fairly strong preference for one odor over the other. Now you train, and suddenly you have a strong preference or a strong avoidance, a preference for one over the one that was associated with shock. And now what they did was genetically-- I mentioned that we now have genetic access to all these cells. One of the things you can do is block synaptic transmission from all the Kenyon cells. So you just wipe out the output of the mushroom body. That's done by raising the temperature of these flies. 
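The little conditioning model described a moment ago is a three-factor learning rule, and a short sketch makes the odor specificity obvious. Here the dopamine pulse weakens the active synapses onto an "approach"-driving output neuron, one common way to tell the story given the evidence for weakening the talk mentions (the slide's version strengthens synapses onto an alarm neuron instead; either sign gives an odor-specific association). All sizes and rates here are invented.

import numpy as np

rng = np.random.default_rng(0)
n_kc = 2000
w = np.ones(n_kc)                     # KC -> "approach" output neuron weights

def odor_pattern():
    kc = np.zeros(n_kc)
    kc[rng.choice(n_kc, 100, replace=False)] = 1.0   # sparse random KC code
    return kc

odor_A, odor_B = odor_pattern(), odor_pattern()

def pair_with_shock(kc, dopamine=1.0, lr=0.8):
    # Three-factor rule: only synapses active during the dopamine pulse change.
    global w
    w = np.clip(w - lr * dopamine * kc, 0.0, None)

pair_with_shock(odor_A)               # odor A + punishment-signaling dopamine
print("approach drive, odor A:", w @ odor_A)   # depressed -> avoidance wins
print("approach drive, odor B:", w @ odor_B)   # nearly untouched -> odor-specific

The same rule, run without any unconditioned stimulus but with dopamine reporting the odor itself, also produces the odor-specific adaptation described next.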
And suddenly, they go right back to their innate preferences as if they'd never learned anything. But they still can sense the odor. They still have their innate preference, almost identical to what it was before. But they've lost the learned preference. Now, if you cool down these flies, they'll pop back up to there. OK, that's classical conditioning. Oh, I know what I was going to mention here. Not in these experiments, but in other experiments, you can replace the electric shock by an activation of the dopamine neuron. So you can show that these avoidance type dopamine neurons really do convey the avoidance message, because you can train them to avoid odor B when all they got was an activation, let's say an optogenetic activation of a dopamine neuron. That's been done tons now. OK, so here's another example. As I said, I think the classic literature on the fly is that. It's classical conditioning studied in zillions of ways, looking at the molecular basis, et cetera, et cetera. But here's some more newer results. Here's one from Daisuke Hattori in Richard's lab. It involves this alpha prime 3 lobe, just to show you what it is. And it has the following features. So it's a little hard to see, maybe, but this is a pulse that shows that the odor, which is MCH here, has been introduced. And here is the response of this alpha prime 3 output neuron. So there you're seeing a response. And if you look across time, that response fades away. It even starts to reverse maybe. So there's an adaptation of this response. You say, big deal. But this adaptation is definitely occurring at the output of the mushroom body. The Kenyon cells are not adapting. It's due to the dopamine, because if you block the dopamine you don't get it. So this is dopamine specific adaptation. But what's more interesting about it is shown here, that if you take an odor response to MCH, adapt it away, but then present a new odor, benzaldehyde, now you get a response again. Then you can adapt away the response to the benzaldehyde and introduce a third odor, and you get a response. So this is an odor-dependent adaptation, which really suggests that the dopamine is specifically weakening the synapses that are active at the time of the dopamine response. So it's like classical conditioning, but there's no conditioning here. Furthermore, when you adapt one odor and then another, the first odor remains adapted. So I sort of see this system-- I've always had trouble with the classical conditioning experiments, imagining where would a fly get into a situation where it smells an odor and gets a shock, or something like that? But this, you can immediately see, would be very useful. You could adapt to an environment that has a whole set of odors. And then if you come back to that environment and there's a new odor present, you'll immediately know it, because this neuron is going to respond. Whereas if you come back to the identical environment, or without new odors, this won't respond. So this is a neuron to identify unexpected olfactory features of an environment. Or that's one thing it could do. OK, I think I just repeated that because I wanted to say that. Here's another example. This comes from Raphael Cohn and Vanessa Ruta's lab. And this is really the effect of internal state. So I argued for you that these dopamine neurons were reflecting the internal state of the animal. And they have a very beautiful experiment on that. So these are the gamma.
I guess I didn't say before, but these are the gamma 2 through gamma 5 compartments that we're going to talk about. What they did was to image the dopamine neurons in those compartments, gamma 2, 3, 4, and 5, and observed that when the fly is-- the fly is in an uncomfortable position, to put it mildly here. It's glued to something, I don't know what. And there's a hole in its head. Other than that, everything's fine. So it's an unhappy fly. You might want to speculate that this is an unhappy fly. And what they observed is while the fly is flailing about and unhappily expressing its unhappiness, these gamma 2 and gamma 3 compartments have dopamine input. And the gamma 4 and gamma 5 don't. But they also observed that every once in a while, the fly just chills out, hangs there, like, oh Christ. And when that happens, it reverses the pattern. Now these are not dopamine activated and these ones are dopamine modulated. Although this is not unequivocal happiness. This is mixed. But then they started manipulating. So here, you take one of these unhappy flies, you give it some sugar, it becomes a happy fly, right? Remember, the red over here, this is a happy fly. This is a sad fly, because they shocked it. So it's clear that this thing is really reading a-- I'll be a little fanciful-- but happy fly, sad fly. You could take a look at these compartments and say, that is one unhappy fly, or one happy fly. Furthermore, now, so that means the internal state is being represented here. But in addition, it has an effect. So this is an experiment where they imaged the output end, the dendrites of the output neuron. So they're looking at transmission from the Kenyon cell to the output neuron. They present an odor. And they activate the dopamine neuron themselves. So when they don't activate the dopamine neuron, this is a measure of synaptic transmission. The odor response here is weak. When they do, it gets much stronger. So now what you have-- I mean, I think if I saw this in cortex, I'd go wild-- is a gating effect. You have internal state affecting the transmission here, and it determines where the output goes. So for example, if you're-- I can never remember which one's happy. This is happy, right? So if you're happy, then odors go to one thing, which might say, approach that odor. You know, be sort of a little more easy going. If you're unhappy, then an odor response gets relayed out this pathway. And it might tell you to be afraid of all odors, or something like that. Be very cautious. So you start to see in this system the routing of sensory information by internal states. That, to me, is a very exciting thing to see. OK, here's another one-- internal state affects memory. This is from Tanimoto, another collaboration with the Janelia lab. So you can do the same kind of experiment I showed you before with shock, only do it with sweet. So in this case, it's a T maze. A fly comes in this way. And you associate, let's say, odor A with a sweet reward. Then the fly is going to come in here and most of the time go this way. Flies are never 100% performers, but they'll tend to go to odor A because they associate that with sweet, provided they're hungry. I'll come back to the hungry part. So you take a hungry fly, it goes this way. Now, here's what they did that's very clever. It involves these PAM neurons, PAM dopamine neurons. So as you all know, you get the hit of sugar in your mouth right away. And then you get nutritional value, or you get fat or whatever later.
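A hedged sketch of that gating result: internal state decides which compartment's dopamine is on, dopamine scales Kenyon-cell-to-output transmission there, and so the same odor gets routed to different output neurons in different states. The gains and sizes below are invented; only the routing logic follows the experiment.

import numpy as np

rng = np.random.default_rng(0)
kc = (rng.random(2000) < 0.05).astype(float)   # one odor's sparse KC code
w_g2 = rng.random(2000) * 0.01                 # synapses in two compartments
w_g4 = rng.random(2000) * 0.01

def outputs(kc, distressed):
    # State sets the dopamine; dopamine sets the gain of transmission; so the
    # same odor drives different output neurons depending on the fly's state.
    g2 = (2.0 if distressed else 0.5) * (w_g2 @ kc)
    g4 = (0.5 if distressed else 2.0) * (w_g4 @ kc)
    return g2, g4

print("distressed:", outputs(kc, True))
print("quiescent: ", outputs(kc, False))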
And the decoupling of those causes us a lot of problems. So you see that here in this system very, very well, because they take a sugar that's sweet tasting to the fly but can't be digested by the fly, that provides no nutritional value at all. And when they do that-- so there's no nutrition in this sugar-- what they get is a short-term attraction to that odor, enough to buy the next Coke, sort of. It's conveyed through octopamine, so the sweetness activates octopamine. Octopamine activates a certain set of these PAM neurons. That makes changes in the transmission in various compartments. This is not isolated to a unique compartment. Then the next time that odor comes on, it goes out here and it gives you attraction. But it's a short-term attraction, lasts a few hours. Now, if you make the sugar nutritious-- and they do this in a clever way by using another sugar that has no taste to the fly but that the fly can digest, so now it's nutritious-- then it activates a fructose receptor, blah, blah, blah, activates a different set of dopamine neurons. That potentiates or depresses-- we don't actually know-- but it changes synapses in the alpha 1 lobe. And now you get a long-term memory. So flies are smarter than people in a way. They'll only form this long-term memory if they then also sense a nutritional benefit to whatever they're eating. OK, so very elegant thing. And then I think this is my final example-- I'll start winding up-- from Scott Waddell's lab, that another feature of this sweet thing is if the fly's not hungry, not surprisingly, it doesn't care anymore. So it has associated odor A with sweet, but now it's well-fed, so who cares. And that's a real effect. So a fed fly will not express this odor preference. But it still has the memory, because if you then starve it, now it will go to odor A. OK, so what Scott Waddell and his group realized was that this was activated through this dopamine neuron. Now, this was done before all this circuitry was derived. But they figured out that it's this dopamine neuron, because they could activate this dopamine neuron and simulate the fed state, so the fly would ignore the odor. Or they could silence this dopamine neuron and then they would simulate the hungry state and the fly would be attracted to the odor. But now, you notice this circuitry, this is a GABAergic neuron that inhibits the alpha beta lobe output. So this is a perfect pathway by which you could turn off the learned response during the fed state. And then you inactivate this pathway, and now you turn it on. So again, an internal state gating a memory. But it's a case-- we don't really know that everything I'm saying is true. One should never assume that. But we now have this pathway. People-- we, I say, but people, we can block this pathway. There's enough known about the circuitry to really work out that what I said is true, that you can start to get at these things. So one-- yeah, I got a minute to do CO2 avoidance. CO2 avoidance is a really cool one. So CO2 is innately repulsive to a fly. It doesn't like CO2. And the reason that is, is that a group of flies that are stressed release a lot of CO2. So a fly will sense CO2, know there's trouble in the area, and will avoid it. So there's a natural avoidance through the innate pathway to CO2. Now, that's kind of a fatal flaw in the design of the fly, because flies eat rotting fruit that releases tons of CO2. So you don't want to avoid your food source. So what happens-- it's not completely understood.
But somehow, the innate system trains this beta 2 pathway to have, in addition to the innate pathway, a learned pathway for CO2 avoidance. And in the hungry state, the fly channels its CO2-- it still has CO2 avoidance. It channels it through this pathway. Then if, at the same time, there are fruit odors or fruit tastes, it can modulate this pathway, shut it down, and turn the CO2 avoidance into a CO2 attraction. So again, you start to see the neural substrates of these really quite complex behaviors sitting right before you in this structure. All right, I'll end here with sort of the lesson for the machine learners. So from a machine learning perspective, this is a simple system. It's not very deep. It's a little bit deep, but not very deep. It contains a random hidden representation. That's not really anything radical. It contains a set of output neurons. It's actually a layered output. Again, nothing very radical. In neuroscience terms, it's kind of interesting that it goes from these highly stereotyped to random to highly stereotyped. But really, the lesson here is this is a mediocre machine learning architecture. Not very many units and all that. Where does this thing make up for it? It makes up for it in stupendous, complicated modulation and plasticity beyond any machine learner's dreams. We don't know about this. I tried to give you hints of the different things it can do. Dopamine can gate. It can induce short-term learning. It can induce long-term learning. It can induce gating of gating, gating of learning. That's what has to be worked out in this system. But there's going to be a really beautiful effect of dopamine acting in many ways on many time scales. And to me, in this system, that's where evolution has put its money, right there. Not in building 20 layers here or something like that. Not in worrying about a whole lot of back prop. This is random. It doesn't appear to be back propped. But in putting huge resources into a rich set of modulatory and plastic processes at these output synapses. And I think in the years to come, they will be worked out. And maybe they'll have implications for machine learning once we know what they are. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_74_Hynek_Hermansky_Auditory_Perception_in_Speech_Technology_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HYNEK HERMANSKY: I'm basically an engineer. And I'm working on speech recognition. And so you may wonder, so what is there to work on? Because you have a cell phone in your pocket, and you speak to it, and Siri answers you and everything. And the whole thing is working on very basic principles. You start with a signal. It goes to signal processing. There's some pattern classification. Of course, deep neural nets as usual. And so this recognizes the message. This recognizes what you are saying, so the question is, what is there to fight about? Why not keep going, and try just to improve the error rates, and improve them basically step by step? Because we have a good thing going. We have something which is already out there. And it's working. But you may know the answer, so this is, imagine that you are, sort of, skiing or going on a sled. And suddenly, you come somewhere. And you have to start pushing. You don't want to do that, but you do it for a reason, because there may be some-- another kind of slope going out there. And that's the way I feel about the speech recognition. So basically, sometimes we need to push a little bit up and maybe go slightly out of our comfort zone in order to get further. Speech is not what we are-- it's not the thing which we are using for communicating with Siri. Speech is this. Basically, people speak the way I do. They hesitate. There's a lot of fillers, interruptions. And I don't finish the sentences. I speak with a strong accent, and so on. I become excited, and so on, and so on. And we would like to put a machine there instead of the other person. Basically, this is what speech recognition ultimately is, right? I mean, and actually, if you see what government is supporting, what the big companies are working on, this is what we are worried about. We are worried about the real speech produced by real people in the real communications by speech. And you know, I didn't mention all the disturbing things like noises, and so on, and so on, but we will get into that. So I believe that we don't only need signal processing, and information theory, and machine learning, but we also need the other disciplines. And this is where you guys are coming in. So that's what I believe in. We should be working together, engineering and life sciences working together. At least we should try. We should at least try to be-- we engineers should try to be inspired by life sciences. And as far as inspiration is concerned, I have a story to start with. There was a guy who won the lottery by using numbers 1, 2, 3, 6, 7, 49. And they said, well, this is of course an unusual sequence of numbers, so they say, how did you ever get to that? He says, I'm the first child. I was from my mother's second marriage and my father's third marriage. And I was born on the 6th of July. And of course, 6 by 7 is 49. And that's how sometimes I feel I'm getting this sort of inspiration from you people. I may not get it right. I may not get it right, but as long as it works, I'm happy.
You know, I'm not being paid for being smart and being knowledgeable about biology. I'm being, really, paid for making something which works. Anyways, so this is just the warm up. I thought that you will still be drinking a coffee, so I decided to start with a joke. But anyway, but it's an inspiring joke. I mean, it's about inspiration. And I would maybe point out to some of the inspiration points, which I, of course, didn't get it right, but still, it was working. Why do we have audition? Josh already told us-- because we want to survive in this world. I mean, this is a little ferret or whatever, and there is-- he's getting something now. And there is a object. And ferret is worrying, is this something I should be friendly with or I should-- it should be something which I run away. So what is the message in this signal? Is it a danger or is a opportunity? Well, the same way, how do we survive in this world as human beings? So there is my wife who has some message in her head. And so she wants to tell me, eat vegetables, they are good for you, so she's using speech. And speech is actually amazing sort of mechanism for sharing the experiences and for-- actually, without speech, we wouldn't be where we are, I can guarantee you, because that allows us to tell the other people things what they should do without going through much trouble like a ferret with the bird. That we may not have to be eaten, maybe we just die a little early if we don't get this right, if we don't get this message. So she says this thing, and hopefully, I get the message. So this is what speech is about, but I wanted to say, the speech is an important thing, because it allows us to communicate abstract ideas like good for you. And that's sort of not only vegetable, vegetable is saying, but a lot of abstract ideas can be conveyed by speech. And that's why I think it's kind of exciting. Why do we work on machine recognition of speech? Well, first one is just like Edmund Hillary said, because it's there. They are asking, why did you climb Mount Everest? He said, well, because it's there. I mean, it's a challenge, right? Spoken language is one of the most amazing things, I already told you before, of human race so there would be hell if we can't build a machine which understands it. And we don't have an easy time so far yet. In addition, when you are addressing speech, you are really addressing the generic problems which we have in processing of other cognitive signals. And we touched it to some extent during this panel, because, you know, problems which we have in speech, we have the similar problems in perceiving images and perceiving smells. All these cognitive signals, basically, machines are not very good at it. Let's face it. Machines can add 10 billion numbers very quickly, but they cannot tell my grandmother from the monkey, right? I mean, so this is actually important thing. There are also practical applications, obviously-- access to information. Voice interaction with machines extracting information from speech, given how much speech is out there now with-- I don't know how much we are adding every second through the YouTube and that sort of things, but there's a lot of speech out there. It would be good if information can actually extract information from that. And I tell always the students, there is a job security. It's not going to be solved during your lifetime, certainly not during mine. 
I mean, sort of, if you get into it-- in addition, I mean, I know that this is maybe on YouTube, but also, if you don't like it, you can get fantastic jobs. Half of the IBM group ended up on Wall Street making an insane amount of money. So I mean, you know, the skills we get in recognizing speech, working on speech, can also be applied in other areas. Obviously it can be applied in vision, and so on, and so on. Speech has been produced to be perceived. Here is Roman Jakobson, the great Harvard, MIT guy, who passed away, unfortunately. He would be a hundred and something now. He says, we speak in order to be heard, in order to be understood. Speech has been produced to be perceived. And over the millennia of the human evolution, it evolved this way so that it reflects properties of human hearing. And so I'm very much also with Josh. If you build up a machine which recognizes speech, you may be verifying some of the theories of speech perception. And I'll point out that along the way. How do I know that the speech evolved to fit the hearing and not the other way around? I got some big people arguing over that, because they say, you don't know, I mean, basically, but I know. I think. Well, I think that I know, right? Every single organ which is used for speech production is also used for something much more useful, like, sort of typically, eating and breathing. So these are the organs of speech production-- lungs, the lips, teeth, nose, and velum, and so on, and so on. Everything which is being used for speaking is also used for some life-sustaining functions. So I know that it's not the same in hearing. Hearing has evolved to hear, for hearing. Maybe there are some organs of balance, and that sort of thing, but mostly, you do hear. In speech, everything that is being used is used for something else also, so clearly, we just learned how to speak because we had the appropriate hardware there, and we learned how to use it. So in order to get the message, you use some cognitive aspects, which I won't be talking much about. So you have to use the common language. You have to have some context of the conversation. You have to have some common set of priors, some common experience, and so on, and so on, but mainly what I will be talking about, you need the reliable signal which carries the message, because the message is in the signal. It's also in your head, but the signal supports what is happening in your head. So how much information is in the speech signal? This is, I have stolen I believe from George Miller, I think. So if you look at Shannon's theory, I mean, there will be about 80 kilobits per second. And indeed, you can generate a reasonable signal without being very smart about it just by coding it at 11 bits at 8 kilohertz, about 80 kilobits per second. This verifies it. So this is how much information might be in the signal. How much is in the speech is actually very-- it's, sort of, not very clear, but at least we can estimate it to some extent. If you say, I would like to transcribe the signal in terms of the speech sounds, phonemes, so there is maybe about 40 to 49 phonemes, or something, 41 phonemes. So if you look at the entropy of that, it comes to about 80 bits per second. So there is three orders of magnitude difference. If you push a little bit further-- indeed, I mean, if you speak with a vocabulary of about 150,000 words, that means less than 20 bits a word, and at normal speaking rates, again, it comes to less than 100 bits per second.
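As a sanity check on these numbers, here is the back-of-the-envelope arithmetic in Python. The inventory of 40 phonemes and the rate of roughly 15 phonemes per second are assumed round numbers for illustration, and the entropy estimate treats phonemes as equiprobable and independent, which gives only an upper bound:

```python
import math

# Raw signal side: 8 kHz sampling, 11 bits per sample (telephone-quality PCM).
signal_rate = 8000 * 11               # 88,000 bits/s, roughly "80 kilobits per second"

# Message side: ~40 phonemes, assumed ~15 phonemes per second of running speech.
bits_per_phoneme = math.log2(40)      # ~5.3 bits, equiprobable-phoneme upper bound
phonemes_per_sec = 15
phoneme_rate = bits_per_phoneme * phonemes_per_sec   # ~80 bits/s

print(f"signal: ~{signal_rate / 1000:.0f} kbit/s, message: ~{phoneme_rate:.0f} bit/s")
print(f"ratio: ~{signal_rate / phoneme_rate:.0f}x")   # about three orders of magnitude
```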
So there's, as I said, there's a number of ways how you can argue about this amount of information. If you start thinking about dependencies between phonemes, it can go as low as 10, 20 bits per second. No question that there is much more information in the signal than it is in useful message which we would like to get out. And we'll get into that. Because what is in the message, there is not only information about the message, but there is a lot of other information. There's information about health of the speaker, about which language the speaker is using, what are-- what emotions, there is who is speaking, speaker-dependent information, what is the mood, and so on, and so on. And there is a lot of noise coming from around, reverberations. We talk about it quite a lot in the morning, all kinds of other noises. So what I call noise in general, I call everything what we don't want besides the signal, which, in speech recognition, is the message. So when I talk about the noise, it can be information about who is speaking, about their emotions, about the fact that maybe my voice is going, and so on, and so on. To my mind, purpose of perception is get the information which carries-- get the signal which carries the desired information and suppress the noise, eliminate the noise. So the purpose of perception, being a little bit vulgar about it, is how to get rid of most of the information very quickly, because otherwise, your brain would go bananas. So you basically want to focus on what you want to hear, and you want to ignore, if possible, everything else. And it's not, of course, easy, but we discuss that again in the morning, about some techniques how to go about it. And I will mention a few more techniques which we are working on. But this a key thing, is, purpose of perception is to get what you need and not to get what you don't need, because otherwise, your brain would be too busy. Speech happens in many-- it's a very simple example. Speech happens in many, many environments. And there is a lot of stuff happening around it, so the very simple example, which I actually used when I was giving a talk to some grandmothers in the Czech Republic is that, what you can already use is the fact that things happen at different levels. And they happen at different frequencies, so perception is selective. Every perceptual mode is selective and attends only to part of the world. You know, we don't hear the radio-- we don't see the radio waves. And we don't hear the ultrasound, and so does all the lower elements, and so on, and so on. So there are different frequencies, different sound intensities are in the first approximation. This is what you may use. If something is too weak, I don't care. If something has too high frequencies, I don't care, and so on, and so on. There are also different spectral and temporal dynamics to speech, which we are learning about that quite a lot. It happens at different locations of the space. Again, this is the reason why we have a spatial directivity. That's why we have two ears. That's why we have a specifically-shaped ears, and so on, and so on. There are also other cognitive aspects, I mean, sort of, like, the selective attention. Again, we talk about it, that people appear to be able to modify the properties of your cognitive processing depending on what you want to listen to. And my friend Nima Mesgarani with Eddie Chang, who was supposed to be here instead of me, just had a major paper in Nature about that, and so on, and so on. 
There's a number of ways how we can modify the selectivity. We talk about this sharpening the cochlear filters, I mean, depending on the signal from the brain. So speech happens like this, start with a message. You have a linguistic code, maybe 50 bits per second. There are some motor controls. Speech production comes to a speech signal, which has three orders of magnitude larger information content. Through speech perception and cognitive processes, we get, somehow, back to the linguistic code and extract the message, so this is important-- from the small, low bit-rate, to high bit-rate, to the low bit-rate. In production, actually, I don't want to pretend it happens in such a linear way. There are also feedbacks, so there is a feedback from you listen to yourself when you are speaking. You can control how you speak. And you can also actually change the code, because you realize, oh, I should have said it somehow differently. In speech perception, again, we just talked about it, you can, if the message is not getting through, you may be able to tune the system in some ways. You may be changing the things, you know? And you may also use the very mechanical techniques, as I told you, close the window, or walk away. There is also feedback through the dialogue, so from-- between message and message, depending what I'm hearing, I may be asking a different kind of question, so which also modifies the message of the sender. How do we produce speech? So we speak in order to be heard, in order to be understood. So very quickly, I want to go back to something which people already forgot a big way, which is Homer Dudley. He was a great researcher at Bell Laboratories before the Second World War. He retired I think sometime early in '50s. He passed away in the '60s. He was saying message is in the movements of the vocal tract which modulates the carrier, so message in the speech is not in fundamental frequency, it's not the way you are exciting your vocal tract. Message is how you shape the organs of speech production. Proof for that is that you can whisper and you can still understand, so you don't-- how you excite the vocal tract is secondary. How do you generate this audible carrier is secondary. You know, you can use the artificial larynx, so there is this idea, there's a message. A message is being-- goes through modulator into carrier, comes out as speech. So this modulation actually has been used a long time ago, and excuse me for being maybe a little bit simplistic, but it's actually, in some ways, interesting. So this was the speech production mechanism which was developed in some time in the 18th century by the guy Johannes Wolfgang Ritter von Kempelen. And he actually had it right. The only problem is nobody trusted him, because he also invented Mechanical Turk, which was playing the chess. And so he was caught as a cheater, so when he was showing his synthesizer, nobody believed him. But anyways, he was definitely a smart guy. So he used already the principle which is now used. This is a linear model of speech production developed actually before the Second World War, really, again, Bell Laboratories should get the credit. I believe this is stolen from Dudley's paper. So there is a source, and you can change it. It periodic signals out random noise, if you are producing voice signal or unvoice signal. And then there is a resonance control which goes into amplifier, and it produces the speech. So this is the key here, this a key to the point that Dudley developed for this called a voder. 
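The source-filter idea behind the voder can be sketched in a few lines of code. This is a minimal toy synthesizer, not Dudley's actual design: the pitch, formant frequencies, and bandwidths below are values chosen purely for illustration, and the gain normalization is rough:

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000            # sampling rate (assumed)
n = int(fs * 0.5)    # half a second of sound

# Source: a glottal-like impulse train for voiced sounds; swap in
# np.random.randn(n) for the unvoiced (whispered) case.
f0 = 110.0                         # pitch in Hz (assumed)
source = np.zeros(n)
source[::int(fs / f0)] = 1.0

# Filter ("resonance control"): a cascade of second-order resonators,
# one per formant, here set near a neutral vowel.
x = source
for f_c, bw in [(500, 80), (1500, 100), (2500, 120)]:
    r = np.exp(-np.pi * bw / fs)               # pole radius from bandwidth
    theta = 2 * np.pi * f_c / fs               # pole angle from center frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]   # resonator denominator
    b = [1.0 - r]                              # crude gain normalization
    x = lfilter(b, a, x)

x /= np.abs(x).max()   # normalize; write to a WAV file to listen
```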
And he trained the lady who spent a year or something to play it. It was played like an organ. And she was changing the resonance properties of this system here. And she was creating excitation by pushing on a pitch pedal and switching on the big-- on the wrist bar. And if it works well, we may even be able to make the sound. This is a test. [AUDIO PLAYBACK] - Will you please make the voder say, for our Eastern listeners, good evening, Radio-- HYNEK HERMANSKY: This is a real-- - --audience. HYNEK HERMANSKY: --speech. - Good evening, radio audience. HYNEK HERMANSKY: This is-- - And now, for our Western listeners, say, good afternoon, radio audience. - Good afternoon, radio audience. [END PLAYBACK] HYNEK HERMANSKY: Good enough, right? I mean, sort of-- so already in the 1940s, this was demonstrated at a trade fair. And the lady was trained so well that, in the '50s, when Dudley was retiring, they brought her in. She had already retired a long time before. And she could still play it. How the speech works-- I mean, maybe-- oh, I wanted to jump this, but anyways, let's go very quickly through that. So this is a speech signal. This is an acoustic signal. It changes in-- this is a sinusoid, high pressure, low pressure, high pressure, low pressure. If you put some barrier somewhere in the path, what happens is you generate a standing wave. A standing wave stands in space. And there are high pressures, low pressures, high pressures, low pressures. And the frequency depends on the frequency-- I mean, the size of this standing wave depends on the frequency of the signal. So if I put it into something like a vocal tract, which we have here-- so this is the glottis. This is where the excitation comes in. This is a very simple model of the vocal tract. And here I have the lips. So it takes a certain time for the wave to propagate through the tube. And a tube like this, closed at one end and open at the other, will be resonating at a quarter wavelength of the signal, at 3/4 of the wavelength of the signal, at 5/4 of the wavelength of the signal, and so on, and so on. So we can compute at which frequencies this tube will be resonating. This is a very simplistic way of producing speech, but you can generate reasonable speech sounds with that. So if we start putting a constriction there somewhere, which emulates, in a very simple way, how we speak by moving the tongue against the palate and making constrictions-- so when the tube is open like this, it resonates at 500, 1,500, 2,500 if the tube is 17 centimeters long, which is a typical length for the vocal tract. So if I put a constriction here, everything moves down because there is such a thing as perturbation theory, which says that, if you are putting a constriction at the point of the maximum velocity, which is, of course, at the opening, all the modes will go down. And as you go on, basically, the whole thing keeps changing. The point is that, almost in every position of the, say, this tongue, all the resonance frequencies are changing, so the whole spectrum is being affected. And that may become useful to explain something later. But we go like this. At the end, you end up, again, at the same frequencies. These are called nomograms. And they were heavily worked on at the Speech Group at MIT and at Stockholm. So you can see how the formants are moving. And you can see that, for every position of the [INAUDIBLE], here we have the distance of the constriction from the lips.
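Those resonance numbers follow directly from the quarter-wavelength formula for a uniform tube closed at the glottis and open at the lips; a quick check (c rounded to 340 m/s):

```python
# Odd quarter-wavelength modes of a closed-open tube: f_n = (2n - 1) * c / (4 * L)
c = 340.0   # speed of sound in m/s (rounded)
L = 0.17    # vocal tract length in m (typical adult value quoted in the lecture)

for n in (1, 2, 3):
    f = (2 * n - 1) * c / (4 * L)
    print(f"mode {n}: {f:.0f} Hz")
# -> 500, 1500, 2500 Hz, matching the numbers in the lecture
```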
For every position, we are having all the formants moving, so information about what I'm doing with my vocal organs is actually at all frequencies, all audible frequencies in different ways, but it's there everywhere. It's not a single frequency which would carry information about something. All the audible frequencies carry information about speech. That's important. You can also look at it and you can say, you know, what is the-- where the front cavity resonates, the back cavity resonates. Again, this front cavity resonance may become interesting a little bit later if we get to that. But this is a very simplistic model of the speech production, but pretty much contains all the basic elements of the speech. Point here is that, depending on the length of the vocal tract, even when you keep the constriction at the same position-- this is how long is this front part before the construction is-- so all the resonances are moving, but a shorter vocal tract, like the children's vocal tract, or even in a number of females, they typically have a shorter vocal tract than the males, there's a different number of resonances. So if somebody is telling you the information is in the formants of speech, question it, because it's actually impossible to generate the same speech being two different people, especially having two different lengths of the vocal tract. And we get into it when we talk about the speaker dependencies. Second part is-- of the equation is hearing. So we speak in order to be heard, in order to be understood. And again, thanks to Josh, he spent more than sufficient time explaining you enough what I wanted to say. I will just add something-- some very, very small things. So just to summarize, Josh was telling you the theory works basically like a bank of bandpass filters with a changing frequency and output depending on sound level intensity. There are many caveats to that, but I mean, in a first approximation, I 100% agree this is enough for us to follow for all the rest of the talk. Second thing which Josh mentioned very briefly, but I want to stress it, because it is important, firing rates-- because you know the cochlea communicates with the rest of the system through the firings, through the impulses. Firing rates on the auditory nerve are of the order of 1 kilohertz every one millisecond. But as you go up and up in the system, already here on the colliculus is maybe order of magnitude less. And the order in the level of auditory cortex is 2 orders of magnitude less. So of course, I mean, you know, this is how the brain works. I mean, so here we have from periphery up to cortex, but also, I think it was mentioned very briefly, if you look at it, number of neurons increase more than actually decrease in the firing rates, because if we have-- again, those are just orders of magnitude-- 100,000 neurons maybe on the level of auditory nerve, or colliculus nucleus, and you have 100 million neurons maybe on the level of the brain. And this can become handy later, when, if I get all the way to the end of the talk, I will recall this piece of information. Another thing which was mentioned a number of times is that there are not the only connections from ear, from the periphery to the brain, but there is, by some estimates, many, many more-- I mean, again, I mean the estimates vary, but this is something which I have heard somewhere-- maybe there is maybe almost 10 times more connections going from brain to the ear than from the ear to the brain. 
And typically, nature hardly ever builds anything without a reason, so there must be some reason for that. And perhaps we will get into that. Josh didn't talk much about the level of the-- on the cortex. So what's happening on the lower levels, on the periphery? There are just these simple increases of auditory-- of firing rate. There is a certain enhancement of the changes. So at the beginning of the tone-- this is a tone-- the beginning of the tone, there is more firing on the auditory nerve. At the end of the tone, there is some deflection. But when you look at a higher level of the cortex, all these wonderful curves, which are sort of increasing with intensity like they would if you had a simple bandpass filter, start looking quite different. So-- from what I heard, the majority of the cortical neurons are selective to certain levels. Basically, the firing increases to a certain level, and then it decreases again. And they are, of course, selective at different levels. Also, I mean, you don't see just these simple things like here, that your firing starts as a tone starts. There are neurons like that, but there are also neurons which are just interested in the beginning of the signal. There are neurons which are interested in beginnings and ends. There are neurons which are interested only in the ends of the signals, and so on, and so on. Receptive fields, again, have been mentioned already before. Just as we have a receptive field in the visual cortex, we have also receptive fields in auditory cortex. Here we have frequency and time, unlike the x and y receptive fields which are typically the first thing you hear about when you talk about visual perception. They come in all kinds of colors. They tend to be quite long, meaning they can be sensitive for about a quarter of a second-- not all of them, but certainly, there are many, many different cortical receptive fields. So some people are suggesting, given the richness of the neurons in auditory cortex, it's a very legitimate thing to suggest that maybe sounds are processed in the following way, not only that you do the frequency analysis in the cochlea, but then, on the higher levels, you are creating many pictures of the outside world. And then, of course, the only question is how. This is Murray Sachs' paper from the labs at Johns Hopkins in 1988. They just simply said pattern recognition, but I believe there is a mechanism which picks up the best streams and leaves out not so useful things, but the concept has been around for a long time. So this was physiology 101. Psychophysics is saying that you play the signals to listeners, and you ask them what they hear. But we want to know what is the response of the organism to the incoming stimulus, so simply, you play the stimulus and you ask what is the response. First thing which you can ask, do you hear something or not? And you already will discover some interesting stuff. Hearing is not equally sensitive everywhere. It's selective. And it's more sensitive in the area somewhere between 1 and 4 kilohertz. It's much less sensitive at the lower frequencies. This is a threshold. On the threshold level-- here's another interesting thing. If you apply the signals to different ears, as long as the signals happen within a certain period, about a couple of hundred milliseconds, the thresholds are halved.
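The threshold-of-hearing curve he is describing is often approximated in code by Terhardt's closed-form formula, the one used in mp3-style psychoacoustic models. The formula itself is standard, though it is not necessarily the exact curve shown in the lecture:

```python
import numpy as np

def threshold_quiet_db(f_hz):
    """Terhardt's approximation to the absolute threshold of hearing (dB SPL)."""
    f = np.asarray(f_hz, dtype=float) / 1000.0
    return (3.64 * f**-0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3)**2)
            + 1e-3 * f**4)

for f in (100, 500, 1000, 3300, 10000):
    print(f, round(float(threshold_quiet_db(f)), 1), "dB SPL")
# The curve bottoms out (hearing is most sensitive) in the 1-4 kHz region.
```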
Basically, neither of these signals would be heard if you applied only a single one, but if you apply both of them, basically you hear them. If you play signals of different frequencies, if these signals are close enough, close so that, as Josh mentioned about the beats, they happen within one critical band, again, neither the blue nor the green signal would be heard on its own. But if you play them together, you hear them. But if they are further in frequency, you don't hear them. Same thing if these guys are further in time, you wouldn't hear them. So this subthreshold perception actually is kind of interesting. And we will use it. What we didn't talk much about is that there are obvious ways you can modify the threshold of hearing. Here we have a target. And since it is higher than the threshold of hearing, you hear it. But if you play another sound called a masker, you will not hear it, because your threshold basically is modified. It's called the masked threshold. And this part is suddenly not-- this target is not heard. The target can be something useful, but in mp3 coding it's typically noise, which would be pretty annoying. You try to figure out how you can mask the noise with the useful signal, computing these masked thresholds on the fly. The initial experiment with this, what is called simultaneous masking, was the following, and, again, it was Bell Labs, Fletcher and his people. They would figure out what is the threshold of a certain frequency without the noise. But then they would put noise around it, and the threshold had to go up, because there was noise, so there was masking. Then they made a broader noise, and the threshold was going up, as you would expect. There was more noise, so you had to make the signal stronger. And it went up to a certain point-- when you start making the band of noise too wide, suddenly it's not happening anymore. There is no more additional masking. That's how they came up with this concept of critical band. The critical band means: what happens inside the critical band matters-- it influences the decoding of the signal within that critical band. But if it happens outside the critical band, it doesn't. So essentially, if the signals are far away in frequency, they don't interact with each other. And again, this is a useful thing for speech recognition people. They didn't much realize that this is the main outcome of the masking experiments. Critical bands, I mean, again, there are discussions here, but this is the Bark scale, which was developed in Germany by Zwicker and his colleagues. It's pretty much logarithmic from about 600, 700 hertz up, and it's approximately constant up to 600, 700 hertz. If you talk to the Cambridge people, Brian Moore and that sort, it's regarded to be pretty much logarithmic everywhere. But not really, but remember the critical bands from the subthreshold things? Again, the critical band shows up in masking. Things that happen within the critical band integrate. If they happen outside the critical band, they don't interact. Another masking is temporal masking. So you have a signal-- and of course, if you put a masker on it at the same time, it's simultaneous masking. You have to make it much-- the signal much stronger in order for you to hear it. But it also influences things in time. This is what is called forward masking. And this is the one which is probably more interesting and more useful.
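The Bark scale he refers to has a commonly used closed-form approximation, one of several in the literature; a sketch:

```python
import numpy as np

def hz_to_bark(f_hz):
    """Zwicker-style Hertz-to-Bark conversion (one common approximation)."""
    f = np.asarray(f_hz, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

# Bark spacing is roughly linear below ~600-700 Hz and roughly logarithmic above:
for f in (100, 200, 400, 800, 1600, 3200, 6400):
    print(f"{f:5d} Hz -> {float(hz_to_bark(f)):4.1f} Bark")
```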
It's also backward masking, when a masker happens after the signal, but it probably has a different origin, more like cognitive rather than earlier. So there is still a masker. You have to make the signal stronger up to a certain point. When the distance between masker and the signal is more than 200 milliseconds, there is like there's no masker anymore. Basically, there is no temporal masking anymore, but it is within this interval of 200 milliseconds. If you make mask stronger, masking is stronger initially, but it also decays faster. And again, it decays after about 200 milliseconds. So whatever happens outside this critical interval, about a couple of hundred millisecond, it doesn't integrate. But if it happens inside this critical interval, that seems to be influencing-- these signals seem to be influencing each other. And again, I mean, you know, I talk about the subthreshold perception-- if there were two tones which happen within 200 millisecond, neither of them would be heard in isolation, but they are heard if you play them together. Another part which is kind of interesting is that loudness depends, of course, on the intensity of the sound, but it doesn't depend linearly on that. It depends with about cubic root, so in order to make a signal twice as loud, you have to make it about 10 times more in intensity for stimuli which are longer than 200 milliseconds. Equal loudness curves, this is a threshold curve, but these equal loudness curve are telling you what the intensity of the sound-- sorry-- would need to be in order to hear it equally loud. So it's saying that, if you have a 40 dB signal at 1 kilohertz, and you want to make it equally loud at 100 hertz, you have to make it 60 dB, and so on. These curves become flatter and flatter, most pronounced at the threshold at lower levels, but they are there. And they are actually kind of interesting and important. Hearing is rather non-linear. Properties depend on the intensity. Speech of course is happening somewhere around here where the hearing is more sensitive. That was the point here. Modulations, again, we didn't talk much about that, but modulations are very important. Since 1923, it's known that hearing is the most sensitive to certain rate of modulations around 4, 5 hertz. These are experiments from Bell Labs repeated number of times. So this is this for a, a modulations. This experiment, what you do is, that you modulate a signal, and change the depth, and change the frequency. And you are asking, do you hear the modulation or don't you hear the modulation? Very interesting-- interesting thing is, if you look at-- again, I mean, I refer to what Josh was telling you in the morning. If you just take one trajectory of the spectrum, you treat it as a time domain signal, remove the mean and compute its Fourier components-- frequency components, they peak somewhere around 4 hertz, just where the hearing is the most sensitive. So hearing is not very sensitive, obviously, to when the signal is non-modulated, but also there is-- there are almost no components in the signal which would be non-modulated, because when I talk to you, I move the mouth. I mean, I change the things. And I change the things about four times a second, mainly. When it comes to speech, you can also compute-- music, you can also figure out what are the natural rhythms in the music. I stole this from, I believe, the Munich group, from [INAUDIBLE]. He played 60 pieces of music. And then he asked people to tap to the rhythm of the music. 
And this is the histogram of tapping. Most of the people, for most of the music, tapping was about four times a second. This is where the hearing is most sensitive. And this is modulation frequency of this music. So people play music in such a way that we hear it well, that it basically resonates with the natural frequency which we are perceiving. You can also ask the similar thing. So, in speech, you can play the speech sentences. And you ask people to tap in to the rhythm of the sentences. Of course, what gets out is the syllabic rate. And syllabic rate is about 4 hertz. Where is the information in speech? Well, we know what the ear is doing. It analyzes signal into individual frequency bands. We know what Homer Dudley was telling us. When messages and modulations of these frequencies-- as a matter of fact, that was the base of his vocoder. What he also did was that he designed-- actually, it wasn't only him. There was another technique. This one is, kind of, somehow cleaner thing, which is called the spectrograph, which tells you about the spectrum of frequency components of the acoustic signal. So you take the signal. You put it through a bank of bandpass filters. And then here, you basically display, on the z-axis, intensity in each frequency band. This was, I heard, used for listening for German submarines, because they wanted to-- they knew that acoustic signatures were different for friendly submarines and enemy submarines. People listen to it-- for it, but also people realized it may be useful to look at the signal-- acoustic signal somehow. Waveform, it wasn't making all that much sense, but the spectrogram was. Danger there was that the people who were working in speech got hold of it. And then they start, sort of, looking at the spectrograms. And they say, haha, we are seeing the information here. We are seeing the information in waves. The spectrum is changing, because not only that this was the way the origin of the spectrogram was developed, that you were displaying changes in energy in individual frequency bands, but you can also look at this. This when you get to what is called a short-term spectrum of speech. And people said, oh, this short-term spectrum looks different for R than for E, so maybe this is the way to recognize speech. So indeed, I mean, those are two ways of generating the spectrograms. I mean, this was the original one, bank of bandpass filters. And you were displaying the energy as a function of time. This is what your ear is doing. That's what I'm saying. This is not what your ear is doing, that if you take a short segments of the signal, and you compute the Fourier transform, then you display the Fourier transform one frame at a time, but this is the way most of the speech recognition systems work. And I'm suggesting that maybe we should think about other ways. So now we have to deal with all these problems. So we have a number of things coming in in the form of the message with all these chunk around it. And machine recognition of speech would like to transcribe the code which carries the message. This is a typical example of the application of speech recognition. I'm not saying this is the only one. There are attempts to recognize just some key words. There are attempts to actually generate the understanding of what people are saying, and so on, but we would be happy, in most cases, just to transcribe the speech. Speech has been produced to be perceived. We already talked about it. It evolved over millennia to fit the properties of hearing. 
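The two ways of generating a spectrogram contrasted above can be sketched side by side. This is a minimal illustration with an arbitrary toy filter bank, not the historical spectrograph design; the random input stands in for a recorded speech signal:

```python
import numpy as np
from scipy.signal import stft, butter, lfilter, hilbert

fs = 8000
x = np.random.randn(fs)          # stand-in for one second of speech

# Way 1 (what most recognizers do): Fourier transforms of short frames.
f, t, Z = stft(x, fs=fs, nperseg=200)        # 25 ms frames
stft_spectrogram = np.abs(Z)

# Way 2 (closer to what the ear does): a bank of bandpass filters,
# displaying the energy (envelope) in each band as a function of time.
bands = [(100, 300), (300, 700), (700, 1500), (1500, 3000)]  # toy filter bank
envelopes = []
for lo, hi in bands:
    b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
    sub = lfilter(b, a, x)
    envelopes.append(np.abs(hilbert(sub)))    # envelope of the sub-band
fb_spectrogram = np.vstack(envelopes)
```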
So this is-- I'm sort of seconding what Josh was saying. Josh was saying, you can learn about the hearing by synthesizing stuff. I'm saying you can learn about hearing by trying to recognize the stuff. So if you put something in and it works, and it supports some theory of hearing, you may be kind of reasonably confident that it was something which has been useful. Actually there's a paper about that, of which, of course, I'm a co-author, but I didn't want to show that. I thought I would leave this one, but I didn't, at the last minute. Anyways, speech recognition-- speech signal has high bit-rate, recognizer comes in, information, low bit-rate. So what you are doing here, you are trying to reorganize your stuff. You are trying to reduce the entropy. If you are reducing the entropy, you better know what you are doing, because otherwise, you get real garbage. I mean, that's, kind of, like, one of these common sense things, right? So you want to use some knowledge. You have plenty of knowledge in this recognizer. Where does this knowledge come from? We keep discussing it all the time. It came from textbooks, teachers, intuitions, beliefs, and so on. And it's a good thing about that, that you can hardwire this knowledge so that you don't have to learn it, relearn it next time based on the data. Of course, the problem is that this knowledge may be incomplete, irrelevant, can be plain wrong, because, you know, who can say that whatever teachers tell you, or textbooks tell you, or your intuitions or beliefs is always true? Much more often now, what people are using is that the knowledge comes directly from the data. Such knowledge is relevant and unbiased, but the problem is that you need a lot of training data. And it's very hard to get the architecture of the recognizer from the data, at least, I don't know quite well how to do it yet. So these are two things. And again, I mean, let me go back to the '50s. The first knowledge-based recognizer was based on the spectrograms. There was Richard Galt. And he was looking at spectrograms and trying to figure out how the short-term spectrum looks for different speech sounds. Then he thought he would make this finite state machine, which will generate the text. Needless to say, it didn't work too well. He got beaten by the data-driven approach, where people took high-pass filtered speech and low-pass filtered speech, and displayed the energies from these two channels on-- at the time it was an oscilloscope. And they tried to figure out what are the patterns. They tried to memorize the patterns, make templates from the training data. And they tried to match them against the test data. The task was recognizing ten digits. And it was working reasonably well, better than 90% of the time for a single speaker, and so on, and so on. But it's interesting that, already in the '50s, the knowledge-based approach got beat by the data-driven approach, because the knowledge maybe wasn't exactly what you needed to use. You were looking at the shapes of the short-term spectra, basically. Of course, now, we are in the 21st century, finally. A number of people say, this is the real way of recognizing speech. You take the signal as it comes from the microphone. You take the neural net. You put in a lot of training data, which contains all sources of unwanted variability-- all possible ways you can disturb the speech-- and out comes the speech message.
The key thing is, I'm not saying that this is wrong, but I'm saying that, maybe this is not the most efficient way of going about it, because, in this case, you would have to retrain the recognizer every time. It's a little bit like, sort of, you know, if you look at the hearing system, or the simple animal system-- this is a moth here. Here it changes in acoustic pressure to changes in firing rate. It goes to very simple brain, very small one. You know, this is not the way the human hearing is working. Human hearing is much more complex. And again, Josh already told us a lot about it, so I won't spend much time. The point here is, the human hearing is frequency-selective. It goes through a number of levels. This is very much along the deep net and that sort of things. But still, there is a lot of structure there in the hearing system. So it makes at least some sense to me, if you want to do what people are doing more and more, and there will be a whole special session next week at Interspeech on how to train the things directly from the data, probably you want to have highly-structured environment. You want to have a convoluted pre-processing recursive structures, and so on, and long, short-term memory. Yeah, here are actually some, but all these things are being used. And I think this is the direction to go. But I still argue that maybe it's a better-- there's a better way to go about it. A better way to go about it is that you try first to do some pre-processing of the signal and derive some way of describing the signal more efficiently, using the features, and so on, and so on. Here you put all the knowledge which you possibly may want to-- you already have. This knowledge can be derived from some development data, but you don't want to use directly the speech signal every time you are using-- you don't want to retrain, basically, every time, directly from the speech signal. You want to reserve your training data, the task-specific training data, to deal with the effects of the noise which you don't understand. This is where the machine learning comes. I'm not saying that this is not a part of machine learning, but, I mean, this is-- there are two different things which you are going to do. I was just looking for some support. This one came from Stu Geman from Brown University and his colleagues. Stu Geman is a machine learning person, definitely, but he says, we feel that meat is in the features rather than in the machine learning, because they go overboard, basically, explaining that, if you just rely on machine learning, sure, you have a neural net which can approximate just about any function, given that you have infinite amount of data an infinitely large neural net. And they say, infinite is a kind of not useful engineering concepts. So they feel like that, if representations actually are-- I hope they still feel the same. I didn't talk to them now, but it seems like that there is some support in this notion, what I'm saying. But of course, problem with the features is following, whatever you stripped on the features, this is a bottleneck. Whatever you decide that is not important is lost forever. You will never recover from it, right? Because I'm asking for feature extraction. I'm asking for this emulation of the human perception, which strips out a lot of information, but I still think that we need to do it if we want to design a useful engineering representations. 
The other problem, of course, is whatever you leave in, the noise, the information which is not relevant to your task, you will have to deal with it later. You will need to train the whole machine on that, so you want to be very, very careful. You are walking a thin line here. What is it that I should leave out? What is it that I should keep in? It's always safer to keep a little bit more in, obviously. But this is the goal which we have here. And I wanted to say, features can be designed using development data. And when I say use the development data, I mean design your features and then use them. Don't use this development data anymore. We have a lot of data for the designing of good features. And I think that, again, is happening in the field-- good. How speech recognition was done in the 20th century-- this is what I know maybe the best, so we'll spend some time on it. And it's still done largely in-- there are some variants of this that are still done. You take the signal. And you derive the features. In the first place, you derive what is called short-term features, so you take short segments of the signal, about 10 to 20 milliseconds. And you derive some features from that. That was in the 20th century. Now we are taking much longer segments, but we'll get into that. But you derive it at about 100 hertz sampling, every 10 milliseconds, so you turn the one-dimensional signal into a two-dimensional signal. And here, typically, the first step is frequency analysis, so imagine those are frequency vectors, or something else derived from frequency vectors, stuff like that. Those are just tricks, signal processing tricks which people use-- but one-dimensional to two-dimensional. Next thing is, you estimate the likelihood of the sounds each 10 milliseconds. So here, what I-- imagine that here we have different, say, speech sounds, maybe 41 phonemes, maybe 3,000 context-dependent phonemes, and so on, it depends-- but those are parts of speech which make some sense. And they come, typically, from phonetics theory. And we know that you can generate different words by putting phonemes together in different ways, and so on, and so on. So suppose for simplicity that there are 41 phonemes. And so if there is a red one, red means a high posterior probability of the-- actually, we need more. We need the likelihoods rather than posteriors, so we take the posteriors and just divide them by the priors to get the likelihoods-- meaning that this phoneme has a high likelihood and the white ones don't have a high likelihood at this time. So the next step is that you do the search on it. This is a painful part. And I won't be spending much time on that. I just want to give you some flavor of this. You try to find the best path through this lattice of the likelihoods. And if you are lucky, the best path, then, is going to represent your speech sounds. So then the next thing is only that you transcribe, to go from the phonemic representation into a lexical representation, basically, because there is typically a one-to-one relation-- well, one should be careful with one-to-one, but it is a known relation between phonemes and the transcription. So we know what has been said. So this is how the speech recognition is done. Talking about this part, I mean, here we have to deal with one major problem, which is, like, the speech doesn't come out this way. It doesn't come out as a sequence of individual speech sounds, but, since I'm talking to you, I'm moving the mouth.
I'm moving the mouth continuously. There is a thing that first I can make certain sounds longer, certain sounds shorter. And then I add some noise to it. Finally, because of what is called co-articulation, each target phoneme gets spread in time, so you get a mess. But people say-- sometimes, people like to say, speech recognition, this is our biggest problem. I claim this is not a problem. It is a feature. And the feature is important, because it comes in quite handy later. Hopefully, I will convince you about it. But what we get is a mess, so this is not easy to recognize, right? We have co-articulations. We have speaker dependencies, noise from the environment, and so on, and so on. So the way to deal with it is to recognize that different people may sound different, communication and environment may differ, so the features will be dependent on a number of things, on environmental problems, on who is saying things, and so on. People say the same things at different speeds. I can speak faster, I can speak slower, still, the message is the same. So we use what is called the Hidden Markov Model, where you try to find such a sequence of the phonemes which optimizes the conditional probability of the model, given the data. And the models you generate on the fly, as many models as possible, actually, an infinite number of models, but, of course, again, you can't do it infinitely, so you do it in some smart ways. And this is being computed through a modified Bayes' rule. Modified is because, for one, I mean, you would need a prior probability of the signal, and so on. We don't use that. But also, what we are doing, we somehow arbitrarily scale the thing which is called the language model, because this is a prior probability of the particular utterance. These are the likelihoods coming from the data; combining these two things together and finding the best match, you get the output which best matches. Model parameters are typically derived from the training data. The problem is how to find the unknown utterance. You don't know what is the form of the model. And you don't know what is the data. So we are dealing with what is called a doubly stochastic model, a Hidden Markov model. Speech is a sequence-- it's a sequence of hidden states. You don't see this hidden state. And also, you don't know what comes from any state. So you don't know for sure in which state you are. You don't know for sure what comes out, but you know that-- well, you assume that this is how the speech looks. So here I have a picture. I apologize for being trivial about this, but imagine that you have a string of-- a group of people. Some are female, some are male. There are groups of males, groups of females. And each of them says something. He says, hi. And you can measure something. This is a fundamental frequency. You get some measurement out of that, but you don't see them. But what you know is that they interleave, basically. For a while, there is a group of males, then there is a-- then the speech switches to a group of females. And then you stay for a while in the group of females, and so on, and so on. So basically-- and you know what is the probability distribution of the fundamental frequency for males, so some distribution. You know what is the distribution of the fundamental frequency for females. You know what is the probability of the first group being male.
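The "modified Bayes' rule" described here is commonly written with a tunable language-model weight; in standard notation, with X the acoustic data, W a candidate word sequence, and the exponent playing the role of the arbitrary scaling he mentions (the prior of the acoustics P(X) is dropped because it does not affect the argmax):

```latex
\hat{W} \;=\; \arg\max_{W} \; P(X \mid W)\, P(W)^{\alpha}
        \;=\; \arg\max_{W} \; \log P(X \mid W) \;+\; \alpha \log P(W)
```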
Subsequently, you also know what is the probability of the [AUDIO OUT] Because, to me, the features are the important part, as I told you-- we want to take out what we don't need, but we don't want to take out stuff that you may need. I told you that one important role of perception is to eliminate some of this information-- basically, to eliminate the irrelevant stuff and focus on the relevant stuff. So this is where I feel the properties of perception can come in very strongly, because this is what emulates this basic process of speech, of the extraction of information [INAUDIBLE]. Especially with the Hidden Markov models-- that speech consists of sequences of sounds, and they can be produced at different speeds, and other things. It's important. But here, we can use a lot of our model, those features which can also be designed based on the data. And what comes out is probably going to be relevant to speech perception, so this is my point for how you can use your engineering to verify our theories of speech perception. We largely use, nowadays, the neural nets to derive the features. So how we do it is that we sort of-- because we know that the best set of features are posteriors of the classes we want to recognize, our speech sounds, maybe that's going to be useful. If you do a good job, actually you can do a reasonable job of it. So you take a signal, you do some signal processing-- and I will be talking about signal processing quite a lot. But then it goes into a neural net, nowadays a deep neural net, and you estimate posteriors of the different speech sounds. And then what comes out is whatever has the high posterior probability of the phoneme, so you read out the [INAUDIBLE] sequence of the phonemes. As the classes, you can use directly context-independent phonemes, as in this example, a small number. You can use context-dependent phonemes, which are used quite a lot, because they try to optimize for the fact that how a phoneme is produced depends on what happened in the neighborhood, [INAUDIBLE] These posteriors can be directly used in the search. This is the search through the lattice of the likelihoods in recognition. And again, I mean, it's coming back. This was the late 1990s, but this is the way that most of these recognizers work. This is the major way now how you do this recognition. There's another way, which is called bottleneck or tandem-- we were involved in that too-- which was a way to make the neural nets friendly to people who were used to the old generative HMM models, because you basically convert your outputs from the posteriors into some features which your generative HMM model would like. What you did is you decorrelated them, you Gaussianized them so that they have a normal distribution, and you use them as features. And the bottom line is, if you get good posteriors, you will get good features. And we know how to use them. And this is pretty much the mainstream now. |
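The posterior-to-likelihood conversion used in this hybrid scheme is a one-liner; a sketch with made-up numbers (in practice the class priors would be estimated from training alignments, and the constant factor p(frame) is ignored because it is the same for all classes):

```python
import numpy as np

def posteriors_to_log_likelihoods(posteriors, priors, eps=1e-10):
    """Divide frame-wise class posteriors p(class | frame) by class priors
    to get likelihoods p(frame | class) up to a constant (Bayes' rule)."""
    return np.log(posteriors + eps) - np.log(priors + eps)

# Toy example: 3 frames x 4 phoneme classes from some acoustic model.
post = np.array([[0.7, 0.1, 0.1, 0.1],
                 [0.2, 0.5, 0.2, 0.1],
                 [0.1, 0.1, 0.2, 0.6]])
priors = np.array([0.4, 0.3, 0.2, 0.1])      # hypothetical class priors
loglik = posteriors_to_log_likelihoods(post, priors)   # fed to the search
```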
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_72_Josh_McDermott_Introduction_to_Audition_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOSH MCDERMOTT: We're going to get started again. So where we stopped, I had just played you some of the results of this text, your synthesis algorithm. We all agreed that they sounded pretty realistic. And so the whole point of this was that this gives plausibility to the notion that you could be representing these textures with these sorts of statistics that you can compute from a model of what we think encapsulates the signal processing in the early auditory system. And, again, I'll just underscore that the sort of cool thing about doing the synthesis is that there's an infinite number of ways in which it can fail, right. And by listening to it and convincing yourself that those things actually sound pretty realistic, you actually get a pretty powerful sense that the representation is sort of capturing most of what you hear when you actually listen to that natural sound, right? And for instance, we could design a classification algorithm that could discriminate between all these different things, right. But the point is that they could-- the representation could still not capture all kinds of different things that you would hear. And by synthesizing, because of the fact that you can potentially fail in any of the possible ways, right. And then listen and observe whether the failure occurs. You get a pretty powerful method. All right. But one thing that you might be concerned about. And this is sort of something that was annoying me, right, is that what we've done here is we've imposed a whole bunch of statistical constraints, right. So we're measuring like this really large set of statistics from the model, right. And then generating things that have the same values of those statistics. So there's this question of whether any set of statistics will do. And so we wonder what would happen if we measured statistics from a model that deviates from what we know about the biology of the ear. So, in particular, you remember that in this model that we set out, there were a bunch of different stages, right. So we've got this initial stage of bandpass filtering. There's the process of extracting the envelope and then applying amplitude compression. And there's this modulation filtering. And in each of these cases, there are particular characteristics of the signal processing of that's explicitly intended to mimic what we see in biology. And so in particular as we noted the kinds of filter banks that you see in biological systems are better approximated by something that's logarithmically spaced than something that's linearly spaced. So we remember-- remember that picture I showed at the start, where we saw that the filters up here were a lot broader than the filters down here, all right. OK, and so we can ask, well, what happens if we swap in a filter bank that's linearly spaced. It's sort of more closely analogous to like an FFT, for instance. Similarly we can ask, well, what happens if we kind of get rid of this nonlinear function here that's applied to the amplitude envelope. And we make the amplitude respond to linear instead. And so we did this. 
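A minimal sketch of a front end with the stages just listed, assuming an arbitrary band count and a 0.3 power law for compression; this is a simplification for illustration, not the authors' actual code:

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert

def texture_front_end(x, fs, n_bands=8, f_lo=80.0, f_hi=4000.0, power=0.3):
    """Log-spaced bandpass filters, sub-band envelopes via the Hilbert
    transform, then power-law amplitude compression."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # logarithmic spacing
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
        env = np.abs(hilbert(lfilter(b, a, x)))     # sub-band envelope
        envs.append(env ** power)                   # compressive nonlinearity
    return np.vstack(envs)

# The "altered" models in the experiment correspond to swapping np.geomspace
# for np.linspace (linear filter spacing) or setting power=1.0 (no compression).
```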
So you can change the auditory model and play the exact same game. So you can measure statistics from that model and synthesize something from those statistics. And then ask whether they sound any different. And so we did an experiment. So we would play people the original sound. And from that original sound, we have two synthetic versions. One that's generated from the statistics of the model that replicates biology as best we know how, and another that is altered in some way. And we would ask people which of the two synthetic versions sounds more realistic. And so there's four conditions in this experiment because we could alter these models in three different ways. So we could get rid of amplitude compression-- that's the first bar. We could make the cochlea linearly spaced. Or we could make the modulation filters linearly spaced. Or we could do all three, and that's the last condition. And so what's being plotted on this axis-- whoops, I gave it away-- is the proportion of trials on which people said that the synthesis from the biologically plausible model was more realistic. And so if it doesn't matter what statistics you use, you should be right here at this 50% mark in each of these cases. And as you can see, in every case, people actually report that, on average, the synthesis from the biologically plausible model is more realistic. And I'll give you a couple of examples. So here's crowd noise synthesized from the biologically plausible auditory model. [CROWD NOISE] And here's the result of doing the exact same thing but from the altered model. And this is from this condition here where everything is different. And you can hear, a little here, it just kind of sounds weird. [CROWD NOISE] It's kind of garbled in some way. So here's a helicopter synthesized from the biologically plausible model. [HELICOPTER NOISE] And here's from the other one. [HELICOPTER NOISE] Sort of-- doesn't sound like the modulations are quite as precise. And so the notion here is that-- we're initializing this procedure with noise. And so the output is a different sound in every case, sharing only the statistical properties. And so the statistics that we measure and use to do the synthesis, they define a class of sounds that includes the original-- in fact, they define a set that includes the original as well as a whole bunch of others. And when you run the synthesis, you're generating one of these other examples. And so the notion is that if the statistics are measuring what the brain is measuring, well, then, these examples ought to sound like another example of the original sound. You ought to be generating sort of an equivalence class. And the idea is that when you are synthesizing from statistics of this non-biological model, it's a different set, right? So, again, it's defined by the original, but it contains different things. And they don't sound like the original because they're presumably not defined by the measurements that the brain is making. I just mentioned to you the fact that the procedure will generate a different signal in each of these cases. Here you can see the result of synthesizing from the statistics of a particular recording of waves. These are three different examples. And if you sort of inspect these, you can kind of see that they're all different, right? They sort of have peaks in amplitude in different places and stuff. But on the other hand, they all kind of look the same in the sense that they have the same textural properties, right? And that's what's supposed to happen.
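[Editor's note: the synthesis logic can be sketched as a toy optimization-- start from noise and iteratively nudge a signal's statistics toward those measured from an original. This sketch matches only per-band envelope means and variances by gradient descent; the actual algorithm imposes a much richer statistic set on subband envelopes, so treat this purely as an illustration of the "measure, then match" idea.]

```python
import numpy as np

def stats(E):
    # one statistic vector: per-band envelope mean and variance
    return np.concatenate([E.mean(axis=1), E.var(axis=1)])

def synthesize(target, E, lr=100.0, n_iter=1000):
    E = E.copy()
    n, T = E.shape
    for _ in range(n_iter):
        err = stats(E) - target
        # analytic gradient of 0.5 * ||stats(E) - target||^2 w.r.t. E
        g_mean = err[:n, None] / T
        g_var = err[n:, None] * 2.0 * (E - E.mean(axis=1, keepdims=True)) / T
        E -= lr * (g_mean + g_var)
    return E

rng = np.random.default_rng(0)
E_orig = rng.random((4, 1000))                # stand-in "original" envelopes
E_new = synthesize(stats(E_orig), rng.random((4, 1000)))
print(np.abs(stats(E_new) - stats(E_orig)).max())  # residual should be tiny
```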
And so the fact that you have all of these different signals that have the same statistical properties raises this interesting possibility, which is that if the brain is just representing time-average statistics, we would predict that different exemplars of a texture ought to be difficult to discriminate. And so this is the thing that I'll show you about next-- an experiment that attempts to test whether this is the case, to try to test whether, really, you are, in fact, representing these textures with statistics that summarize their properties by averaging over time. And in doing so, we're going to take advantage of a really simple statistical phenomenon, which is that statistics that are measured from small samples are more variable than statistics measured from large samples. And that's what is exemplified by the graph that's here on the bottom. So what this graph is plotting is the results of an exercise where we took multiple excerpts of a given texture of a particular duration. So, here, 40 milliseconds, 80, 160, 320. So we get a whole bunch of different excerpts of that length. And then we measure a particular statistic from each excerpt. So in this case it's a particular cross-correlation coefficient for the envelopes of a pair of sub-bands. So we're going to measure that statistic in those different excerpts. And then we're just going to try to see how variable that is across excerpts. And that's summarized with the standard deviation of the statistic. And that's what's plotted here on the y-axis. And so the point is that when the excerpts are short, the statistics are variable. So you measure it in one excerpt and then another and then another. And you don't get the same thing, all right. And so the standard deviation is high. And as the excerpt duration increases, the statistics become more consistent. They converge to the true values of the underlying stationary process. And so the standard deviation kind of shrinks. All right, and so we're going to take advantage of this in the experiments that we'll do. All right, so first, to make sure that-- or to give plausibility to the notion that people might be able to base judgments on long-term statistics, we ask people to discriminate different textures. So these are things that have different long-term statistics. And so in the experiment, people would hear three sounds, one of which would be from a particular texture like rain. And then two others of which would be different examples of a different texture like a stream. So you'd hear rain-- stream one-- stream two. And the task was to say which sound was produced by a different source. And so in this case, the answer would be the first-- all right. And so we gave people this task. And we manipulated the duration of the excerpts. And so the notion here is that, given this graph, what happens is that the statistics are very variable for short excerpts. And then they become more consistent as the excerpt duration gets longer. And so if you're basing your judgments on the statistics computed across the excerpt, well, then you ought to get better at seeing whether the statistics are the same or different as the excerpt duration gets longer. All right, and so what we're going to plot here is the proportion correct at this task as a function of the excerpt duration. And, indeed, we see that people get better as the duration gets longer. So they're not very good when you give them a really short clip. But they get better and better as the duration increases.
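[Editor's note: the sampling effect this experiment leans on is easy to reproduce. The sketch below, with illustrative numbers only, measures a correlation statistic between two synthetic "envelope" signals over many random excerpts of each length and prints the standard deviation of that statistic, which shrinks as the excerpts get longer, just as in the plot described above.]

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200000
shared = rng.standard_normal(T)              # common component -> correlation
a = shared + rng.standard_normal(T)          # two correlated "envelopes"
b = shared + rng.standard_normal(T)

for dur in (40, 80, 160, 320, 1280, 5120):   # excerpt lengths in samples
    corrs = []
    for _ in range(200):                     # many random excerpts per length
        i = rng.integers(0, T - dur)
        corrs.append(np.corrcoef(a[i:i + dur], b[i:i + dur])[0, 1])
    print(dur, round(float(np.std(corrs)), 3))  # std shrinks with duration
```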
Now, of course, this is not really a particularly exciting result. When you increase the duration, you give people more information. And pretty much on any story, people ought to be getting better, right? But it's at least consistent with the notion that you might be basing your judgments on statistics. Now the really critical experiment is the next one. And so in this experiment, we gave people different excerpts of the same texture. And we asked them to discriminate them. So again on each trial, you hear three sounds. But they're all excerpts from the same texture. But two of them are identical. So in this case, the last two are physically identical excerpts of, for instance, rain. And the first one is a different excerpt of rain. And so you just have to say, which one is different from the other two? All right, now, maybe the null hypothesis here is what you might expect if you gave this to a computer algorithm that was just limited by sensor noise. And so the notion is that as the excerpt duration gets longer, you're giving people more information with which to tell that this one is different from this one. So maybe if you listen to just the beginning, it would be hard. But as you got more information, it would get easier and easier. If in contrast you think that what people represent when they hear these sounds are statistics that summarize the properties over time-- well, I've just shown you how the statistics converge to fixed values as the duration increases. And so if what people are representing are those statistics, you might paradoxically think that as the duration increases, they would get worse at this task-- all right. And that's, in fact, what we find happens. So people are good at this task when the excerpts are very short, on the order of, like, 100 milliseconds. So they can very easily tell you whether you're-- which of the two excerpts is different. And then as the duration gets longer and longer, they get progressively worse and worse. And so we think this is consistent with the idea that when you are hearing a texture, once the texture is a couple of seconds long, you're predominantly representing the statistical properties, averaging the properties over time. And you lose access to the details that differentiate different examples of rain-- so the exact positions of the raindrops or the clicks of the fire, what have you. Why should people be unable to discriminate two examples of rain? Well, you might think, well, these textures are just homogeneous, right? There's just not enough stuff there to differentiate them. And we know that that's not true, because if you just chop out a little section at random, people can very easily tell you whether it's the same or different, right. So at a local time scale, the detail is very easily discriminable. You might also imagine that what's happening is that over time, maybe there's some kind of masking in time. Or that the representation kind of gets blurred together in some strange way. On the other hand, when you give people sounds that have different statistics, you find that they're just great, right. So they get better and better as the stimulus duration increases. And, in fact, the fact that they continue to get better seems to indicate that the detail that is streaming into your ears is being accrued into some representation that you have access to. And so what we think is happening is that those details come in. They're incorporated into your statistical estimates.
But the fact that you can't tell apart these different excerpts means that these details are not otherwise retained. All right, so they're accrued into statistics, but then you lose access to the details on their own. The point is that the result as it stands, I think, provides evidence for a representation of time-average statistics. So that is, when the statistics are different, you can tell things are distinct. When they're the same, you can't. And it relates to this phenomenon of the variability of statistics as a function of sample size. So, a couple of control experiments that are probably not exactly addressing the question you just raised but maybe are related. So one obvious possibility is that the reason that people are good at the exemplar discrimination here when the excerpts are short, and bad here, might be the fact that maybe your memory is decaying with time. All right, so the way that we did this experiment originally was that there was a fixed inter-stimulus interval, so that was the same. It was a couple of hundred milliseconds in every case. And so to tell that this is different from this, the bits that you would have to compare are separated by a shorter time interval than they are in this case, right, where they're separated by a longer time interval. And if you imagine that memory decays with time, you might think that would make people worse. So we did a control experiment where we equated the inter-onset interval. All right, so that the elapsed time between the stuff that you would have to compare in order to tell whether something was different was the same in the two cases. And that basically makes no difference, right. You're still a lot better when the things are short than when they're long. And we went to pretty great lengths to try to help people be able to do this with these long excerpts. So you might also wonder, well, given that you can do this with the short excerpts-- the short excerpts are really just analogous to the very beginning of these longer excerpts. All right, so why can't you just listen to the beginning? And so we tried to help people do just that. So in this condition here, we put a little gap between the very beginning excerpt and the rest of the thing, right? And we just told people, all right, there's going to be this little thing at the start-- just listen for that. And people can't do that. So we also did it at the end-- so with the gap at the end. So again, you get this little thing of the same length as you have in the short condition. And this is performance in this case-- people are good when it's short and a lot worse when it's longer. And the presence of a gap doesn't really seem to make a difference, right. So you have great trouble accessing these things. Another thing that's sort of relevant and related was these experiments that resulted from our thinking about the fact that textures are normally not generated from our synthesis algorithm, but rather from the superposition of lots of different sources. And so we wondered what would happen to this phenomenon if we varied the number of sources in a texture. So we actually generated textures by superimposing different numbers of sources. So in one case we did this with speakers. So we wanted to get rid of linguistic effects. And so we used German speech and people that didn't speak German. So it was like a German cocktail party that we're going to generate. So we have one person like this.
[FEMALE VOICE 1] [SPEAKING GERMAN] And then 29-- [GROUP VOICE] [SPEAKING GERMAN] All right, room full of people speaking German, all right? And so we do the exact same experiment where we give people different exemplars of these textures. And we ask them to discriminate between them. And so what's plotted here is the proportion correct as a function of duration. Here, we've reduced it to just two durations-- short and long. And there's four different curves corresponding to different numbers of speakers in that signal, right. So the cyan here is what happens with a single speaker. And so with a single speaker, you actually get better at doing this as the duration increases. All right, and so that's, again, consistent with the null hypothesis that when there's more information, you're actually going to be better able to say whether something is the same or different. But as you increase the number of people at the cocktail party-- the density of the signal in some sense-- you can see that performance for the short excerpts doesn't really change. So you retain the ability to say whether these things are the same or different. But there's this huge interaction. And for the long excerpts, you get kind of worse and worse. So impairment at long durations is really specific to textures-- doesn't seem to be present for single sources. To make sure the phenomenon is not really specific to speech, we did the exact same thing with synthetic drum hits. So we just varied the density of a bunch of random drum hits. Like, here's five hits per second. [DRUM SOUNDS] Here's 50. [DRUM SOUNDS] All right, and you see the exact same phenomenon. So for the very sparsest case, you get better as you go from the short excerpts to the long. But then as the density increases, you see this big interaction. And you get selectively worse here for the long duration case. OK, so, again, it's worth pointing out that the high performance with the short excerpts indicates that all the stimuli have discriminable variation. So it's not the case that these things are just, like, totally homogeneous and that that's why you can't do it. It seems to be a specific problem with retaining temporal detail when the signals are both long and texture-like. OK, so what does this mean? Well, go ahead. Here's the speculative framework. And this sort of gets back to these questions about working memory, and so forth. And so this is the way that I make sense of this stuff. And each one of these things is pure speculation or almost pure speculation. But I actually think you need all of them to really totally make sense of the results. It's at least interesting to think about. So I think it's plausible that sounds are encoded both as sequences of features and with statistics that average information over time. And I think that the features with which we encode things are engineered to be sparse for typical natural sound sources. But they end up being dense for textures. So the signal comes in-- you're trying to model that with a whole bunch of different features that are in some dictionary you have in your head. And for a signal like speech, your dictionary features include things that might be related to phonemes and so forth. And so for like a single person talking, you end up with this representation that's relatively sparse. It's got sort of a small number of feature activations. But when you get a texture, in order to actually model that signal, you need lots and lots and lots of feature coefficients, all right, in order to actually model the signal.
And my hypothesis would be that memory capacity places limits on the number of features that can be retained. All right, so it's not really related to the duration of signal that you can encode, per se. It's on the number of coefficients that you can retain that you need to encode that signal. And the additional thing I would hypothesize is that sound is continuously, and-- this is critical-- obligatorily encoded. All right, so this stuff comes into your ears. You're continuously projecting it onto this dictionary of features that you have-- all right. And you've got some memory buffer within which you can hang onto some number of those features. But then once the memory buffer gets exceeded, it gets overwritten. And so you just lose all the stuff that came before. So when your memory capacity for these feature sequences is reached, the memory is overwritten by the incoming sound. And the only thing you're left with are these statistics. So I'll give you one last experiment in the texture domain, and then we'll move on. So this is an experiment where we presented people with an original recording, and then the synthetic version that we generated from the synthesis algorithm. And we just asked them to rate the realism of the synthetic example. And so this is just a summary of the results of that experiment where we did this for 170 different sounds. And this is a histogram of the average realism rating for each of those 170 sounds. And there's just two points to take away from this, right. The first is that there's a big peak up here. So they rated the realism on a scale of 1 to 7. And so the big peak centered at 6 means that the synthesis is working pretty well most of the time. And that's sort of encouraging. But there's this other interesting thing, which is that there's this long tail down here, right. And what this means is that people are telling us that this synthetic signal that is statistically matched to this original recording doesn't sound anything like it. And that's really interesting because it's statistically matched to the original. So it's matched in all these different dimensions, right. And, yet, there's still things that are perceptually missing. And that tells us that there are things that are important to the brain that are not in our model. This is a list of the 15 or so sounds that got the lowest realism ratings. And just to make things easy on you, I'll put labels next to them. Because by and large, they tend to fall into sort of three different categories-- sounds that have some sort of pitch in them, sounds that have some kind of rhythmic structure, and sounds that have reverberation. And I'll play you these examples, because they're really kind of spectacular failures. Here, I'll play the original version and then the synthetic. [RAILROAD CROSSING SOUNDS] And here's the synthetic. I'm just warning you-- it's bad. [SYNTHETIC RAILROAD CROSSING SOUNDS] Here's the tapping rhythm-- really simple but-- [TAPPING RHYTHM SOUNDS] And the synthetic version. [SYNTHETIC TAPPING RHYTHM SOUNDS] All right. This is what happens if you-- well, this is not going to work very well because we're in an auditorium. But I'll try it anyways. This is a recording of somebody running up a stairwell that's pretty reverberant. [STAIR STEP SOUNDS] And here's the synthetic version. And it's almost as though the echoes don't get put in the right place, or something. [SYNTHETIC STAIR STEP SOUNDS] And it would sound even worse if this was not an auditorium.
Here's what happens with music. [SALSA MUSIC PLAYING] And the synthetic version. [SALSA MUSIC PLAYING] And this is what happens with speech. [MALE VOICE 1] A boy fell from the window. The wife helped her husband. Big dogs can be dangerous. Her-- [INAUDIBLE]. All right, OK, so in some sense, this is the most informative thing that comes out of this whole effort, because, again, it makes it really clear what you don't understand-- right. And in all these cases, it was really not obvious, a priori, that things would be this bad. I actually thought it was sort of plausible that we might be able to capture pitch with some of these statistics. Same with reverb and certainly some of these simple rhythms. I kind of thought that some of the modulation filter responses and their correlations would give this to you. And it's not until you actually test this with synthesis that you realize how bad this is, right? And so this really kind of tells you that there's something very important that your brain is measuring that we just don't yet understand and hasn't been built into our model. So it really sort of identifies the things you need to work on. OK, so just take-home messages from this portion of the lecture. So I've argued that sound synthesis is a powerful tool that can help us test and explore theories of audition and that the variables that produce compelling synthesis are things that could plausibly underlie perception. And, conversely, that synthesis failures are things that point the way to new variables that might be important for the perceptual system. I've also argued that textures are a nice point of entry for real-world hearing. I think what's appealing about them is that you can actually work with actual real world-like signals and all of the complexity that at least exists in that domain. And, yet, work with them and generate things that you feel like you can understand. And I've argued that many natural sounds may be recognized with relatively simple statistics of early auditory representation. So the very simplest kinds of statistical representations that you might construct, that capture things like the spectrum-- well, that on its own is not really that informative. But if you just go a little bit more complex and into the domain of marginal moments and correlations, you get representations that are pretty powerful. And finally, I gave you some evidence that for textures of moderate length, statistics may be all that we retain. So there are a lot of interesting open questions in this domain. So one of the big ones, I think, is the locus of the time-averaging. So I told you about how we've got some evidence in the lab that the time scale of the integration process for computing statistics is on the order of several seconds. And that's a really long time scale relative to typical time scales in the auditory system. And so where exactly that happens in the brain, I think, is very much an open question and kind of an interesting one. And so we'd like to sort of figure out how to get some leverage on that. There's also a lot of interesting questions about the relationship to scene analysis. So usually you're not hearing a texture in isolation. It's sort of the background to things that, maybe, you're actually more interested in-- somebody talking or what not. And so the relationship between these statistical representations and the extraction of individual source signals is something that's really open, and, I think, kind of interesting.
And then these other questions of what kinds of statistics you would need to account for some of these really profound failures of synthesis. OK, so actually one-- I think this might be interesting to people. So I'll just talk briefly about this. And then we're going to have to figure out what to do for the last 20 minutes. But one of the reasons, I think, I was requested to talk about this is because of the fact that there's been all this work on texture in the domain of vision. And so it's sort of an interesting case where we can kind of think about similarities and differences between sensory systems. And so back when we were doing this work-- as I said, this was joint work with Eero Simoncelli. I was a post-doc in his lab at NYU. And we thought it would be interesting to try to turn the kind of standard model of visual texture, which was done by Javier Portilla and Eero a long time ago, into sort of the same kind of diagram that I've been showing you. And so we actually did this in our paper. And so this is the one that you've been seeing the whole talk, right. So you've got a sound waveform-- a stage of filtering. This nonlinearity to extract the envelope and compress it. And then another stage of filtering. And then there are statistical measurements at kind of the last two stages of representation. And this is an analogous diagram that you can make for this sort of standard visual texture model. So we start out with an image-- like beans. There's center-surround filtering of the sort that you would find in the retina or LGN that filters things into particular spatial frequency bands. And so that's what you get here. So these are sub-bands again. Then there's oriented filtering of the sort that you might get via simple cells in V1. So then you get the sub-bands divided up even finer into both spatial frequency and orientation. And then there's something that's analogous to the extraction of the envelope that would give you something like a complex cell. All right, and so this is sort of local amplitude in each of these different sub-bands-- right. So you can see, here, the contrast is very high. And so you get a high response in this particular point in the sub-band. So, again, this is in the dimensions of space. So that's a difference, right-- it's an image. So you've got x and y-coordinates instead of time. But, again, there are statistical measurements, and you can actually relate a lot of them to the same functional form. So there's marginal moments just like we were computing from sound. In the visual texture model, there's an autocorrelation. So that's measuring spatial correlations, which we don't actually have in the auditory model. But then there are these correlations across different frequency channels. So this is across different spatial frequencies for things tuned to the same orientation. And this is across orientations and in the energy domain. So a couple of interesting points to take from this if you just sort of look back and forth between these two pictures. The first is that the statistics that we ended up using in the domain of sound are kind of late in the game. All right, so they're sort of after this non-linear stage that extracts amplitude. Whereas in the visual texture model, the nonlinearity happens here. And there's all these statistics that are being measured at these earlier stages before you're extracting local amplitude.
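[Editor's note: since both the auditory and the visual model boil down to marginal moments plus cross-channel correlations, one generic sketch covers the shared functional form. The input array could be subband envelopes over time or oriented subband amplitudes flattened over space; the statistic names are standard, the rest is illustrative.]

```python
import numpy as np
from scipy.stats import skew, kurtosis

def texture_stats(E):
    """E: (n_channels, n_samples) envelope or subband-amplitude values."""
    moments = np.stack([E.mean(axis=1), E.var(axis=1),
                        skew(E, axis=1), kurtosis(E, axis=1)], axis=1)
    corr = np.corrcoef(E)                    # cross-channel correlations
    return moments, corr[np.triu_indices_from(corr, k=1)]

E = np.abs(np.random.randn(30, 20000))       # stand-in compressed envelopes
m, c = texture_stats(E)
print(m.shape, c.shape)                      # (30, 4) moments, (435,) corrs
```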
And that's an important difference, I think, between sounds and images, in that a lot of the action in sound is in kind of the local amplitude domain. Whereas there's a lot of important structure in images that has to do with sort of local phase that you can't just get from local amplitude measurements. But at sort of a coarse scale, the big picture is that we think of visual texture as being represented with statistical measurements that average across space. And we've been arguing that sound texture consists of statistical computations that average across time. That said, as I was alluding to earlier, I think it's totally plausible that we should really think about visual texture as something that's potentially dynamic if you're looking at a sheet blowing in the wind or a mass of people moving in a crowd. And so there might well be statistics in the time domain as well that people just haven't really thought about. OK, so auditory scene analysis is, loosely speaking, the process of inferring events in the world from sound, right. So in almost any kind of normal situation, there is this sound signal that comes into your ears. And that's the result of multiple causal factors in the world. And those can be different things in the world that are making sound. As we discussed, the sound signal also interacts with the environment on the way to your ear. And so both of those things contribute. The classic instantiation of this is the cocktail party problem, where the notion is that there would be multiple sources in the world, and the signals from the two sources sum together into a mixture that enters your ear. And as a listener, you're usually interested in individual sources-- maybe one of those in particular, like what somebody that you care about is saying. And so your brain has to take that mixed signal and from that infer the content of one or more of the sources. And so this is the classic example of an ill-posed problem. And by that I mean that it's ill-posed because many sets of possible sounds add up to equal the observed mixture. So all you have access to is this red guy here, right? And you'd like to infer the blue signals, which are the true sources that occurred in the world. And the problem is that there are these green signals here, which also add up to the red signal. In fact, there's lots and lots and lots of these, right? So your brain has to take the red signal and somehow infer the blue ones. And so this is analogous to me telling you, x plus y equals 17-- please solve for x. And so, obviously, if you got this on a math test, you would complain because there is not a unique solution, right. You could have 1 and 16, and 2 and 15, and 3 and 14, and so on and so forth, right? But that's exactly the problem that your brain is solving all the time every day when you get a mixture of sounds. And the only way that you can solve problems of these sorts is by making assumptions about the sound sources. And the only way that you would be able to make assumptions about sound sources is if real-world sound sources have some degree of regularity. And in fact, they do. And one easy way to see this is by generating sounds that are fully random. And so the way that you would do this is you would have a random number generator-- you would draw numbers from that. And each of those numbers would form a particular sample in a sound signal. And then you could play that and listen to it, right. And so if you did that procedure, this is what you would get.
[SPRAY SOUNDS] All right, so those are fully random sound signals. And so we could generate lots and lots of those. And the point is that with that procedure, you would have to sit there generating these random sounds for a very, very long time before you got something that sounded like a real-world sound, right? Real world sounds are like this. [ENGINE SOUND] Or this-- [DOOR BELL SOUND] Or this-- [BIRD SOUND] Or this-- [SCRUBBING SOUND] All right, so the point is that the set of sounds that occur in the world are a very, very, very small portion of the set of all physically realizable sound waveforms. And so the notion is that that's what enables you to hear: it's the fact that you've internalized the fact that the structure of real-world sounds is not random, such that when you get a mixture of sounds, you can actually make some good guesses as to what the sources are. All right, so we rely on these regularities in order to hear. So one intuitive view of inferring a target source from a mixture like this is that you have to do at least a couple things. One is to determine the grouping of the observed elements in the sound signal. And so what I've done here is, for each of these-- this is that cocktail party problem demo that we heard at the start. So we've got one speaker-- two, three, and then seven. And in the spectrograms, I've coded the pixels either red or green, where the pixels are coded red if they come from something other than the target source, right. So this stuff up here is coming from this additional speaker. And then the green bits are the pixels in the target signal that are masked by the other signal-- where the other signal actually has higher intensity. And so one notion is that, well, you have to be able to tell that the red things actually don't go with the gray things. But then you also need to take these parts that are green, where the other source is actually swamping the thing you're interested in, and then estimate the content of the target source. That's at least a very sort of naive intuitive view of what has to happen. And in both of these cases, the only way that you can do this is by taking advantage of statistical regularities in sounds. So one example of a regularity that we think might be used to group sound is harmonic frequencies. So voices and instruments and certain other sounds produce frequencies that are harmonics, i.e., multiples of a fundamental. So here's a schematic power spectrum of what might come out of your vocal cords. So there's the fundamental frequency here. And then all the different harmonics. And they exhibit this very regular structure. Here, similarly, this is A440 on the oboe. [OBOE SOUND] So the fundamental frequency is 440 hertz. That's concert A. But if you look at the power spectrum of that signal, you get all of these integer multiples of that fundamental. All right, and so the way that this happens in speech is that your vocal cords, which open and close in this periodic manner, generate a series of sound pulses. And in the frequency domain, that translates to harmonic structure. Not going to go through this in great detail. Hynek's going to tell you about speech. All right, and so there's some classic evidence that your brain uses harmonicity as a grouping cue, which is that if you take a series of harmonic frequencies and you mistune one of them, your brain typically causes you to hear that as a distinct sound source once the mistuning becomes sufficient.
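[Editor's note: a complex tone with one mistuned harmonic, like the one used in the classic demonstration that follows, is simple to synthesize. In this sketch the fundamental, harmonic count, and 4% shift are arbitrary illustrative choices; write the array to a WAV file to hear the mistuned component pop out.]

```python
import numpy as np

def complex_tone(f0, n_harm, fs, dur, mistuned=None, shift=0.0):
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0 * ((1.0 + shift) if k == mistuned else 1.0)
        x += np.sin(2 * np.pi * f * t)
    return x / n_harm

fs = 16000
tone = complex_tone(200.0, 10, fs, 1.0, mistuned=3, shift=0.04)  # 4% mistuning
# e.g. scipy.io.wavfile.write("mistuned.wav", fs, tone.astype(np.float32))
```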
And here's just a classic demo of that. [MALE VOICE 2] Demonstration 18-- isolation of a frequency component based on mistuning. You are to listen for the third harmonic of a complex tone. First, this component is played alone as a standard. Then over a series of repetitions, it remains at a constant frequency while the rest of the components are gradually lowered as a group in steps of 1%. [BEEPING SOUNDS] [MALE VOICE 2] Now after two-- OK, and so what you should have heard-- and you can tell me whether this is the case or not-- is that as this thing is mistuned, at some point, you actually start to hear, kind of, two beeps. All right, there's the main tone and then there's this other little beep, right. And if you did it in the other direction, it would then reverse. OK, so one other consequence of harmonicity is-- and somebody was asking about this earlier-- that your brain is able to use the harmonics of a sound in order to infer its pitch. So the pitch that you hear when you hear somebody talking is like a collective function of all the different harmonics. And so one interesting thing that happens when you mistune a harmonic is that for very small mistunings, it initially causes a bias in the perceived pitch. And so that's what's plotted here. So this is a task where somebody hears this complex tone that has one of the harmonics mistuned by a little bit. And then they hear another complex tone. And they have to adjust the pitch of the other one until it sounds the same. All right, and so what's being plotted on the y-axis in this graph is the average amount of shift in the pitch match as a function of the shift in that particular harmonic. And for very small mistunings of a few percent, you can see that there's this linear increase in the perceived pitch. All right, so mistuning that harmonic causes the pitch to change. But then once the mistuning exceeds a certain amount, you can actually see that the effect reverses. And the pitch shift goes away. And so we think what's happening here is that the mechanism in your brain that is computing pitch from the harmonics somehow realizes that one of those harmonics is mistuned and is not part of the same thing. And so it's excluded from the computation of pitch. So the segregation of those sources then somehow happened prior to, or at the same time as, the calculation of the pitch. Here's another classic demonstration of sound segregation related to harmonicity. This is called the Reynolds-McAdams oboe-- a collaboration between Roger Reynolds and Steve McAdams. There's a complex tone-- and what's going to happen here is that the even harmonics-- two, four, six, eight, et cetera-- will become frequency modulated in a way that's coherent. And so, initially, you'll hear this kind of one thing. And then it will sort of separate into these two voices. And it's called the oboe because the oboe is an instrument that has a lot of power at the odd harmonics. And so you'll hear something that sounds like an oboe along with something that, maybe, is like a voice that has vibrato. [OBOE AND VIBRATO SOUNDS] Did that work for everybody? So all these things are being affected in kind of interesting ways by the reverb in this auditorium, which will-- yeah, but that mostly works. So we've done a little bit of work trying to test whether the brain uses harmonicity to segregate actual speech. And so very recently, it's become possible to manipulate speech and change its harmonicity.
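[Editor's note: the Reynolds-McAdams oboe construction can be sketched the same way-- apply a coherent vibrato to the even harmonics only and leave the odd ones steady. The 5 Hz rate and 1% depth here are guesses for illustration; the effect is that the even harmonics fuse into one vibrato voice and the odd ones into another.]

```python
import numpy as np

fs, dur, f0 = 16000, 3.0, 220.0
t = np.arange(int(fs * dur)) / fs
vibrato = 1.0 + 0.01 * np.sin(2 * np.pi * 5.0 * t)     # 5 Hz, 1%-deep FM
x = np.zeros_like(t)
for k in range(1, 13):
    rate = vibrato if k % 2 == 0 else np.ones_like(t)  # FM on even harmonics
    phase = 2 * np.pi * np.cumsum(k * f0 * rate) / fs  # integrate frequency
    x += np.sin(phase)
x /= np.max(np.abs(x))
```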
And I'm not going to tell you in detail how this works. But we can resynthesize speech in ways that are either harmonic, like this. This sounds normal. [FEMALE VOICE 2] She smiled and the teeth gleamed in her beautifully modeled olive face. But we can also resynthesize it so as to make it inharmonic. And if you look at the spectrum here, you can see that the harmonic spacing is no longer regular. All right, so we've just added some jitter to the frequencies of the harmonics. And it makes it sound weird. [FEMALE VOICE 2] She smiled and the teeth gleamed in her beautifully modeled olive face. But it's still perfectly intelligible, right. And that's because the vocal tract filtering that I think Hynek is probably going to tell you about this afternoon remains unchanged. And so the notion here is that if you're actually using this harmonic structure to kind of tell you what parts of the sound signal belong together-- well, if you've got a mixture of two speakers that were inharmonic, you might think that it would be harder to understand what was being said. So we gave people this task where we played them words, either one word at a time, or two concurrent words. And we just asked them to type in what they heard. And then we just score how much they got correct. And we did this with a bunch of different conditions where we increased the jitter. So there's harmonic-- [MALE VOICE 3] Finally, he asked, do you object to petting? I don't know why my RA has this example. But, whatever-- it's taken from a corpus called TIMIT that has a lot of weird sentences. [MALE VOICE 3] Finally, he asked, do you object to petting? Finally, he asked, do you object to petting? Finally, he asked, do you object to petting? All right, so it kind of gets stranger and stranger sounding, then it bottoms out. These are ratings of how weird it sounds. And these are the results of the recognition experiment. And so what's being plotted is the mean number of correct words as a function of the deviation from harmonicity. So 0 here is perfectly harmonic, and this is increasing jitter. And so the interesting thing is that there's no effect on the recognition of single words, which is below ceiling, because these are single words that are excised from sentences. And so they are actually not that easy to understand. But when you give people pairs of words, you see that they get worse at recognizing what was said. And then the effect kind of bottoms out. So this is consistent with the notion that your brain is actually relying, in part, on the harmonic structure of the speech in order to pull, say, two concurrent speakers apart. And the other thing to note here, though, is that the effect is actually pretty modest, right. So you're going from, I don't know, this is like, 0.65 words correct on a trial down to 0.5. So it's like a 20% reduction. And the mistuning thing also works with speech. This is kind of cool. So here we've just taken a single harmonic and mistuned it. And if you listen to that, I think this is this-- you'll basically-- you'll hear the spoken utterance. And then it will sound like there's some whistling sound on top of it. Because that's what the individual harmonic sounds like on its own. [FEMALE VOICE 3] Academic aptitude guarantees your diploma. So you might have been able to hear-- I think this is the-- [WHISTLING SOUND] That's a little quiet. But if you listen again. [FEMALE VOICE 3] Academic aptitude guarantees your diploma. Yeah, so there's this little other thing kind of hiding there in the background.
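[Editor's note: the jittering manipulation is, at its core, just a displacement of each component away from an exact multiple of the fundamental. The sketch below applies it to a bare complex tone rather than to resynthesized speech-- the real manipulation also preserves the vocal tract filter, which this toy version ignores.]

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur, f0, n_harm = 16000, 1.0, 150.0, 20
t = np.arange(int(fs * dur)) / fs
jitter = 0.3                        # max displacement, as a fraction of f0
x = np.zeros_like(t)
for k in range(1, n_harm + 1):
    f = k * f0 + rng.uniform(-jitter, jitter) * f0   # break harmonic spacing
    x += np.sin(2 * np.pi * f * t)
x /= n_harm
```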
But it's kind of hard to hear. And that's probably because, particularly in speech, there's all these other factors that are telling you that thing is speech and that it belongs together. And, all right, let me just wrap up here. So there's a bunch of other demos of this character that I could kind of give you-- I could tell you about. Another thing that actually matters is repetition. So if there's something that repeats in the signal, your brain is very strongly biased to actually segregate that from the background. So this is a demonstration of that in action. So what I'm going to be presenting you with is a sequence of mixtures of sounds that will vary in how many there are. And then at the end, you're actually going to hear the target sound. So if I just give you one-- [WHOPPING SOUND] All right, it doesn't sound-- the sound at the end doesn't sound like what you heard in the first thing. But, here, you can probably start to hear something. [WHOPPING SOUND] And here, you'll hear more. [WHOPPING SOUND] And here, it's pretty easy. [WHOPPING SOUND] All right, so each time you're getting one of these mixtures-- and if you just get a single mixture, you can't hear anything, right. But just by virtue of the fact that there is this latent repeating structure in there, your brain is actually able to tell that there's a consistent source and segregates that from the background. I started off by telling you that the only way that you can actually solve this problem is by incorporating your knowledge of the statistical structure of the world. And, yet, so far the way that the field has really moved has been to basically just use intuitions. And so people would look at spectrograms and they say, oh yeah, there's harmonic structure. There's common onset. And so then you can do an experiment and show that has some effect. But what we'd really like to understand is how these so-called grouping cues relate to natural sound statistics. We'd like to know whether we're optimal given the nature of real world sounds. We'd like to know whether these things are actually learned from experience with sound-- whether you're born with them. The relative importance of these things relative to knowledge of particular sounds like words. And so this-- I really regard this stuff as in its infancy. But I think it's really kind of wide open. And so the sort of take-home messages here are that there are grouping cues that the brain uses to take the sound energy that comes into your ears and assign it to different sources, and these are presumed to be related to statistical regularities of natural sounds. Some of the ones that we know about are, chiefly, harmonicity and common onset and repetition. I didn't really get to this. But we also know that the brain infers parts of source signals that are masked by other sources, again, using prior assumptions. But we really need a proper theory in this domain, I think, both to be able to predict and explain real world performance. And also, I think, to be able to relate what humans are doing in this domain to the machine algorithms that we'd like to be able to develop to sort of replicate this sort of competence. And the engineering-- there was sort of a brief period of time where there were some people in engineering that were kind of trying to relate things to biology. But by and large, the fields have sort of diverged. And I think they really need to come back together. And so this is going to be a good place for bright young people to work. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_75_Hynek_Hermansky_Auditory_Perception_in_Speech_Technology_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HYNEK HERMANSKY: So we have this wanted information and unwanted information. I call unwanted information noise and wanted information signal. Not all noises are created equal. There are some noises which are partially understood, and I claim this is what we should strip off very quickly, like linear distortions or speaker dependencies. Those are two things I will be talking about. You can easily do it in feature extraction. There are noises which are expected, but whose effects may not be well understood. These should go into machine land-- this is something where, whatever you don't know, you better have the machine learn it. It's always better to use a dumb machine with a lot of training data than putting in something which you don't know for sure. But what you know for sure, it should go here. And then there is an interesting set of noises which you don't even know exist. These are the ones which I'm especially interested in, because they cause us the biggest problems-- basically, noises you don't know exist, noises which somebody introduces that have never been talked about, and so on, and so on. So I think this is an interesting problem. Hopefully, I will get to it towards the end of the talk-- at least a little bit. So, some noises with known effects. One is like this. When you have a-- you have a speech-- oh. VOICE RECORDING: You are yo-yo. HYNEK HERMANSKY: And you have another speech which looks very different. VOICE RECORDING: You are yo-yo! HYNEK HERMANSKY: But it says the same thing, right? I mean, this is a child. This is an adult. And you can tell, this was me. This was my daughter when she was 4-- not 30. The problem is that different human beings have different vocal tracts. Especially when it comes to children-- the vocal tract is much, much shorter. And I was showing you the effects: what happens is that you get a very different set of formants, like these dark lines, which a number of people believe we should look at if we want to understand what's being said. We have four formants here, but we have only two formants here. They are in approximately similar positions, but where you had the fourth formant, you have only the second formant here. So what we want-- we want some techniques which would work more like human perception: not look at the spectral envelopes, but mainly look at the whole clusters. So this is a technique which was developed a long time ago, but I still mention it because it's an interesting way of going about things. So it uses several things. One is that it suppresses the signals at low frequencies. You basically use this equal loudness curve. So you emphasize the parts of the signal which are heard well. The second one is that it uses critical bands. Because you say, the first step which you want to do is to integrate over the critical band. This is the simplest way of processing within the band-- you integrate what's happening inside. So what you do, you take your Fourier spectrum.
This is the spectrum which has equal frequency resolution at all frequencies and a lot of detail-- in this case, because of the fundamental frequency. And here you integrate over these different frequency bands. They are narrower at low frequencies and getting broader, and broader, and broader, very much how we learned from the experiment with the simultaneous masking. So this is textbook knowledge. So you get a different spectrum, which is unequally sampled. So, of course, you go back into the equal sampling, but you know that there are fewer samples at the high frequencies, because you are integrating more spectral energy at high frequencies than at low frequencies. And you multiply these outputs by these equal loudness curves. So from the spectrum you get something which has a resolution which is more auditory-like. Then you put it through the equal loudness curves, because you know the loudness depends on a cubic root of intensity. So you get a modified spectrum. And then you find some approximation to this spectrum-- auditory spectrum-- saying, I don't think that all these details still have to be important. I would like to have some control of how much spectral detail I want to keep in. So the whole thing looks like-- let's start with the spectrum. You go through a number of steps. And you end up with a spectrum which is, of course, related to the original spectrum, but it's much simpler. So we eliminated information about fundamental frequency. We merged a number of formants, and so on, and so on. So we follow our philosophy: leave out the stuff which you think may not be important. You don't know how much stuff you should leave out. So if you don't know something, and you are an engineer, you run an experiment. You know, research is what I'm doing when I don't know what I'm doing-- supposedly Wernher von Braun or somebody was saying that. So we didn't know how much smoothing we should do if we want to have a speaker-independent representation. So we ran an experiment for a number of smoothings-- a number of complex poles, telling you how much smoothing you get through the autoregressive model. And there was a very distinct peak in the situations where we had the training templates coming from one speaker and the test coming from another speaker. Then we used this kind of representation in speech recognition-- I mean, to derive the features from the speech. Suddenly, these two pictures start looking much more similar, because what this technique is doing is basically interpreting the spectrum in a way the hearing might be doing. It has much lower resolution than normally people would use. It has only two peaks, right? But it was good enough for speech recognition. What was more interesting-- a little bit of interesting science-- is we also found that the difference between production of the adults and the children might be just in the length of the pharynx. This is the back part of the vocal tract; the children may be producing speech in such a way that they are already putting the-- [AUDIO OUT] constriction into the right position against the palate. And because they know-- or they-- well, whatever-- mother nature taught them that the pharynx will grow over the lifetime. But the front part of the vocal tract is going to be similar. So it is the front cavity which is speaker independent, and it is the back cavity, the rest of the vocal tract, which may be introducing speaker dependencies.
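[Editor's note: the chain of steps just described-- power spectrum, integration over critical bands that widen with frequency, equal-loudness weighting, cubic-root compression-- condenses to a few lines. The Bark conversion below is a standard approximation, and the equal-loudness term is a deliberately crude stand-in, so treat this as a sketch of the idea rather than the exact analysis in the lecture.]

```python
import numpy as np

def hz_to_bark(f):
    return 6.0 * np.arcsinh(f / 600.0)        # a common Bark approximation

def auditory_spectrum(frame, fs, n_bands=17):
    P = np.abs(np.fft.rfft(frame)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    edges = np.linspace(0.5, hz_to_bark(fs / 2) - 0.5, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (hz_to_bark(freqs) >= lo) & (hz_to_bark(freqs) < hi)
        bands.append(P[sel].sum() + 1e-12)    # integrate within each band
    centers = 600.0 * np.sinh((edges[:-1] + edges[1:]) / 12.0)
    eql = centers**4 / (centers**2 + 1.6e5) ** 2   # crude equal-loudness term
    return (eql * np.array(bands)) ** 0.33    # cubic-root compression

frame = np.random.randn(400)                  # one 25 ms frame at 16 kHz
print(auditory_spectrum(frame, 16000))
```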
It's quite possible-- if you ask people who have been trained-- like actors-- how they are being trained to generate the different voices, they are being trained to modify the back part of the vocal tract, which normally we don't know how to do. But there is some circumstantial evidence that this might be at least partially true. So what is nice is that when we synthesized the speech and we made sure that the front cavity was always in the same place, even when the formants were in different positions, we were getting very similar results. So we have this theory. The message is encoded in the shape of the front cavity. Through speaker-dependent vocal tracts, you generate the speech spectrum with all the formants. But then there comes the speech perception part, which extracts what is called the perceptual second formant. Don't worry about that. Basic-- [AUDIO OUT] on the-- at most, two peaks from the spectrum. And this is being used for decoding of the signal, speaker independently. However, I told you one thing, which is don't use the textbook data and be the exact-- [AUDIO OUT] And so I was challenged by my friend, the late Professor Fred Jelinek. He is claimed to have said, airplanes don't flap wings, so why should we be putting the knowledge of the hearing in-- actually, he said something quite different. This is what The New York Times quoted after he passed away, because that was supposedly one of his famous quotes. No, he said something else. Well, airplanes do not flap wings, but they have wings nevertheless. They use some knowledge from nature in order to get the job done. The flapping of the wings is not important. Having the wings is important if you want to create a machine which is heavier than the air and flies. So we should try to include everything that we know about human perception, and production, and so on. However, we need to estimate the parameters from the data, because-- don't trust the textbooks and that sort of thing. You have to derive it in such a way that it is relevant to your task. What I wanted to say-- you can use the data to derive similar knowledge. And I want to show it to you. What you can do is to use a technique, again known from the '30s, called Linear Discriminant Analysis. This is the statistician's friend. For this you need a within-class covariance and a between-class covariance matrix. You need labeled data. And you need to make some assumptions, which it turns out are not very critical. But I mean, they are approximately satisfied when you are working with the spectra. So what we did was we would take this spectrogram, and we would generate the spectral vectors from that. So we would always cut out a part of the spectrogram-- a short-term spectrum-- and we would assign it a label according to which part of speech it came from. So this one would have the label "yo," right? And so you get a big box full of vectors. All of them are labeled. So you can do LDA. And you can look at what the discriminants are telling you. From LDA, you get the discriminant matrix, and each of the rows-- columns of the-- column or row-- whatever-- creates a basis on which you should project the whole spectrum, right? These are the four obvious ones here. You also have the amount of variability present in this discriminant matrix which you started with. What you observe, which is very interesting, is that these bases tend to project the spectrum at the beginning with more detail than the spectrum at the end.
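[Editor's note: the LDA computation itself is compact-- within-class and between-class scatter matrices from labeled vectors, then a generalized eigenproblem whose leading eigenvectors are the discriminant bases. The data below are random stand-ins for labeled spectral vectors; with real spectra, plotting the leading basis vectors would show the widening zero-crossing structure described next.]

```python
import numpy as np
from scipy.linalg import eigh

def lda_bases(X, y):
    """X: (n_samples, n_dims) spectral vectors; y: integer class labels."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))  # within-class scatter
        d = (Xc.mean(0) - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)                      # between-class scatter
    # generalized eigenproblem Sb v = lambda Sw v; top eigenvectors = bases
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

X = np.random.randn(500, 15)            # stand-in "spectral" vectors
y = np.random.randint(0, 5, size=500)   # stand-in phoneme labels
vals, bases = lda_bases(X, y)
print(bases[:, 0])                      # first discriminant basis vector
```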
So essentially, they tend-- they appear-- in the first group, they appear to be emulating properties of human hearing, with some of the well-known properties of human hearing, namely non-equal spectral resolution, being verified in many, many ways. Among them was one-- I was showing you this masking experiment of Harvey Fletcher. There is a number of reasons to believe that this is a good thing. This is what you see. So essentially, if you look at the zero crossings of these bases-- this is the first basis-- they are getting broader and broader. So you are integrating more and more spectrum, right? This is all right-- so that I leave it. Oh, this is from another experiment with a very large database-- very much a very similar thing. Eigenvalues quickly decay. And what is interesting-- you can actually formally ask, "What is your resolution?" by doing what is called perturbation analysis. So you take some signal-- say, a Gaussian, here. And you project this on this LDA basis. Then you perturb it. You move it. And you ask, how much effect does this movement of this, say, emulated spectral element of the speech cause at the output-- seen as the output of this projection on these many bases? And what you see is, as I was suggesting, spectral sensitivity to the movements of the formant is much higher at the beginning of the spectrum and much less at the end of the spectrum. You can actually compare it to what we had initially in the PLP analysis, when we integrated the spectrum based on the knowledge coming from the textbook. And it's very much the same. If there were just plain cosine bases computing the mel cepstrum, sensitivity would be the same at all frequencies. But these bases from the LDA are very much doing the thing which critical band analysis would be doing. And so this is-- you can look at it. It was a PhD thesis from Oregon Graduate Institute by Naren Malayath, who is now a big-- you better be friends with him. He's at Qualcomm. I think he's the head of the Image Processing department. [COUGHS] We better-- better be good friends with him. [INAUDIBLE] [LAUGHTER] OK. Another problem: linear distortions. Linear distortions once were a problem. Now it's not a problem anymore, but in the old days, this was a problem. The problem shows up in a rather dramatic way, in the following way. Here we have one sound. VOICE RECORDING: Beat. HYNEK HERMANSKY: Beat. So "buh ee- tuh." Here is the very distinct E. Every phonetician would agree. This is E-- high formant, cluster of high formants, and so on. Some vicious person, namely one of my graduate students, took this spectral envelope, designed a filter which is exactly the inverse of that, and put this speech through this inverse filter so it looked like this. This was the spectrum where there were nine formants-- now it's entirely flat. And if you listen to it, you've probably already guessed what you will hear. VOICE RECORDING: Beat. HYNEK HERMANSKY: You'll hear the first speech, right? But you-- VOICE RECORDING: Beat. HYNEK HERMANSKY: It's OK when this-- oops! That's what you would-- sorry. VOICE RECORDING: Beat. Beat. Beat. HYNEK HERMANSKY: But whoever doesn't hear E-- don't spoil my talk. I mean, I think everybody has to hear E, even though any phonetician would get very upset. Because they would say this is not E. Because, of course, what is happening is that human perception is taking the percept relative to these neighboring sounds, right?
And since we filtered everything with the same filter, the relative percept is still the same. So this is something which we needed to put into our machine. And we did. Signal-processing-wise, this is very straightforward, because the signal is actually the speech convolved with the impulse response of the environment. So in the logarithmic spectral domain -- this is the signal processing stuff -- you have the logarithmic spectrum of the speech plus the logarithmic spectrum of the environment, which is fixed. So what we're finding here is that if you somehow remove this environment, or if you make yourself invariant to it, then you may be winning. The problem is that at each frequency you have a different additive constant, because this is a spectrum, right? If it were just one constant at all frequencies, you would simply subtract it. But in this case you can use a trick. You remember what Josh told us this morning: hearing does spectral analysis. And what I was trying to tell you is that at each frequency, each critical band, the trajectory of the spectral energy is, in a first approximation, independent of the others. You can do independent processing in each frequency band -- and maybe not screw up too many things. So this was the step which we took. We said, OK, we will treat each spectral trajectory separately, but we will filter out the stuff which is not changing. So in each frequency channel, do independent processing. And the processing was that we would first take the logarithm, and then put each trajectory through a bandpass filter. The main thing is that it would suppress DC and slowly changing components -- mainly anything slower than one hertz. And it also turned out to be useful to suppress things faster than about 15 hertz. So what you get out: this was the original spectrogram; this was the modified spectrogram. This trajectory got a little bit smoother. Transitions got smoothed out because it was a bandpass filter; there were high-pass elements to it. Very much what we thought -- well, this is interesting. Maybe this is what human hearing might be doing. To tell you the truth, we didn't know. For the people who are from MIT and who work in image processing: it was inspired by some work on the perception of lightness, what David Marr called lightness. Here was the type of thing which I told you about. David Marr was talking about processing in space. We applied it to processing in time. But it was still good enough, so that we definitely got rid of the problem. So here it is. The spectrograms, which looked very different, suddenly start looking very similar. I was glad to see this. Remember, I'm an engineer. I was working for a telephone company at the time. It also worked better on some problems which we had before. We had a severely mismatched environment, getting the training data from the labs and testing at US West, in Colorado. It didn't work at all; after this processing, everything was cool and dandy.
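[A sketch of the RASTA-style filtering just described: take the log energy trajectory in each critical band and band-pass it so that components slower than about 1 Hz (the fixed channel) and faster than about 15 Hz are suppressed. The coefficients below are the often-quoted classic RASTA filter; treat them as illustrative.]

```python
import numpy as np
from scipy.signal import lfilter

def rasta_filter(log_spectrogram):
    """log_spectrogram: (n_frames, n_bands) log energies, ~100 frames/s."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # band-pass numerator
    a = np.array([1.0, -0.98])                        # slow integrator pole
    # Each band's temporal trajectory is filtered independently, mirroring
    # the assumption that critical bands can be processed separately.
    return lfilter(b, a, log_spectrogram, axis=0)

# Because a fixed linear channel adds a constant to each band's log energy,
# and this filter has zero gain at DC (the numerator taps sum to zero),
# the channel term is removed.
```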
OK. So now we have RASTA, and we can do the same LDA trick. How about that? You take the spectro-temporal vectors, and you label each of these vectors by the label of the phoneme which is in the center of the trajectory. And just to have some fun, we took a rather long vector -- about one second. And we said, well, what kind of projections would these temporal trajectories go onto if we wanted to get rid of environment-dependent information? Well, these were the impulse responses; these were the frequency responses. Because in this case you get FIR filters. These discriminants are FIR filters which are to be applied to the temporal trajectories of spectral energies, because it's basically a projection of the trajectory on the basis, and the basis is one second long. This is the impulse response. It cannot be active all that long, because eventually the values become zero, right? Where they are zero, you do nothing to the signal. You can see the active part is about a couple of hundred milliseconds, maybe a little bit more. And these are bandpass filters -- essentially passing frequencies between 1 hertz and 10, 15 hertz -- very similar at all frequencies. There was another thing we were very interested in: should we really do different things at different frequencies? The answer is pretty much no. And so that was very exciting. Well, anyway, let me tell you about yet another experiment, which is going to be presented next week. We wanted to move into the 21st century, so we did a convolutional neural network. And our convolutional network is maybe not what you are used to, with 2D convolutions. We just said, we will have a 1D filter as the first processing step in this deep neural network. So we postulated the filter at the input to the neural network. But in this case, we trained the whole thing together -- it wasn't just LDA and that sort of thing. We forced all the filters at all frequencies to be the same, because we expected that's what we want to get. And we were asking how these filters look, the ones which come from the convolutional neural network. Well, again, I wouldn't be showing it if it wasn't somehow supportive of what I want to say. They don't look all that different from what we were guessing from LDA. They definitely enhance the important modulation frequencies around four hertz, right? They are passing a number of bands -- I'm showing three here, which are somewhat arbitrary, and we use 16 of them; most of them look like that -- passing between 1 and 10 hertz in the modulation spectral domain, so changes which happen 1 to 10 times a second. It's coming out in a paper, so you can look it up if you want. The last thing which I still wanted to do -- I said, well, maybe it has something to do with hearing after all. We were deriving everything from speech. There was no knowledge about hearing, except that we said we think we should be looking at long segments of the signal, and we expected that this filtering would be very much the same at all frequencies. Actually, not even that -- it came out automatically. There wasn't much knowledge from human hearing-- [AUDIO OUT] In the first one, when I was showing you critical-band spectral resolution, we started from the full Fourier spectrum. We didn't tell it anything about human hearing. And what comes out is a property of human hearing.
I mean, tell me if there is another such strong piece of evidence that speech is processed in a way that fits human hearing, because the only thing which was used here was the speech, labeled into the classes which we use for recognizing it -- speech sounds. So what we did -- that was with Nima Mesgarani and my students -- we took a number of these cortical receptive fields, which we talked about a little bit before -- about 2,000 or 3,000 of them. We basically spread them on the floor at the University of Maryland and computed principal components from these fields, in both the spectral and temporal domains. Here I'm showing the temporal domain. How do they look? Very much like the RASTA filter. This is what is happening. It's a bandpass; the peak is somewhere around four hertz. Essentially, I'm showing you here what I understood might be a transfer function of the auditory cortex, derived with all the usual disclaimers: this is a linear approximation to the receptive fields, there might have been problems with collecting the data, and so on, and so on. But this is what we get as a possible transfer function of auditory cortex. I'm doing fine with the time, right? So you can do an experiment here. You can actually generate speech which has certain rates of change eliminated -- by doing all this: computing the cepstrum, filtering each trajectory, and reconstructing the speech. And you ask people, what do they hear? How well do they recognize the speech? You can also ask a machine, "Do you recognize it?" For this you don't have to regenerate the speech; you just use the LPC cepstrum. This is the full experiment -- this is called a residual-excited LPC vocoder, but it's modified in such a way that you artificially slow down or modify the temporal trajectories. If there is no filter, you get a replica of the original signal here. The bottom line of the experiment: if you start removing components which are somewhere between 1 and 16 hertz, you are hurt significantly. You are hurt most in performance when you remove components between 2 and 4 hertz. That's where you take the biggest hit. Here we are showing how much these bands contribute to recognition performance by humans -- the white bars -- and by a speech recognizer -- the black bars. So you can see that in machine speech recognition you can safely remove the stuff between 0 and 1 hertz. It's not going to hurt you; it only helps you in this task. In speech perception there is a little bit of a hit, but certainly not as much of a hit as you get when you move to the part where you hear the-- [AUDIO OUT] And certainly the components higher than 16 or 20 hertz are not important. Homer Dudley already knew that in 1930, when he was designing his vocoder. But it was a nice experiment. It just came out, so you can look it up if you want to have a go. Just to summarize what I told you so far: Homer Dudley was telling us that the information about the message is in the slow modulations, slow movements of the vocal tract, which modulate the carrier; information about the message is in slow modulations of the signal -- slow changes of the speech signal in individual frequency bands. Slow modulations imply long impulse responses, right?
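[A hedged sketch of the modulation-filtering experiment described above: remove a chosen band of modulation frequencies from each band's envelope and see what survives. The FFT-based zeroing is one simple way to do it; it is illustrative, not the vocoder actually used.]

```python
import numpy as np

def remove_modulation_band(envelope, fs_frames, f_lo, f_hi):
    """Zero out modulation components between f_lo and f_hi Hz in one
    temporal trajectory (envelope), sampled at fs_frames frames per second."""
    spectrum = np.fft.rfft(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs_frames)
    spectrum[(freqs >= f_lo) & (freqs <= f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(envelope))

# In the experiment above, removing the 2-4 Hz band hurts both human and
# machine recognition the most, while removing 0-1 Hz barely hurts humans
# and actually helps the machine.
```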
So 5 hertz means something around 200 milliseconds -- my magic number, what the physiology seems to allow, which we have observed in the summation of sub-threshold signals and in temporal masking. And there are a number of things pointing to it, which I listed. Frequency discrimination improves with signal duration up to 200 milliseconds; below 200 milliseconds of signal, you don't get such good frequency discrimination. Loudness increases up to 200 milliseconds, then it stays constant; it depends on amplitude. The effect of forward masking -- I was showing you -- lasts about 200 milliseconds, independent of the amplitude of the masker. And sub-threshold integration shows it too. So I'm suggesting there seems to be some temporal buffer in human hearing at some level -- I suspect the cortical level -- which does the processing. Whatever happens within this buffer, it's fair to treat as one element. You can do filtering on it, you can integrate it -- basically all kinds of things. If things are happening outside this buffer, those parts should be treated-- [AUDIO OUT] in parts. So how does it help us? You remember the story about the phonemes. You remember that phonemes don't look like this, but like this. The length of the coarticulation pattern is about 200 milliseconds, perhaps more. What is good about it is that if you look at a sufficiently long segment of the signal, you get the whole coarticulation pattern in. And then you have a chance that your classifier is getting all the information about the speech sound, and you may have a chance to get a good estimate of the speech sounds. But you need to use these long temporal segments. And here -- I can say it even to YouTube -- I think we should claim full victory, because most speech recognition systems do it nowadays. They use long segments of the signal as a first step of the processing. So I can happily retire, telling my grandchildren: well, we knew it. We were the only ones, and we were certainly using it for a long time -- to the point that we even designed several techniques for what you do there. So this is classifying -- recognizing -- speech from the temporal patterns directly. We would take these long segments of speech through some processing, put neural nets on every temporal trajectory, trying to estimate the sound at each frequency -- each carrier frequency. Then we would fuse all these decisions from the different frequency bands, and use the final vector of posterior probabilities. Unlike what people do most often -- they just take the short-term spectra, and maybe now take a longer segment of this block of short-term spectra -- we said: the short-term spectrum is good for nothing. We just cut it into pieces, and we classify each temporal trajectory individually in the first step. I tell you this now because it may be useful later, when I will be telling you about dealing with some kinds of noises. But you understand what we did here, right? Instead of using the spectro-temporal blocks, we would be using temporal trajectories at each critical band, very much along the lines of what we think hearing is doing with the speech signal. The first thing hearing does is take the signal and subdivide it into individual frequency bands; then it treats each temporal trajectory coming from each of these cochlear filters to extract the information. And then it tries to figure out what to do with this information later, right?
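[A structural sketch of the band-wise temporal-pattern system just described: one estimator per critical band looks at a long (~1 s) trajectory and outputs phoneme posteriors; a merger combines the per-band posteriors. The linear "classifier" below is a placeholder for the neural nets actually used; all names and sizes are illustrative.]

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class BandClassifier:
    """Linear stand-in for the per-band neural net (illustrative only)."""
    def __init__(self, context_len, n_phones, rng):
        self.W = rng.normal(scale=0.01, size=(context_len, n_phones))
    def posteriors(self, trajectory):          # trajectory: (context_len,)
        return softmax(trajectory @ self.W)

def band_fused_posteriors(log_spectrogram, band_nets, merger_W, t, context=101):
    """Posterior estimate for the frame at time t."""
    half = context // 2
    per_band = []
    for b, net in enumerate(band_nets):
        traj = log_spectrogram[t - half:t + half + 1, b]  # one band, long context
        per_band.append(net.posteriors(traj))
    stacked = np.concatenate(per_band)          # fuse the band-wise decisions
    return softmax(stacked @ merger_W)
```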
Well, we have another technique called MRASTA, just for people who are interested in cortical modeling (a sketch appears at the end of this passage). You take the data and project it onto a number of projections with variable resolution. So you get a huge vector of data coming from different parts of the spectrum, and you feed it into the speech recognizer. The filters look like this. They have different temporal resolutions and spectral resolutions. We are pretty much integrating or differentiating over three critical bands -- following some of the filters coming from the old low-order PLP model -- with three-bark critical-band integration. So these ones look a bit like what people would call Gabor filters, but they are just put together, basically, from these two pieces, in time and in frequency -- different temporal resolutions enhancing different components of the modulation spectrum. Again, you may claim that this resembles the-- [AUDIO OUT] Josh was mentioning in the morning. [CLEARS THROAT] Cortical, of course -- I mixed up cochlear and cortical -- cortical filter banks, modulation filter banks. So there are some novel aspects in this type of processing I want to stress. It was novel in 1998; as I said, it is fortunately becoming less novel 15 years later. It uses a rather long temporal context of the signal as input. It already uses hierarchical neural nets -- so deep neural network processing, which wasn't around in 1998 -- with independent processing by a neural net estimator at each frequency. The only thing we didn't do at the time -- and I don't know how important it is; I don't think it hurts anybody -- is that we trained these parts of the system, this deep neural net, individually, and just concatenated the outputs. We never did training all together as we do now in convolutional nets and that sort of thing -- simply because we didn't even dream about doing that. We didn't have the hardware. That was one thing I tried to point out during the panel: a lot of progress in neural nets research, and the success of neural nets, comes from the fact that we now have very, very powerful hardware, which we didn't have. So we didn't dream about doing many things, even when they might have made sense. So, OK. Where are we? Oh, I see -- one more thing. Coarticulation. This is a problem which has been known since people started looking at spectrograms. There are some consonants, like "kuh" or "huh," which are very dependent on what's following. So "kuh" in front of "ee" -- "kee," "koo," "kah" -- we [AUDIO OUT] has a burst here; in front of "ooh" it has a burst here; and in front of "ah" there's a burst here. So the phonemes are very different depending on the environment. When you start using these long temporal segments and all the tricks, or some of the tricks, I showed you, what comes out are posteriograms in which one "kuh" looks almost the same as another "kuh." Since the system looks at the whole coarticulation pattern, the group of phonemes, in order to recognize the sound, it does the right thing. So I suspect that the success of these long temporal contexts, which people are using now in speech recognition, comes from the fact that they partially compensate for the problems with coarticulation. And what I also want to say: coarticulation is not really a problem. It just spreads the information over a long period of time. If you know how to suck it out, it can be useful.
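[Stepping back to the MRASTA filters mentioned above: a minimal sketch of such a multi-resolution temporal filter bank built from first and second derivatives of Gaussian windows with different widths, applied to each band's trajectory. The set of widths is an illustrative assumption.]

```python
import numpy as np

def mrasta_bank(widths_ms=(8, 16, 32, 64, 130), frame_ms=10, support=101):
    t = (np.arange(support) - support // 2) * frame_ms  # time axis in ms
    filters = []
    for w in widths_ms:
        g = np.exp(-0.5 * (t / w) ** 2)
        g1 = -t / w**2 * g                    # first Gaussian derivative
        g2 = (t**2 / w**4 - 1.0 / w**2) * g   # second Gaussian derivative
        for h in (g1, g2):
            filters.append(h / np.linalg.norm(h))
    return np.array(filters)  # (2 * len(widths), support)

# Convolving each critical-band trajectory with this bank gives features with
# different temporal resolutions; pairing them across a few neighboring bands
# yields the Gabor-like spectro-temporal patterns mentioned above.
```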
But it's a terrible thing if you start looking just at individual frequency slices of the short-term spectrum. So here's another deep net -- I don't know the name, sorry; it was already almost a real deep net. You estimate the posteriogram from a short window in the first step, about a 40 ms window. And then you take a long -- I mean, big -- window of those posteriors, put another neural net on it, and you get something much better, which also works better. Again, this is a mainstream technique nowadays, used in most of the DARPA systems. Oh, yes -- one more thing. I want to stress this one. [LAUGHS] I'm sorry, I didn't want to show it all at once. But anyway, I don't think there's anything terribly special about the short-term spectrum of speech. I think what really matters is how you process the temporal trajectories of the spectral energies. This is what human hearing is doing, and it seems to do a good job in our speech recognizers. So essentially, this is one message I want to leave: don't be afraid to treat different parts of the spectrum differently, individually -- you may get some advantages from it. It showed up early on, and it shows up over and over again. So go away from the short-term spectrum; start doing what hearing is doing -- start using the temporal trajectories of the spectral energies coming from your analysis. To the point that we did this work on going directly at it: don't get your time-frequency patterns from the short-term spectra; always think about how to get directly what you want. It turns out there is a nice way of directly estimating the Hilbert envelopes of the signal in frequency bands, called frequency domain linear prediction (a sketch follows after the listening demo below). [STATIC] This is Marios' PhD thesis, and we were working together for a couple of years. What you do: instead of using autoregressive modeling -- LPC modeling -- on the time signal, putting windows on the time signal to get frequency vectors, you do it on a cosine transform of the signal. So you move the signal into the frequency domain, and then you put the windows on this cosine transform of the signal. And you derive directly the all-pole approximations to the Hilbert envelopes of the signal in the sub-bands. You never do the Hilbert transform. You just use the usual techniques from autoregressive modeling; the only difference is-- [AUDIO OUT] on the cosine transform of the signal. And your windowing determines which frequency range you are looking at. Of course, what you typically do is use longer windows at higher frequencies and shorter windows at lower frequencies. You do all these things. But this is a convenient way. [COUGHS] It's convenient, and this is more and more like fun -- but maybe somebody might be interested in it. So essentially, what you do -- oops, sorry -- is take the signal and separate out the modulation, the AM component by which the signal is being modulated, from the carrier. The modulation carries the information about the message, and the other part is the carrier itself. And you can build what is called a channel vocoder, which we did. And you can listen to the signal. So this is -- in some ways it's interesting -- the original signal. VOICE RECORDING: They are both trend-following methods. HYNEK HERMANSKY: Oops.
I tried to make it somehow-- [AUDIO OUT] VOICE RECORDING: They are both trend-following methods. HYNEK HERMANSKY: Somebody may recognize Jim Glass from MIT in that. VOICE RECORDING: In an ideological argument, the participants tend to dump the table. HYNEK HERMANSKY: So this is silly, right? Now you can listen to what you get if you keep just the modulations and excite them with white noise. Oops. Sorry. Oops! What am I doing? Oh, here. VOICE RECORDING: (WHISPERING) They are both trend-following methods. HYNEK HERMANSKY: Do you recognize Jim Glass? I can. VOICE RECORDING: (WHISPERING) In an ideological argument the participants tend to dump the table. HYNEK HERMANSKY: And then you can also listen to what is left after you eliminate the message. VOICE RECORDING: Mm-hmm. Ha, ha. [LAUGHTER] HYNEK HERMANSKY: Maybe it's a male, right? VOICE RECORDING: Mm-mm [VOCALIZING] HYNEK HERMANSKY: Oh, this is fun. This is [CHUCKLES] fun. It may have some implications for speech recognition. But certainly, if I have ever seen a verification of what old Homer Dudley was telling us -- about where the message is -- this is it. All right? Anyway, what is good about this is that once you get an-- [AUDIO OUT] it's relatively easy to compensate for linear distortions. Because the main effect of linear distortions is basically shifting the energy in different frequency bands by different amounts. But all this information is in the gain of the model -- one parameter, which you essentially ignore after you do this frequency domain linear prediction. And you get a very similar trajectory for both. This is telephone speech and clean speech, which differed quite a bit. And I hope that I have -- oh, this is for reverberant speech. There seems to be some advantage there too, because reverberation is, to a first approximation, a convolution with the impulse response of the room. So if you use truly long segments -- in this case we used about 10 seconds of the signal, approximated it by this all-pole model, and eliminated the DC from it -- you seem to be getting some advantage-- [AUDIO OUT]
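[Returning to the frequency domain linear prediction described before the listening demo: a minimal sketch under the stated idea of applying autoregressive modeling to a cosine transform of the signal. The band indices, model order, and envelope length are illustrative choices, not values from the talk.]

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz

def fdlp_envelope(signal, band, order=40, n_env=512):
    """band: (lo, hi) indices into the DCT coefficient array."""
    coeffs = dct(signal, norm='ortho')       # move the signal to frequency domain
    sub = coeffs[band[0]:band[1]]            # 'windowing' picks the sub-band
    # Ordinary autoregressive (Yule-Walker) modeling, over frequency not time:
    r = np.correlate(sub, sub, mode='full')[len(sub) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    a = np.concatenate(([1.0], -a))          # all-pole coefficients
    # The all-pole 'spectrum' of the DCT coefficients is the temporal envelope.
    H = np.fft.rfft(a, 2 * n_env)
    return 1.0 / np.abs(H[:n_env]) ** 2

# Longer DCT windows at higher frequencies and shorter ones at lower
# frequencies, as mentioned above, just change the (lo, hi) ranges used here.
# The model gain, which absorbs a fixed linear channel, is simply ignored.
```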
So: known noise with unknown effects. I say, train the machine on it. Here is one example, right? You have phoneme error rates for a noise estimate. If everything is good -- clean training, clean test -- you have about a 20% phoneme error rate. This is a state-of-the-art, reasonable result. But once you start adding noise, things quickly go south. The typical way of dealing with it is multi-style training: if you know which noises you are going to deal with, you train on them. And things get better, but you pay some price. Certainly you pay a price on clean speech, because your model became much mushier, basically -- it's not a very sharp model anymore. So here we had a wonderful 21%, and we paid about 10% relative for getting this better performance on the noises. What we observed is that you get much better results -- most noticeably better results -- if you have a different recognizer for each type of noise. But of course the problem is that there are different types of noise, so you have this number of recognizers, and now you need to pick the best stream. How do you do that? This is something which I was mentioning earlier, something we are struggling with, and we don't know how to do it. If you are a human being, maybe you can just look at the output and keep switching until your message starts looking reasonable. But if you want to do it fully automatically -- I don't know why we only want to build fully automatic recognizers, but that's what we are doing -- you want the system to pick the best stream. So how do we do that? The first way, of course, is to recognize the type of noise. This is a typical system nowadays: you recognize the type of the noise and use the appropriate recognizer. BBN is doing it. My feeling is that it's somehow cleaner and more elegant to figure out which output is right, because what matters is not what the signal is, but how the signal interacts with the classifier. For this we have to figure out what "best" means. Here we have two posteriograms. If you look at them -- knowing that these are trajectories of the posteriors of the speech sounds -- you know this one is good and this one is not so good. Because the word is "nine" -- "ne-ine," "ne," right? Here there is a lot of garbage. So I would know that, and I want to do it automatically. Ideally, I would pick the stream which gives me the lowest error. But I don't know what the lowest error is, because I don't know what the correct answer is. That's the problem, right? So one way is to try to figure out which posteriogram is the cleanest. Another way follows this thinking: a neural net trained on something is going to work well on the data on which it was trained. So I have some gold-standard output, and I try to see how much my output differs when the test data are not like the data on which the recognizer was trained. We were using both of these tricks. The first one uses a technique like this. You look at the differences -- the KL divergence -- between posteriors a certain distance from each other, and you slide this window, cumulatively covering as much data as you possibly can. What you observe is that with good, clean data, this cumulative divergence keeps increasing; and after you cross the point where the coarticulation pattern ceases, you start getting a pretty much fixed, high cumulative KL divergence. With noisy data, the noise starts dominating these KL divergences and differences, because the signal carries the information, and the information is in the changes; noise produces output which doesn't have this segmental structure. So this is one technique we use. Another technique, which is even more popular now, at least in my lab, is training another unit -- an autoencoder -- on the output of the classifier, as the classifier is used on its own training data. The autoencoder then learns how, on average, the output of the classifier on its training data looks. Then we use it on the output of the classifier applied to unknown data. The autoencoder, as trained, tries to predict its input on its output -- that's how it is trained. So if the prediction is not very good, we say we are probably dealing with data for which the classifier is not good. That's how it works. I mean, it's honest.
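[A sketch of the first confidence measure described above: accumulate the divergence between posterior vectors a fixed lag apart. For clean speech the accumulated divergence keeps growing as the posteriors keep changing; noise-dominated posteriors behave differently. The symmetric KL form and the lag are illustrative choices.]

```python
import numpy as np

def sym_kl(p, q, eps=1e-10):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def cumulative_divergence(posteriors, lag=5):
    """posteriors: (n_frames, n_phones), each row summing to 1."""
    total, curve = 0.0, []
    for t in range(lag, len(posteriors)):
        total += sym_kl(posteriors[t], posteriors[t - lag])
        curve.append(total)
    return np.array(curve)

# Comparing these curves across parallel streams gives a label-free way to
# rank the streams and pick one.
```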
If you look at the output of a neural net applied to its training data, or to matched test data, the error is pretty much the same as on the training data. When you apply it to data for which the classifier wasn't trained, your error is, of course, much larger. So there is a double deep net: one is classifying, and another one is predicting its output. And the one which predicts the output is trained to predict the best output the classifier can possibly have, which is its output on the training data. I don't know if you follow me, or if it is becoming too complicated. [CHUCKLES] But essentially, we are trying to figure out whether the output looks anything like it looks on the training data. And it seems to work to some extent. Here we have multi-style results; here we have matched results -- this is what we would like to achieve. This, of course, is the oracle: it would be ideal if we knew which stream is best; this is what we would be getting. But what we are getting is not that terribly bad. Certainly, it is typically better than multi-style training. All right. And we have some way to go to the oracle -- not too far from the matched case. Sometimes it's even there, because it makes a decision on every utterance, so sometimes it can do quite well. So we were capable of picking up the good streams and leaving out the bad streams, based only on the output of the classifier. How does it work on previously unseen noise? Fortunately, in this example we still seem to be getting some advantage. We are using noise which has never been seen by the classifiers, but the system was still capable of picking the good classifier, which is actually better than any individual classifier. So this seems to be good. Another technique for dealing with unseen noises -- actually one which I like maybe even a bit more -- is that you do the pre-processing, and some processing, in frequency bands, hoping that the main effect of the different noises is in their spectral shape. If you do the recognition in sub-bands, then in each sub-band the noise starts looking more like white noise, just at different levels. Meaning: here maybe the signal-to-noise ratio is higher, here it is more miserable. But if I have a classifier which is trained on multiple levels of white noise in each frequency band, perhaps I can get some advantage. So I do what the cochlea might be doing: I divide the signal into a number of frequency bands, and then I have one fusion DNN which tries to put these things together. Each of these band nets is trained on multiple noise levels -- but this time of white noise. And it is going to be applied to noises which are not white. So how does it work? You can see how it was done in the case of Aurora. Here we have the examples -- how it works in matched situations. Here is multi-style training, and here is what you get if you apply this technique. With multi-style training you get this, but with the sub-band recognition you are getting half the error rate. Just a simple trick which I think is reasonable: you do sub-band recognition -- there are a number of parallel recognizers, each of them paying attention to a part of the spectrum -- and each of them is trained to handle white noise, simple white noise.
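[A sketch of the autoencoder performance monitor just described, assuming an already-trained autoencoder per stream; `autoencoder.reconstruct` is an assumed interface standing in for whatever trained model you use, not a specific library call.]

```python
import numpy as np

def stream_confidence(posteriors, autoencoder):
    """Lower mean reconstruction error -> stream behaves like training data."""
    recon = autoencoder.reconstruct(posteriors)    # (n_frames, n_phones)
    err = np.mean((posteriors - recon) ** 2)
    return -err                                    # higher is better

def pick_best_stream(stream_posteriors, autoencoders):
    scores = [stream_confidence(p, ae)
              for p, ae in zip(stream_posteriors, autoencoders)]
    return int(np.argmax(scores))
```

The design choice is that the monitor never needs labels at test time: it only measures how much the classifier's output resembles output seen during training.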
But you turn an, in some ways arbitrary, additive noise -- this car noise -- into white-like noise in each sub-band. And that's what you get. So in general, in dealing with unexpected noise, you want to do adaptation -- you want to modify your classifier on the fly. You want to have parts of the classifier, or some streams, which are doing well -- parts of the classifier which are still reliable -- and you want to pick up the streams which are reliable in the unseen situation. So this is what we call multi-stream recognition -- multi-stream adaptation to unknown noise. You assume that not all the streams are going to give you a good result, but at least some of the streams will. And all these streams are trained on, say, clean speech or something. So this is the multi-band processing, all right? This is what we do: we use different frequency ranges, and then we use our performance monitor to pick the best stream. Here is the experiment we did. You would have 31 processing streams created from all combinations of five frequency bands. One stream was looking at the full spectrum, and the others were only looking at parts of the spectrum -- the more black, the more spectrum; the more white, the less; some of them are looking only at a single frequency band. So we have a decent number of processing channels, and we would hope that if the noise comes here, maybe this stream is going to be good -- a recognizer which only uses the bands which are not noisy is going to be good. So this is the whole system. It was published in-- [STATIC] Interspeech. We had this sub-band recognition, the fusion, a performance monitor, and the selection of a stream. This is how it works. This is, again, shown for car noise. Car noise is very nice, because it mainly corrupts the low frequencies, so all these sub-band techniques work quite well. But you can see that it's pretty impressive: if you didn't do anything, you get 50% error; with this one you get 38% error. If you knew which bands to pick -- the oracle experiment, the cheating experiment -- you would be getting about 35%. So that was, I thought, quite nice. Just to conclude: the auditory system doesn't only look like this, where it starts with the signal, does the analysis, and then reduces the bit rate; it also increases the number of views of the signal. And this is based on the fact that there is a massive increase in the number of cortical neurons at the level of the cortex. So there are many ways of describing information at high levels of perception. Essentially, the signal doesn't go through one path; it goes through many, many paths. And then we need to have some means -- and we do have some means -- to pick up the good ones and ignore the other ones, maybe switch them off entirely. This is like vision. So this is the general path of processing the signal: you get different probability estimates for different streams, and then you need to do some fusion and decide at the level of the fusion. How can you create the streams? Well, we were showing you probability estimates trained differently on different noises and on different aspects of the signal -- that is, different parts of the spectrum of the signal.
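[A small concrete illustration of where the 31 streams mentioned above come from: every non-empty subset of five frequency bands defines one stream. The band edges are illustrative.]

```python
from itertools import combinations

BANDS = ['0-0.5k', '0.5-1k', '1-2k', '2-4k', '4-8k']

streams = [subset
           for k in range(1, len(BANDS) + 1)
           for subset in combinations(BANDS, k)]
assert len(streams) == 2 ** len(BANDS) - 1  # 31 streams

# One recognizer is trained per subset; at test time the performance monitor
# picks the stream whose bands are least corrupted (e.g., for car noise,
# streams that exclude the low-frequency bands).
```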
But you can go wild. You can start thinking about different modalities, because -- as we also talked about in the panel -- very often an audiovisual model, if it carries information about the same things, [STATIC] fusion of audiovisual streams. You can also imagine fusion of streams with different levels of priors, different levels of hallucination. Basically, this is what I see human beings doing very often. If the signal is very noisy -- you are at a cocktail party -- you are guessing, because that's the best way to get through if the communication is not very important. It's not about your salary increase, but about the weather. So you are basically guessing what the other people are saying, especially if they speak the way I do, right, with a strong accent or something. So the priors are very important, and streams with priors are very important. We used this to some extent, as I was mentioning, by comparing streams with different priors to discover if the signal is biased in the wrong way by the priors. So, stream formation -- there are a number of PhD theses right there, I think. Fusion too. Selecting the best probability estimates -- I tell you, this is the problem which I was actually asking you to please help me solve, because we still don't know how to do it. I suspect that, especially in human communication, people are doing it like this, which starts making sense if they use a certain processing strategy. People can tell whether the output of their perceptual system makes sense or not. Our machines don't know how to do that yet. Conclusion. Some problems with noise are simple. You can deal with them at the signal processing level by filtering the spectrum, filtering the trajectories, because these effects are very predictable. And if you understand them, you should just do it -- there's no need to train on that; you just do it, and things may work well. Unpredictable effects of noise are typically handled nowadays by multi-style training. And the amounts of training data are enormous nowadays. If you talk to Google people, they say: we are not deeply interested in what you are doing, because we can always collect more data from new environments. But I think it's not -- I shouldn't say dishonest. I'm sorry. Scratch it. [LAUGHTER] It's not the best engineering way of dealing with these things, because I think the good engineering way is to get away with less training and that sort of thing, and maybe follow what I believe human beings are doing. So we have a lot of parallel experts working with different aspects of the signal, giving us different pictures. And then we need to pick up the good ones. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_92_Haim_Sompolinksy_Sensory_Representations_in_Deep_Networks.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HAIM SOMPOLINSKY: My topic today is sensory representations in deep, cortex-like architectures. I should say the topic is perhaps "toward a theory of sensory representations in deep networks." As you will see, our attempt is to develop a systematic theoretical understanding of the capacity and limitations of architectures of that type. The general context is well known. In many sensory systems, information propagates from the periphery, like the retina, to primary visual cortex, and then, of course, through many stages up to a very high level, maybe the hippocampal structure. It's not purely feedforward: there are massive backward, or top-down, connections, and recurrent connections, and I'll talk about some of those extra features. But the most intuitive feature is simply a transformation, or filtering, of data across multiple stages. Similarly in the auditory pathway. In other systems we see a similar structure, or aspects of a similar structure, as well. A well-known classical model system for computational neuroscience is the cerebellum, where information comes from the mossy fiber layer, then expands enormously into the granule layer, and then converges onto a Purkinje cell. So if you look at a single Purkinje cell, the output unit of the cerebellum, you see there is first an expansion from the mossy fibers to a population two orders of magnitude larger in the granule layer, and then a convergence of 200,000 or so parallel fibers onto a single Purkinje cell. And there are many, many modules of that type across the cerebellum. So, again, a transformation which involves, in this case, expansion and then convergence. In the basal ganglia -- which I wouldn't categorize as a sensory system; it's more related to motor function -- you nevertheless see cortex converging first onto various stages of the basal ganglia, and then expanding again to cortex. The hippocampus also has multiple pathways, but some of them include a convergence -- for instance, convergence to CA3, and then expansion again to cortex. But there are other multiple pathways as well, with different stages of sensory information propagating across them. And, finally, the artificial network story of deep neural networks, which all of you may have heard: an input layer, then a sequence of stages, purely feedforward. At least the canonical leading network is one where the output layer has an object recognition, object classification task, and the whole network is trained by backprop, supervised learning, for that task. What I'll talk about is more in the spirit of the idea that the first stages are more general-purpose than the specific classification task in the output layer. So there are many issues: the number of stages that are required, their sizes, why compression or expansion. In many systems, you see that the fraction of active neurons is small in the expanded layer. That's what we call sparseness. So high sparseness means a small number of neurons active for any given stimulus. The terminology is somewhat confusing: high sparseness is a small number of active neurons.
One important and crucial question is how to transform: what are the filters, the weights, that are good for transforming sensory information from one layer to another? And, in particular, whether random weights are good enough -- or maybe even optimal in some sense -- or whether one needs more structured, more learned synaptic weights. This is a crucial question, perhaps not for machine learning but for computational neuroscience, because there is some experimental evidence, for at least some of the systems that have been studied, that the mapping from the compressed, original representation to the sparse representation is actually done by randomly connected weights. One example is olfactory cortex. The mapping of the olfactory representation from the olfactory bulb -- from the glomerular layer -- to the piriform cortex seems to be random, as far as one can say. Similarly, in the cerebellum, the example I mentioned before: when one looks at the mapping from the mossy fibers to the granule cells -- again an enormous expansion by a few orders of magnitude -- the connections nevertheless seem to be random. Now, of course, one cannot say exclusively that they are random and that there are no subtle correlations or structures. But, nevertheless, there is a strong motivation to ask whether random projections are good enough. And if not, what does "structured" mean? What kind of structure is appropriate for this task? Then there are the questions of top-down and feedback loops, recurrent connections, and so on. That's all I hope to at least briefly mention later in my talk. Before I continue: a large part of the talk will be based on published and unpublished work with Baktash Babadi, who was until recently a postdoctoral Swartz Fellow at Harvard University and went on to practice medicine; Elia Frankin, a master's student at the Hebrew University; SueYeon, whom you all know here, at Harvard; Uri Cohen, a PhD student at the Hebrew University; and Dan Lee from Penn. So here is our formalization of the problem. We have an input layer, denoted 0. Typically it's a small, compressed layer with a dense representation: here, every input will activate maybe half of the population, on average. Then there is a feedforward layer of synaptic weights, which expands to a higher-dimensional layer, which we call the cortical layer. It is expanded in terms of the number of neurons -- this will be S1 -- and it is sparse, because f, the fraction of neurons that are active for each given input vector, will be small. So it is expanded and sparse. That will be the first part of my talk. Later on, I'll talk about staging, cascading this transformation through several stages. And ultimately there is a readout, which will be some classification task: 1 will be one classification, 2 will be another classification rule, et cetera, each of them with synaptic weights which are learned to perform that task. So we call that the supervised layer, and the rest the unsupervised layers. That's the formalization of the problem. And, as you will see, we'll make enormously simplifying abstractions of the real biological system in order to try to gain some insight into the computational capacity of such systems. So the first important question is: what is the statistics, the statistical structure, of the input? The input is an n-dimensional vector, where n, or n0, is the number of units in the input layer. Each sensory event evokes a pattern of activity here. But what is the statistical structure that we are working with?
The simplest one that we are going to discuss is the following. We assume, basically, that the inputs come from a mixture of Gaussian statistics, so to speak. It's not going to be exactly Gaussian because, for simplicity, we'll assume they are binary, but this doesn't actually matter. So imagine that this is a graphical representation -- a caricature -- of a high-dimensional space, and imagine that the sensory inputs are clustered around templates, or cluster centers. These are the centers of these balls, and the inputs themselves come from the neighborhoods of those templates. So each input will be one point in this space, originating from one of those ensembles, one of those states. That's the simple architecture. In real space, it will be a mapping from one of those states into another state in the next layer. And then, finally, the task: imagine that some of those balls are classified as plus. Say these are olfactory stimuli, and some of them are classified as appetitive, some of them as aversive. The readout unit at the output layer has to classify some of those spheres as plus and some as minus. And, of course, depending on how many of them there are, the dimensionality, and their locations, this may or may not be an easy problem. For instance, here it's fine -- a linear classifier on the input space can do it. Here, I think, there should be some mistakes. Yeah, here. So here is a case where a linear classifier at the input layer cannot do it. And that's a theme which is very popular, both in computational neuroscience and systems neuroscience studies and in machine learning. The following question comes up. Suppose we see that there is a transformation of data from, let's say, the photoreceptor layer in vision to the ganglion cells at the output of the retina, then to cortex in several stages. How do we gauge, how do we assess, what the advantage is for the brain in transforming information from, let's say, retina to V1, and so on and so forth? After all, in this feedforward architecture, no net information is generated at the next layer. So if no net information is generated, the question is: what did we gain by these transformations? And one possible answer is that the sensory representation is reformatted into a different representation which makes subsequent computations simpler. What does it mean for subsequent computation to be simpler? One notion of simplicity is whether the subsequent computation can be realized by a simple linear readout. That's the strategy we are going to adopt here: to ask, as the representation changes from one layer to another, how well a linear readout is able to perform the task. So that's the input; that's the story. And then, as I said, there is an input, unsupervised representations, and a supervised readout at the end. I need to introduce notation. Bear with me -- this is a computational talk. I cannot just talk about ideas, because the whole point is to be able to actually come up with a quantitative theory that tests ideas. So let me introduce notation. At each layer, you can ask what the representation of the centers of these stimuli is. I'll denote the center by a bar. And mu is the index of the patterns; mu goes from 1 to P.
P is the number of those balls, those spheres -- the number of clusters, if you think about clustering some sensory data. So P is the number of clusters. i, from 1 to N, simply indexes the neuron, the unit activation, at each mu. And L is the layer: 0 is the input layer, up to layer L. The mean activation at each layer from 1 on will be held constant at f; f goes from 0 to 1, and the smaller f is, the sparser the representation. We will assume that the input representation is dense, so there it is 0.5. N, again, we'll assume for simplicity to be constant across layers, except for the first layer, where there is expansion. You can vary those parameters, and the theory actually accommodates such variations, but that's the simplest architecture: you expand a dense representation into a sparse, higher-dimensional one, and you keep doing it as you go along. So that's the notation. Now, how do we assess what the next stages are doing to those clusters? As I said, one measure is to take a linear classifier and see how it performs. But you can also look at the statistics of the projected sensory stimuli at each layer and learn something from that. Basically, I'm going to suggest looking at two major statistical aspects of the data in each layer of the transformation. One of them is noise, and one of them is correlation. So what is noise? Noise will simply be the radius, or a measure of the radius, of the sphere. If you had only the templates as inputs, the problem would be simple: as long as you have enough dimensions, you expand, and you can easily do a linear classification and solve the problem. The problem, in our case, is the fact that the input is actually an infinite -- or exponentially large -- number of possible inputs, because they all come from Gaussian noise, or a binarized version of Gaussian noise, around the templates. I'll denote the noise by delta. 0 means no noise. The normalization is such that 1 means random: delta equal to 1 means that, basically, you cannot tell whether the input is coming from here or from any other point in the input space. The other thing, correlations, is more subtle. I'm going to assume that those balls come from a kind of uniform distribution. Imagine you take a template here and draw a ball around it; you take a template there and draw a ball. Everything is uniformly distributed. The only structure is the fact that the data comes from this mixture of Gaussians, these noisy patterns around the centers. So that's fine. But as you project those clusters into the next stage, I claim that those centers, those templates, get a new representation which can actually have structure in it, simply from the fact that you put all of them through the common synaptic weights into the next layer. I'm going to measure this by Q. Basically, low or zero Q means randomly, uniformly distributed centers, and I'll always start from that at the input layer. But then there is a danger that, as you propagate this representation through the next layers, the centers will look like this, or the structure of the data looks like this: on average, the distance between two centers is the same as here, but they are clumped together. It's a kind of random clustering of the clusters.
And that can be induced by the fact that the data is fed forward from this representation. That can pose a problem. If there is no noise, then there is, again, no problem: you can differentiate between them, and so on. But if there is noise, this can aggravate the situation, because some of the clusters become dangerously close to each other. We will come back to it. But, anyway, we have delta, the noise -- the size of the clusters -- and we have Q, the correlations -- how clumped they are in each representation. And now we can ask how delta evolves as you go from one representation to another, how Q evolves, and how linear classifier performance changes from one representation to another. The simplicity of these assumptions allows for a systematic, analytical exploration of this. These are definitions; let's go on. So what would be the ideal situation? The ideal situation would be that I start from some level of noise -- my spheres at the input layer -- and I may or may not start with some correlation; the simplest case is that I start from randomly distributed centers, so Q would be 0. And the best outcome would be that, as I propagate the sensory stimuli, delta, the noise, goes to 0. As I said, if the noise goes to 0, you are left with basically points, and those points, if there is enough dimensionality, would be easily classifiable. It would also be good, if the noise doesn't go to 0, to have uniformly spread clusters -- so it would be good to keep Q small. So let's look at one layer. We have the input layer, the output layer here, and the readout. The first question is what to choose for this input-to-cortical mapping. The simplest answer would be: choose random. So what we do is just take Gaussian weights. The Gaussian weights in this layer are very simple: zero mean, with some normalization -- it doesn't matter. We project the inputs through them, and then we add a threshold to enforce the sparsity that we want. Whatever the input is, the threshold makes sure that only the fraction f of units with the largest inputs will be active, and the rest will be 0. So there is a nonlinearity, which is of course extremely important: if you map one layer to another with a linear transformation, you gain nothing in terms of classification. So there is a nonlinearity -- simply a threshold nonlinearity after a random projection. All right. How are we going to analyze this? It's straightforward to compute analytically what happens to the noise. Imagine you take two input vectors some Hamming distance apart from each other. You map them by convolving them, so to speak, with the Gaussian weights, and then thresholding to get some sparsity. So f is the sparseness: the smaller f is, the sparser it is. This is the noise level -- the normalized sphere radius, or Hamming distance -- in the output layer versus the input layer. If you start at 0, of course you stay at the origin; if you are random at the input, you will be random there. So those points are fine. But, as you see, immediately there is an amplification of the noise as you go from the input to the output. You start from 0.2, but after one layer you actually get to 0.6. And the sparser the representation is, the worse it gets: this curve is for a relatively high sparseness, and as you go from here to here by increasing sparseness -- namely, f becomes smaller -- the curve becomes steeper.
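[A minimal sketch of the random-expansion layer and the noise measurement, under the assumptions above: dense binary inputs, Gaussian random weights, and a threshold that keeps only the fraction f of units with the largest inputs. All sizes are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
N0, N1, f = 100, 1000, 0.05

def expand(s, W, f):
    h = W @ s
    theta = np.quantile(h, 1 - f)      # threshold enforcing sparseness f
    return (h > theta).astype(float)

W = rng.normal(scale=1.0 / np.sqrt(N0), size=(N1, N0))

center = (rng.random(N0) < 0.5).astype(float)   # dense template, mean 0.5
noisy = center.copy()
flip = rng.random(N0) < 0.1                     # input noise: flip 10% of bits
noisy[flip] = 1 - noisy[flip]

r_c, r_n = expand(center, W, f), expand(noisy, W, f)
# Normalized Hamming distance at the output, relative to chance 2f(1-f):
delta_out = np.mean(r_c != r_n) / (2 * f * (1 - f))
print(delta_out)  # typically well above the input noise level: amplification
```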
So not only do you amplify noise, but the amplification becomes worse the sparser the representation is. So that is a kind of negative result. The idea that you can gain by expanding data to a higher dimension and making it more separable later on dates back to David Marr's classical theory of the cerebellum. But what we show here is that if you think not about clean data -- a set of points that you want to separate -- but about the more realistic case of noisy data, or data with high variance, then the situation is very different. A random expansion actually amplifies the noise. And that's a theme that we will live with as we go along. Random expansion does separate the templates. But the problem is that it also separates two nearby points within a cluster. Everything becomes separated from everything else, and this is why noise is amplified. Now, what about the more subtle thing, the overlap between the centers? On average, the centers are as far apart as random points. But if you look not at the average but at the individual pairs, you see that there are excess correlations, or overlaps, between them. So this is the overlap between the centers: on average it is 0, but the variance is not 0 -- it is larger than random. There is an amplification, a generation of this excess overlap, although it is nicely controlled by sparsity: as f goes down, these correlations go down. So that's not a tremendous problem. The major problem, as I said, is the noise. By the way, you can do a nice exercise where you take this cortical-layer representation, do SVD, or PCA, and look at the eigenvalue spectrum. If you just look at random sparse points and do the SVD -- these are the eigenvalues, ranked -- this is what you find: the famous Marchenko-Pastur distribution. But in our case, you see there is extra power -- in this case the input layer is 100, so there is extra power in the first 100 eigenvalues. Now, why is that? What nonzero Q is telling us is the following. You take a set of random points and project them into higher dimensions: you start with 100 dimensions and project into 1,000 dimensions. On average, they look random. You would imagine that, since you project them with random weights, you have just created a set of random points in the expanded representation. If that were so, then doing SVD or PCA on this representation would give you what you expect from a PCA of a set of random points -- this curve. In fact, there is a trace of low dimensionality in the data. I think that's an important point, which I would like to explain. You start from a set of points. If you don't threshold them and just map them into 1,000-dimensional space, those 100-dimensional inputs remain 100-dimensional -- just rotated, and so on; everything lives in a 100-dimensional subspace. Now you add thresholding -- a high threshold, high sparsity. That 100-dimensional subspace now becomes 1,000-dimensional because of the nonlinearity. But this nonlinearity, although it takes 100-dimensional inputs and makes them 1,000-dimensional, still does not make them random.
That 1,000-dimensional cloud is still elongated. It's not simply uniformly distributed. And this is the signature that you see here. In the first, largest 100 eigenvalues, there is extra power relative to the random. The rest is not 0. So if you look here, this goes up to 1,000. The rest is not 0. So the system is, strictly speaking, in a 1,000-dimensional space, but it's not random. It has increased power in 100 channels. If you do a readout, a linear classifier readout, what you find in this-- again, when you expand with random weights, you find that there is an optimal sparsity. So this is the readout error for a classifier, as a function of the sparsity, for different levels of noise. And you see that, in the case of random weights, very high sparsity is bad. There is an optimal sparsity or sparseness, and then there is a shallow increase in the error when you go to a denser representation. One important point which I want to emphasize coming from the analysis-- let me skip the equations-- is what you see here. The question is, can I do better by further increasing the layer size? So here I plot the readout error as a function of the size of the cortical layer. Can I do better? If I make the kernel dimensionality infinite, can I do better? Well, it can do better if you start with 0 noise. But if you have noisy inputs, then basically the performance saturates. And that's kind of surprising. We were expecting that, if you go to a larger and larger representation, eventually the error would go to 0. But it doesn't go to 0. And that actually happens even for what we call structured representations. And that's the same for different types of readout-- perceptron, pseudo-inverse, SVM. All of them show this saturation as you increase the size of the cortical layer. And that's one of the very important outcomes of our study. When you talk about noisy inputs, you can think about it as a kind of generalization task. Then there is a limit on what you gain by expanding the representation. Even if you expand in a nonlinear fashion and you increase the dimensionality, you cannot combat the noise beyond some level. Beyond some level, there is no point in further expansion, because basically the error saturates. Since time goes fast, let me talk about the alternatives. So if random weights are not doing so well, what are the alternatives? The alternative is to do some kind of unsupervised learning. Here we are doing it with a kind of shortcut of unsupervised learning. What is the shortcut? We say the following. Imagine that, in these layers, the learner knows about the representation of the clusters. It doesn't know the labels. In other words, it doesn't know which ones are pluses and which ones are minuses. But it does know about the statistical structure of the input, and this is this S bar. These are the centers. So we want to encode the statistical structure of these inputs in these expansion weights. And the simplest way we do it is a kind of Hebb rule. We do the following. We say, let's first choose, or recruit, or allocate a state, a sparse state here, randomly chosen, to associate with, to represent, each one of the clusters. So these are the R. R are the randomly chosen patterns here. And then we associate between those randomly chosen representations and the actual centers of the clusters of the inputs. So this is S bar and R. And then we do the association by the simple, what's called Hebb rule.
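A minimal sketch of that recipe, in the same toy setup: allocate a random sparse state R for each cluster, then wire the expansion weights as a sum of outer products between each R and its cluster center S bar. The outer-product normalization and the ±1 center convention are my assumptions; the talk itself only specifies a "simple summation" Hebb rule:

import numpy as np

rng = np.random.default_rng(2)
N_in, N_out, P, f = 100, 1000, 50, 0.05      # P clusters, illustrative sizes

S_bar = rng.choice([-1.0, 1.0], size=(P, N_in))     # cluster centers
R = (rng.random((P, N_out)) < f).astype(float)      # randomly allocated sparse states

# Hebb rule: sum over clusters of outer products (target state) x (center)
W = (R.T @ S_bar) / N_in                             # shape (N_out, N_in)

def layer(x):
    h = W @ x
    thr = np.quantile(h, 1.0 - f)
    return (h >= thr).astype(float)

mu = 0                                               # probe one cluster
x = S_bar[mu]
flip = rng.random(N_in) < 0.2                        # 20% input noise
x_noisy = np.where(flip, -x, x)

y_clean, y_noisy = layer(x), layer(x_noisy)
d_out = np.mean(y_clean != y_noisy) / (2 * f * (1 - f))   # normalized output noise
recovered = (y_noisy @ R[mu]) / R[mu].sum()               # overlap with assigned state
print(f"normalized output noise {d_out:.2f}, overlap with R^mu {recovered:.2f}")

With these parameters the output noise typically comes out far below the random-weights case, and the overlap with the assigned sparse state is close to 1, which is the quenching described next.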
So this Hebbian rule associates a cluster center with a randomly assigned state in the cortical layer, by a kind of simple summation or outer product for the Hebb rule. There are more sophisticated ways to do it, but that's the simplest way of doing it. So it turns out that this simple rule has enormous potential for suppressing noise. So, again, this is the input noise and the output noise-- the Hamming distance of the input and the output, properly normalized. And you see that, as you go to higher and higher sparseness, to lower and lower f, the input noise is basically completely quenched, even when it is large. When f is 0.01, for instance, it is this curve already. When f is 0.05 it is here, sub-linear, and so on and so forth. So sparse representations, in particular, are very effective in suppressing noise, provided the weights have this kind of unsupervised learning encoded into them, which embeds into them the cluster structure of the inputs. The same or a similar thing is true for Q, for these correlations. If you look at the-- this was the random correlation. This is a function of f, and this is Q, the correlation. It's extremely suppressed for sparse representations. Basically, it's exponentially small in 1/f, so it's basically 0 for sparse representations. Which means that those centers look randomly distributed, essentially, and with very small noise. So you took these spheres and you basically mapped them into random points with a very small radius. So it's not surprising that, in this case, for small f, even for large noise values, the error is basically small, near 0. Nevertheless, it is still saturating as a function of the network size, of the cortical size. So the saturation of performance as a function of cortical size is a general property of such systems. Nevertheless, the performance itself for any given size is extremely impressive, I would say, when the system is sparse and the noise level is kind of moderate. OK, let me skip this because I don't have time. Let me briefly talk about the extension of this story to multi-layer networks. So we are now briefly discussing what happens if you take this story and you just propagate it as you go along the architecture. So let's start with random weights. So the idea is maybe something good is happening. Although initially performance was poor, maybe we can improve the performance by cascading such layers. And the answer is no, particularly for the noise level. This is now the number of layers. What we discussed before is here. And you see the problem becomes worse and worse. As you continue to propagate those signals, the noise is amplified and essentially goes to 1. So basically you will get just random performance if you keep doing it with random weights. The reason-- where is it? I missed a slide. The reason is, basically, that if you think about the mapping from one layer's noise to another layer's noise, there are two fixed points, 0 and 1. The 0 fixed point is unstable. Everything eventually goes to 1. So it is a nice-- this system gives you a nice perspective on this deep network by thinking about it as a kind of dynamical system. For instance, what is the level of noise at one layer, and how is it related to the level of noise at the previous layer. So it's a kind of iterative map: delta n versus delta n minus 1. And what's good about it is, once you kind of draw this curve, how one layer is mapped to another layer, you can know what happens to a deep network. We can just iterate this.
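The iterative-map view can be played with directly. The two toy maps below are stand-ins chosen purely for their qualitative shape, not the analytically derived maps: the first has fixed points at 0 (unstable) and 1 (stable), like the random-weights case, and the second, anticipating the structured case discussed next, has 0 and 1 stable with an unstable fixed point in between (placed at 0.5 here; at a higher value in the talk):

import numpy as np

# toy layer-to-layer noise maps (qualitative stand-ins only)
random_map = np.sqrt                          # slope > 1 near 0: small noise grows
structured_map = lambda d: d**2 * (3 - 2*d)   # smoothstep: 0 and 1 stable, 0.5 unstable

def iterate(g, d0, n_layers=30):
    d = d0
    for _ in range(n_layers):
        d = g(d)                              # noise after one more layer
    return d

for d0 in (0.05, 0.2, 0.4, 0.6):
    print(f"start {d0:.2f}: random -> {iterate(random_map, d0):.3f}, "
          f"structured -> {iterate(structured_map, d0):.3f}")
# random map: every starting noise level drifts to 1, so deep random nets fail;
# structured map: anything below the unstable point is quenched to 0 layer by layer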
You have to find what the fixed points are, and which one is stable and which one is not. In this case, the 1 is stable, the 0 is unstable. So, unfortunately, from any level of noise that you start at, you eventually go to 1. Correlations are a similar story, and the error will go to 0.5 as well. There are cases, by the way, where you can find parameters such that initially you improve, like here. But then eventually it will go to 0.5. Now, if we do similar-- if we compare this to what happens with the structured weights, if you keep doing the same kind of unsupervised Hebbian learning from one layer to another-- and I'll skip the details-- you see the opposite. So here are parameter values in which one expansion stage is actually increasing the noise. And this is because f is not too small, and the load is large, and the starting noise is large. So you can have such a situation. But even in such a situation, eventually the system goes into stages where the noise basically goes to 0. And if you compare the story of why it is so to the iterative map picture, you see that the picture is very different. You have one fixed point at 0. You have one fixed point at 1. You have an intermediate fixed point at a high value. But this one is an unstable fixed point, and the other two are stable fixed points. So even if you start from large values of noise, eventually you will iterate to 0. So it does pay to actually go through several stages of this deep network to make sure that the noise is suppressed to 0. Similarly for the correlations. Even if the parameters are such that initially correlations are increased, and you can find parameters like that, eventually correlations will go to almost 0. And this is a comparison of the readout error as a function of the layers with structured weights, against the readout error of an infinitely wide layer, a kind of kernel with infinite width. And you can see that, at least for-- here I compare the same type of unsupervised learning but two different architectures. One is a deep network architecture, and the other one is a shallow architecture, infinitely wide. I'm not claiming that we can show that there is no kernel or shallow architecture which will do better, but I'm saying that if we compare the same learning rule with the two different architectures, you'll find that you gain more by going through multiple stages of nonlinearity than by using an infinitely wide layer. I'll skip this. I want to go briefly to two more issues. One issue is recurrent networks. Why recurrent networks? The primary reason is that, in each one of those stages that I referred to, if you look at the biology, in most of them-- not all of them, but most of them, and definitely in neocortex-- you find massive recurrent or lateral interactions within each one of the layers. So, again, we would like to ask, what is the computational advantage of having this recurrent layer? Now, in our case, we had an extra motivation, and this is-- remember that I started by saying that, in some cases, there is experimental evidence that the initial projection is random. So we asked ourselves, what happens if we do this: if we start from a random feedforward projection, and then add recurrent connections. Think about it as going from the olfactory bulb, for instance, to piriform cortex-- perhaps random feedforward projections. But then the associational, recurrent connections in piriform cortex are structured. How do we do that?
We imagine starting from a random projection, generating an initial representation by the random projection, and then stabilizing those representations into attractors by the recurrent connections. And that actually works pretty well. It's not the optimal architecture, but it works pretty well. For instance, the noise, which is initially increased by the random projections, is quenched by the convergence to attractors. And, similarly, Q will not go to 0, but it will not continue growing; it will go to an intermediate level. And the error behaves pretty well. So if you look at this case, the error really goes down to very low values. But now it's not layers. Now it is the number of iterations of the recurrent connections. So you start from just the input layer, or the random projection, and then you iterate the dynamics and it goes to 0. So it's not the layers. It's just the dynamics of the convergence to the attractor. My final point. I have 3 or 4 minutes? OK. My final point before wrapping up is the question of top-down. So recurrent, we briefly talked about it. But incorporating contextual knowledge is a major question. How can you improve on deep networks by incorporating, not simply the feedforward sensory input, but other sources of knowledge about this particular stimulus? And it's important that we are not talking about knowledge about the statistics of the input, which can be incorporated into the learning of the feedforward weights. We're talking about inputs, or knowledge, which we have now, when the network has already learned whatever it has learned. So we have a mature network, whatever the architecture is. We have a sensory input. It goes feedforward. And now we have additional information, about context for instance, that we want to incorporate with the sensory input to improve the performance. So how do we do that? It turns out to be a non-trivial computational problem. It is very straightforward to do it in a Bayesian framework, where you simply update the prior on what the sensory input is with this contextual information. But if you want to implement it in the network, you find that it's not easy to find the appropriate architecture. So I'll just briefly talk about how we do it. So imagine you have, again, these sensory inputs, but now there is some context, different contexts. And imagine you have information that the input is coming from that particular part of state space. So basically the question is how to selectively amplify a specific set of states in a distributed representation. So usually when we talk about attention, or gating, or questions like that, we think about, OK, we have these neurons. We suppress those, or maybe amplify other ones. Or we have a set of axons, or pathways. We suppress those, and amplify those. But what about a representation which is more distributed, where you have to really suppress states rather than neural populations? So I just won't go-- again, it's a complicated architecture. But, basically, we're using some sort of a mixed representation, where we take the sensory input and the category or contextual input, mix them through a nonlinearity, use them to clean it, and propagate this. So it's a more complicated architecture, but it works beautifully. Let me show you here an example, and you'll get a flavor of what we are doing. So now, for the input, we have those 900 spheres or templates, but they are organized into 30 categories, with 30 tokens per category. Now, the tokens, which are the actual sensory inputs, are represented by, let's say, 200 neurons.
And you have a small number of neurons representing a category-- maybe 20 is enough. So that's important: you don't really have to expand the representation dramatically. So this is the input. And now we have very noisy inputs. If you look at the readout-- this is layers here, and this is the readout error-- if you do it on the input layer, or any subsequent layer here, but without top-down information, even with structured interactions and all that I told you, this is such a noisy input that the performance is basically 0.5. There is nothing that you can do without top-down information in this network. You can ask what the performance will be if you have an ideal observer that looks at the noisy input and makes a maximum likelihood categorization. Well, then it will do much better. Also not 0, so this is at this level. This higher error is by virtue of the fact that this network is still not doing what an optimal maximum likelihood observer would do. So this is the network. This is a maximum likelihood readout, both of them without extra top-down information. And in the network that I kind of hinted about, if you add this top-down information by generating a mixed representation, you get a performance which is really dramatically improved. And as you keep doing it from one layer to another, you really get a very nice performance. So let me just summarize. There is one more before summarizing. Yeah, OK. Before that. OK. So two points to bear in mind. One of them is that what I discussed today relies on comparing random projections to unsupervised learning of a very simple type, of a kind of Hebbian type. The readout can be Hebbian, or perceptron, or SVM, and so on. You could ask, what happens if you use more sophisticated learning rules for the unsupervised weights? Some of them we've studied. But, anyway, that's something which is important to explore. And another very important issue for thinking about object recognition in vision and in other real-life problems is input statistics. Because what we assumed is a very simple mixture-of-Gaussians model. So you can think of the task of the network as taking the invariance, which is the variation away from the center, the spherical variation, and generating a representation which is invariant to that. But this is a very simple invariance problem, because the invariance was restricted to these simple geometric structures. Problems which are closer to real-life problems will have inputs which essentially have some structure, but the structure can be of a variety of shapes. Each one of them corresponds to an object, or a cluster, or a manifold representing an entity, a perceptual entity. But how you go from this nice, simple spherical invariance problem to those problems is of course a challenging problem. And that's ongoing work, also with SueYeon Chung and Dan Lee. But it's a story which is still at the stage of unfolding. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_42_Anmon_Shashua_Applications_of_Vision.txt | AMNON SHASHUA: So unlike most of the talks that you have been given, I'm not going to teach you anything today. So it's not going to be a teaching type of talk. It will be more towards let's look at the crystal ball and try to see how the future will unfold, a future where computer vision is a major agent in this transformation. So I'll start with transportation. This is the field where Mobileye is active. And then I'll move towards wearable computing, the field where OrCam is active. These are two companies that I co-founded: Mobileye in 1999 and OrCam in 2010. So before that, just a few words about computer vision. I'm assuming that you all know about computer vision. It's the science of making computers see and extract meaning out of images, out of video. This is a field that in the past 20 years, through machine learning, has made a big jump. And in the past four years, through deep learning, it has made another jump, where there are certain narrow areas in computer vision and perception where computers reach human level perception and even surpass it. Facial recognition is one of those areas. And the belief is that in many narrow areas in computer vision, within the next five years we'll be able to reach human level perception. So it's a major branch of AI. It goes together with machine learning and, as I said, has made major progress. And one very important thing, which is relevant to the industrial impact of computer vision, is that cameras are the lowest cost sensor that you can imagine. A camera sensor costs a few dollars. A lens costs a few dollars. All the rest is computing. And every sensor needs computing. So if you can reach human level perception with a camera, you have a sensor whose cost is so low that it can be everywhere. And this is very-- this is very important. So I'll show you where things are standing in terms of avoiding a collision. So avoiding collision: you have a camera behind the windscreen, facing forward, and you analyze the video coming from the camera. And the purpose of this analysis is to avoid collisions. So what does it mean to avoid collisions? The software needs to detect vehicles, it needs to detect pedestrians, lane markings, traffic signs, traffic lights, and lanes, to know where the car is positioned relative to the lanes. And then it sends a signal to the car control systems to avoid an accident. So let's look under the hood at what this means. So I'll let this run a bit until all the information appears. So if we stop here, what do we see? The bounding boxes around cars mean that the system has detected cars. Red means that this vehicle is in our path. The green line here is the detection of the lane. This is a traffic-- this is a no entry traffic sign. This is a traffic light being detected here. These are the pedestrians and cyclists. Even a pedestrian standing here is being detected. Let's now let this run a bit further. All right. So these are pedestrians crossing the street. So this is running at about 36 frames per second.
So now imagine also the amount of computation that is running here. Again, this is the traffic sign, traffic light, pedestrians, pedestrians here. So this is-- this is what the system does today: detect objects, detect lane marks, measure distances to the objects. And in case you are about to hit an object, the car would engage. At first it will give warnings. Then later it will apply automatic, autonomous braking in order to avoid the accident. And here is a list of many, many functions that the camera does in terms of detecting objects and detecting-- trying to interpret the visual field at a level of detail that is increasing over the years. Now computer vision is also creating a disruption. So if you would ask an engineer, say, 15 years ago, what is a camera good for in this space? The engineer would say the camera is good for detecting lanes. Because there's no other sensor that can, you know, find the lane marks, not a radar, not a laser scanner. And it may be good for helping the radar in fusion-- radar-camera fusion-- to compensate for shortcomings of the radar. Traffic signs, OK, it will be good for traffic signs. But that's it. But what happened over the years is that the camera slowly started taking territory from the radar. Until today, when the camera is really the primary sensor for active safety. Active safety is all this area of avoiding accidents. And you can see this through this chart. So in 2007, we launched the first camera-radar fusion. So there's no disruption there. This is what normally people would think a camera is good for, combining with a radar. 2008, the camera is also doing traffic sign recognition. No disruption there. 2010, the camera's doing pedestrian detection. No disruption there, because there's no other sensor that can reliably detect pedestrians. Because they reflect radar very, very weakly. And pedestrians are mostly stationary objects. And radars are not good at detecting stationary objects. But then in 2011, there's the first camera-only forward collision warning. And that was the beginning of a disruption. So forward collision warning is to detect a vehicle in front and provide a warning if you are about to collide with the vehicle. And this was a function that typically was in the territory of radars. So a radar sensor is very good at detecting vehicles, very good at ranging; very, very accurately it can get the range of a vehicle, say 100 meters away, to an accuracy of a few centimeters. No camera can reach those accuracies. So nobody believed that one day a camera would take over from the radar and do this function. And this is what happened in 2011. And why did this happen? This happened because of a commercial constraint. The regulator, the American regulator, the National Highway Traffic Safety Administration, NHTSA, decided that by 2011 all cars need to have as an option two functions, forward collision warning and lane departure warning. Now this creates a problem, because forward collision warning requires a radar. Lane departure warning requires a camera. So now you put two sensors in the car, and it's expensive. If you can do it with one sensor, like with a camera, then you save a lot of money. So this pushed the car industry to adopt the idea that the camera can do forward collision warning. And like all disruptions, once you start small you kind of grow very, very fast. So in 2013 the camera is not only providing warning, but also safe distance keeping to the car in front. It's called adaptive cruise control. Then 2013 also provides emergency braking.
So the camera not only decides that you're about to collide with a vehicle, it will also apply the brakes for you. So 2013 was only partial braking, to avoid the accident up to 30 kilometers per hour. And then in 2015, this was a few months ago, the camera is now involved in full braking. It's one g of braking, avoiding an accident at about 70-80 kilometers per hour, and mitigating an accident up to 220 kilometers per hour, just the camera. So the camera is taking over and becoming the primary sensor in this area of active safety. Now, why is that? As I said, these are the milestones of the camera disruption. First, the camera has the highest density of information as a sensor. You know, with a laser scanner or radar, the number of pixels per angle, per degree, is much, much smaller. It's orders of magnitude smaller than a camera's. So you have a lot, a lot of information from the camera. It's the lowest cost sensor. And also the cameras are getting better in terms of performance under low light. So with a camera today you can do much more, not only because computing has progressed, not only because algorithms are now better, but also because the physics of the camera is progressing over time, especially the light sensitivity of the camera. So we also came to the conclusion that we need to build our own hardware and our own chip. And these are very, very advanced microprocessors that, per silicon area, are about 10 times more efficient than any general purpose chip. And I'll not spend more time on this. And so this field has two major trends. One, on the left hand side, is the active safety, which is driven by regulators. So the regulators see that there is a sensor that is very low cost and saves lives. So what does the regulator do? They incentivize this kind of function to the car industry by coupling it to star ratings. So if you want to get your four stars or five stars, the NCAP stars, on the car, you have to have this kind of technology as a standard fit in the car. So this pushes the industry by mandates. It pushes the industry to have active safety installed in every car. So by 2018 every new car will have such a system. The right hand side is the trend to the future, which is autonomous driving. Now autonomous driving has two facets. One is bringing the probability of an accident to an infinitesimally small probability. So zero-- zero accidents. Because the more you delegate the driving experience to a robotic system, the less the chance of an accident. So it brings us to an era where there will be no accidents. But not less importantly, it has the potential to transform the entire transportation business: how we own cars, how we build cars, the number of cars that would be produced. And I'll spend a bit more time on that as I go forward. Now, in terms of the left hand side, the regulation, this is an example. So you see here a Nissan Qashqai 2014 has five stars. And to see how it gets the five stars, what you see here-- these are all the tests. These are autonomous emergency braking tests. The car needs to detect the car in front, the target car, and apply the brakes before the collision. And the car is being tested. And without that they'll not get the five stars. You can see this also in the number of chips that have been shipped. So every car has a chip. So this chip, the microprocessor, is getting the information from the camera, and all the algorithms are on this microprocessor. So we started launching this in 2007.
So in the first five years there were one million chips, so one million cars with the technology. And then in 2013 alone, 1.3 million. Then you see here, 2014, 2.7. This year is going to be about five million. So you see this doubling. And this is really the effect of the regulation. So in many industries regulation is an impediment. In this industry, regulation is something good. It pushes the industry to install these kinds of systems, you know, standard. OK. So another example of how this is moving: there's also an increasing awareness. So this is a commercial from the 2014 Super Bowl by Hyundai. So Hyundai is showcasing their new vehicle called Genesis. Now, there are many things that you can show when you want to showcase a new vehicle. You can talk about the design of the vehicle. You could talk about the engine, the infotainment. But they chose to talk about the active safety. So I'll show you. [VIDEO PLAYBACK] - Remember when only Dad could save the day? Auto emergency braking on the all new Genesis from Hyundai. [END PLAYBACK] AMNON SHASHUA: OK. So this is the camera behind the windscreen, detecting the car in front, or a pedestrian, and it will brake before the collision. Now to show you what this is about-- so that was the commercial. So in a commercial you can show anything you like. So now I'll show you something really from the field. So in 2010 Volvo introduced the first pedestrian detection. So the same thing: detect a pedestrian, and if you are about to collide with a pedestrian the car would brake, apply the brakes automatically. So in 2010 they had about 5,000 journalistic events, where they put a reporter behind the steering wheel, tell the reporter to drive towards a mannequin, toward a doll, and lo and behold the car would brake just before-- a fraction of a second before you hit the doll. But then when you buy the car you can do your own testing. So I downloaded this from the internet; it's a bunch of Polish guys. And it's a bit funny, but you'll actually get a good feeling of what this system does by looking at this clip. OK? So this is automatic emergency braking. Today it works, it avoids accidents up to about 70 kilometers per hour. OK? So now you have a better idea of what I'm talking about. So now let's go into the future. So this was just setting the baseline. OK. What is active safety? Where's computer vision inside this? So now let's look at the next four years. And the idea is to evolve this kind of technology to a point where you can delegate the driving experience to a robotic system. And then the question is, what needs to be done. And this slide shows that there are two paradigms. And the reality is somewhere in between these two paradigms. The right hand side is where we are today. You are based only on sensing. You have a camera. Maybe you also have a radar, or a laser scanner for redundancy. You get the information from the sensors. You have algorithms that try to interpret the visual field and take action in case of an accident, or control the vehicle. On the left hand side is the extreme case, the Google approach, where there is little sensing involved. It's a lot of recording. So you prerecord your drive. Once you have prerecorded the drive, all you need to do is match your sensing to the prerecorded drive. Once you've found the match, you know your position exactly. So you don't need to detect lanes. You know all the moving objects, because the recording contains only stationary objects, so all the moving objects pop out.
So the load on the sensing is much, much smaller than in the case where you didn't do a pre-drive and record. The problem with the recording is that we are talking about tons of data. It's a 360 degree, 3D recording, at several frames per second. So the amount of data is huge. So there are issues of how you manage this, how you record it, and how you update this over time. Because you have to continuously update this kind of data. And reality is going to be somewhere in between. So the first leap that is underway and happening in the next five years is to reach human level perception. Now it sounds very, very ambitious. But there are lots of indications that it is not science fiction. It is really-- there is a very high probability that one can reach this. So in certain areas, like face recognition, certain categorization tasks-- now if you look at the academic achievements, they have surpassed human level perception. I'll spend a few slides on this later. So going from adaptive-- going from driver assist to human level perception, first, we need to extend the list of objects. Not only vehicles and pedestrians, but vehicles at any angle; know about 1,000 different object categories in the scene; know how to predict a path using context, which today is not being used. Detailed road interpretation, knowing about curbs and barriers and guardrails-- it's all the stuff that, when we look at the road, we naturally interpret very, very easily. These are the things that need to be done in order to reach human level perception. And the tool to do this is the deep layered networks, which I'll spend a few slides on in a moment. And the need for context-- so these are examples. For example, path planning. You want to fuse all the information available from the image, not only to look for the lanes, because in many situations you look at an image and you don't see lanes. But a human observer would very easily know where the path is just from looking at the context. In modeling the environment, ultimately every pixel gives you a category. Tell me where this pixel is coming from-- a pedestrian, a vehicle, the inside of a vehicle, barrier, curb, guardrail, lamp post, so forth and so forth. 3D modeling of a vehicle. So put a 3D bounding box around the vehicle so that we can know which side of a vehicle I'm looking at, whether it's the front, or rear, left side, right side, what is the angle. Know everything about vehicles as moving objects and do a lot of scene recognition. I'll give some examples of that later. So just on deep networks-- I know that you have-- you all know about deep networks. I'll just spend a few slides to state what the impact is there, not the impact from the point of view of a scientist, but the impact from the point of view of a technologist. Because there isn't much science behind this. So the real turning point was 2012. 2012, you know, the AlexNet-- they built a convolutional net that was able to work on the ImageNet data set and reach a performance level which was more or less double the performance level of what was done before. This is another network by Fergus. Very, very similar concept: convolution, pooling, convolution, pooling, two or three dense layers, and you get the output. This is the ImageNet data set. You have about 1,000 categories over one million images. And these categories are very challenging. You know, you look at the images of a sailing vessel or images of a husky, the variation is huge.
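The competition metric about to be quoted, top-5 error, is simple to state in code. Here is a minimal Python sketch with a random-scores sanity check; the sizes are illustrative, and this is not the actual ImageNet evaluation code:

import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is NOT among the 5 highest scores."""
    top5 = np.argsort(scores, axis=1)[:, -5:]        # indices of the 5 best classes
    hit = (top5 == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

rng = np.random.default_rng(3)
scores = rng.random((10000, 1000))                   # 1,000 classes, random guessing
labels = rng.integers(0, 1000, size=10000)
print(f"chance top-5 error: {top5_error(scores, labels):.3f}")   # about 0.995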
It's a really very difficult task. In 2011 the top five-- so the task is that you need to give a short list of five categories, and if the correct category is among the top five then you succeeded-- the performance was about 26% error. And this AlexNet reached 16%. So it's almost double the performance. So this caught the attention of the community. It's a big leap from 26% to 16%. Now if you look at what happened since then: so in 2012, for this ImageNet competition, there was one out of six competitors using deep networks. A year later, 17 out of 24 competitors used deep networks. A year later, 31 out of 32 were using deep networks. So deep networks took over, basically. If you look in terms of the performance, the human performance is about 5%. And right now we are at 6%, 5%, with the latest 2015 competitors. People start cheating. I think Baidu was caught cheating on this test. So I think 5% is more or less where things are going. And this is human level perception. Another-- another big success was face recognition. So this is a data set called Labeled Faces in the Wild, which contains pictures of celebrities, where for every celebrity you have pictures, you know, along a spectrum of many, many years. You can see the actor when he was 20 years old and then when he's 70-80 years old. Even for humans, this task is quite challenging, knowing whether two pictures are of the same person or not. And the human level performance is 97.5% correct. Now if you look at techniques not using deep networks, they reached 91.4%. And in 2014 a group at Facebook with Lior Wolf from Tel Aviv University built a deep network to do face recognition and reached 97.3%, which is very, very close to human perception. And since then people have reached 99% on this database. And again, human level perception is 97.5. OK? So the impact for automotive is that networks are very good at multi-class problems. So the more categories you have, the better the performance of the network would be. Very good at using context, imagining or planning a path. So taking an image as an input, and the output would be the path. And you're cutting short all the processes of looking for lanes and that kind of algorithm. Networks are ideal for pixel level labeling: for every pixel, give me a category. And you can use the networks for sensor integration, for determining the control of the vehicle by fusing a lot, a lot of information coming from various cameras. The challenge of using deep networks is that deep networks are very, very large. They're not designed for real time. The networks that you find in academic papers, and the successes, are for easy problems. The problems that I've shown right now, the ImageNet, the face recognition, are considered relatively easy problems in the context of interpreting the image for autonomous driving. So let me show you what the things are that one can do. Let's start with the path planning. So in this clip that I'll show you here, the purpose of the network is to determine the path. So this is the green line. Now these clips are from scenes where it would be impossible to detect lanes. Because there are simply no-- simply no lanes. If you look at this, any lane detection system would find nothing in this kind of scene.
Yet when you look at this image, you have no problem in determining where the path is, because you're looking at the entire image context. And this is what the network is doing. It's being fed-- the input layer is the image, the output layer is this green line. Or for example, look at this urban setting. There are no lanes in an urban setting. Yet the system can predict where the path is by fusing information from the entire context. These are roads in California where they have these reflectors called Botts' dots. It's almost impossible to reliably, you know, fit lanes to this kind of information. Yet if you look at this holistic path planning, it can reliably tell you where the path is. Let's look at free space. So free space-- the idea is, when you want to do autonomous driving you need to know where not to drive. Right? You don't want to drive towards the curb. It's not only that you don't want to hit other moving objects. That's the easy part. You don't want to hit a barrier or a guardrail. So you want to know where the free space is. So you can think of a network that for every pixel will give you a label. And let's now focus only on the label of road versus not road. So all the green pixels are road. Everything else is not road. So you can see that the green is not going over the curb, which is-- which is nice. But let's have it run a bit more. And then I'll stop it at the place where you'll see the power of context. So let's assume I stop it here. Now look at the sidewalk there. The color of the sidewalk and the color of the road is identical. The height of the curb is about one centimeter. So it's not the height here, the geometry-- it's basically the context. The network figured out, because there is a parked car there, that that part is not part of the road. So in order to make this judgment correctly, one needs to not just look at a small area around the pixel and decide whether it's road or not road. One needs to collect information from the entire image. This is the power of context. And this is something that the network can do. You can see here the blue and red lines. Red means it's on a vehicle. Blue, that it's on a physical barrier. So if I run this back here-- and this is done frame by frame. So it's a single frame thing. Same thing here: this height is one or two centimeters. The color of the sidewalk and the color of the road are identical. So being able to make the correct judgment here is very, very challenging. And this is where a network can succeed. Here the network also predicts-- this is the code for being a curb. The red is the side of a vehicle or the front of a vehicle. And in the next one it predicts that this is part of a guardrail-- the coding of this is for a guardrail. So the system has about 15 categories: guardrail, curb, barrier, and so forth. Let's keep the questions for later. And so forth. So this is one area; we call it semantic free space. So for every pixel in the scene, tell me what it is. Of course, I'm interested-- first and foremost I'm interested to know where the road is, and then, at the edges where the road ends, to know what the label is. Is it a side of a vehicle, front of a vehicle, rear of a vehicle? Is it a curb, barrier, guardrail, and so forth? And this, again, is done by a deep network. I'll skip this one. And then you can apply this with cameras from any angle. So this is a camera looking at a corner, looking at 45 degrees on the right. So the system can know where the free space is.
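As an aside before the next clips, per-pixel labeling of the kind just described can be sketched generically as a fully convolutional network that outputs a class score for every pixel. This is emphatically not Mobileye's network; the PyTorch framework, the layer sizes, and the mapping of class 0 to "road" are all illustrative assumptions (only the 15-category count comes from the talk):

import torch
import torch.nn as nn

NUM_CLASSES = 15  # road, curb, barrier, guardrail, vehicle sides, ... (as in the talk)

# toy fully convolutional labeler: preserves resolution, one score map per class
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, NUM_CLASSES, kernel_size=1),       # per-pixel class scores
)

frame = torch.randn(1, 3, 240, 320)                  # stand-in for a camera frame
scores = model(frame)                                # shape (1, 15, 240, 320)
labels = scores.argmax(dim=1)                        # one category per pixel
free_space = labels == 0                             # suppose class 0 means "road"
print(labels.shape, free_space.float().mean().item())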
This is a camera from the side, with a fish eye. Again, using the same kind of technology, the system can know where the free space is. Same thing here. Here as well, day or night. 3D modeling-- 3D modeling is to be able to put a bounding box, a 3D bounding box, around the vehicle. And the color coding here is that green is front, red is rear, blue is the right hand side, and yellow is the left hand side. If you let this run-- all right. Now the importance of putting a 3D bounding box around the vehicle is that now you can place a camera at any angle. So it's not only a camera looking forward, but a camera at any angle, because the way a vehicle is defined is invariant to the camera position. So this is kind of a preparation for putting cameras all around the vehicle, at 360 degrees. Scene recognition, for example, being able to know that this is a bump, is also done by a network that takes an image and outputs where the bumps are. The same thing-- same thing here. More complicated than that is knowing where the stop line is. So when you go and detect traffic lights-- so detecting traffic lights is the easy problem. A more complicated problem is to know the relevancy of the traffic lights, which traffic light is relevant to which direction. The third one, the most difficult problem, is to detect the stop line. The problem with the stop line is that when you see the stop line it's a bit too late. You see the stop line 20-30 meters away. So it's too late to start stopping and have a smooth stop. You want to predict where the stop line is 60-70 meters away. So here, you want your algorithm, or your network, to understand that you are approaching a junction and start estimating where the stop line should be, so it can start slowly reducing your speed, such that by the time you see where the stop line is you have already reduced your speed considerably. I'll skip this. Knowing lane assignment-- knowing how many lanes there are and which lane you are in is also done by a network. So the network will give a probability of whether this is a lane, this is a lane. For example, it knows that this is not a lane. It has here red, zero probability. So as you can see here-- I'll skip this one. So these networks-- for every task there is a network. And these networks are quite sophisticated in accessing and integrating context. I'll skip this one with the traffic light. So multiple cameras: this is how it looks. The red ones are three cameras behind the windscreen. One is about 180 degrees. The other one is about 50. The third one is about 25 degrees. And then there are another five cameras around the car that give you all 360 degrees. And the first launch of this kind of configuration, in a series-produced car, is going to be 2016. So I'm not talking about science fiction. This is how images look from some of these cameras. So let me show you a first clip of automated driving. This is kind of a funny-- funny clip. This is an actor who played a major role in Star Trek. So I'll not say his name. Let's see whether you can identify him yourself. And he has a program for kids called Reading Rainbow. So this program is 20 years old. And he came to Israel and he wanted to drive the autonomous vehicle that we have for his kids' program. So he was driving my car. So my car is autonomous. I can drive from Tel Aviv to Jerusalem without touching the steering wheel. It's-- I do that all the time. So he was driving it. And it's a bit funny.
So let's-- but you'll get a feeling of what this is. So let's run this. It's two minutes. [VIDEO PLAYBACK] - Yes. They can. That's because technology companies, like Mobileye here in Israel, are about to introduce self-driving technologies to the world. AMNON SHASHUA: You know who he is? - In the not too distant future, just like in a science fiction movie, a driver will be able to hop in a car, tell it where you want it to go, and voila, the car will do the rest. So right now I'm driving like everybody does. My hands are on the steering wheel and my foot is on the brake, or the pedal, as required. And I'm in control of the car. But when I take my foot off the pedal and do this, now the car is driving itself. Wow. This really is amazing. I feel really safe with the car doing all of the driving. OK. Now watch this. And this is something that no one should ever do in a regular car, ever. Wow. That was freaky. [END PLAYBACK] AMNON SHASHUA: OK? So does anyone from the young people know who he is? So this is Geordi, from Star Trek. He had this visor. He was blind. He had a visor. OK. So let's spend a few minutes to talk about what the impact of autonomous driving is and how it's going to unfold. So this is far from science fiction. It's actually unfolding as we speak. The first hands-free driving on highways is coming out now. The first one is Tesla. They have already launched-- they made this public a week or two ago. Their first beta drivers are driving with the system. And I presume within a month it will also be installed for all other drivers. And this is-- you can do hands-free when driving on a highway, unlimited speed. So you can drive at highway speeds, let go of the steering wheel, and the car will drive. GM already announced that in the middle of 2016 they will have Super Cruise, more or less the same kind of functionality. Audi also announced 2016. And these are just the first comers. We are working with about 13 car manufacturers that, within the next three to four years, will have this kind of capability. So this will be in the mainstream. Now, what I put there in red is that the driver still has primary responsibility and has to be alert. That means that the technology is not perfect. It could make mistakes. Therefore, the driver has to be-- is still the primary-- has the primary responsibility. So at this stage there's no disruption here. It's just a nice feature to have. For the car industry, this is the first step to start practicing towards reaching autonomous driving. The second step starts in 2016, and this is with the eight cameras that I showed you a slide before. Here, the car can drive autonomously from highway to highway. So on-ramps, off-ramps are done autonomously. So you, with Google Maps or whatever navigation program, chart your route, and until the car reaches city boundaries it will go autonomously. It will switch from highway to highway and do that autonomously. Still, the driver has primary responsibility, and is alert. So there's no-- nothing here is transformative. It's a nice feature. Again, it's part of a phased approach of the car industry to start practicing. Starting from 2018 would come the first small disruption. The first small disruption is that technology would reach a level in which the driver is responsible-- the driver must be there, but not necessarily alert. So it means that the driver is an attendant. The driver is monitoring, just like a pilot sitting in an airplane while the plane is on auto-pilot.
The driver needs to be there in case there is a problem. The system will give a grace period of time until the driver needs to take back control. So it's not taking back control in an instant-- immediately. And so this transition from primary responsibility to monitoring, like in aviation, will be the first disruption, the beginnings of a disruption. So let's try to imagine what kind of disruption this is. So let's take Uber as an example. So today, you own a car. You have free time, say between 3:00 PM and 5:00 PM. So you take your car, turn on Uber, take passengers, and earn some money. That's Uber today. Now let's look at 2018-2019. You have zero skills and you don't have a car. All you have is a driver's license. So you are willing to be an attendant. So you say, OK, now I have free time. An Uber car would come with an attendant. You switch places with the attendant. You sit behind the steering wheel and you do nothing. You don't control the car. You don't control the passengers who are being taken by the car. You simply sit there. Zero skills, therefore your payment is very, very small. So now these cars can drive 24/7, because the attendant can be replaced every hour or so. So here we have another business model, which makes this public transportation, the Uber type of public transportation, much more powerful than it is today. So this is kind of the beginning of disruption. What will be the next step? The next step, 2020-2022: imagine that a driverless car can drive without passengers. So this is one step before you can allow a car to drive autonomously. Without passengers means that all you need to prove is that the car, your car, will not hit other cars or pedestrians. But if it hits infrastructure, nobody gets killed, because there are no passengers in the car. No passengers meaning nobody in the car. Now this is already a major disruption, because what it means is that the household does not need to own multiple cars. One car is enough. I drive to work with the car. I send the car back home. It takes my wife to work, comes back home. You get the picture. So this is kind of the beginning of a major disruption. Then about 2025-2030, with sufficient experience with mapping data and car-to-car communication, one can imagine how these cars would become completely autonomous. And that is where the major disruption happens. OK? So this is autonomous driving. Let me go to the second part, about wearable computing. And then we can take questions. So this will be much shorter. So again, computer vision, but now the camera is not beside us, like in the car. The camera is on us. Now if the camera is on us, the first question that you would ask is, who needs a camera to be on you? Right? So the first market segment for something like this is the blind and visually impaired. So the way to imagine this: you are visually impaired or a blind person, so you don't see well or you don't see at all. So it's very, very difficult for you to negotiate the visual world. You cannot read anything unless it's a few centimeters from your eye. You cannot recognize people unless they start talking to you, so you can recognize their voice. You cannot cross the street because you don't see the traffic light. You cannot get on a bus because you don't know what the bus number is. So basically you are very, very constrained, very limited. Now let's assume that you have a helper standing beside you.
Now this helper is relatively intelligent and has good eyesight. Now the helper looks at you, sees where you are pointing your hands, for example, or pointing your gaze, looks at the scene, understands what kind of information you want to know, and whispers the information in your ear. So say you want to catch a bus. You know that the bus is coming because you hear the bus, maybe you see a silhouette. So you look in that direction. The helper looks at the bus, sees that there is a bus, tells you what the bus number is. You want to cross the street. You know the traffic light is more or less there. But you cannot-- you don't know what the color of the traffic light is. So the helper looks at your gaze, sees that there's a traffic light there, tells you it's a green light. You're opening a newspaper. You point somewhere on the newspaper, and the helper would read you the article. Or there is a street name. You point towards the street name. The helper would look at the scene, understand that there is text in the wild, and simply read you the street name. A familiar face appears, and the helper will whisper, you know, Joe is now in front of you. And so forth. So if you now replace this helper with computer vision, you can imagine how this could help someone who is visually impaired. So let me show you-- so first of all, the number of visually impaired is quite big. The number of blind people in the US is about 1.5 million. That's not big. The number of visually impaired-- and these are people whose ailment cannot be corrected through lenses-- is about 26 million. So this is a sizable number. Worldwide it's above 400 million people who are visually impaired. And they don't have much technology to help them. So this is what OrCam is doing. It's a camera which clips onto eyeglasses. And there is a computing device, which you put in your pocket. And the way you interact with the device is with your hand, with your finger. Because the camera is on you, it can also see your hand. Once you point, the camera starts to extract information from the scene and talks to you through an earpiece. So let's look at the clip. [VIDEO PLAYBACK] - Hi. I'm Liette and I'm visually impaired. I want to show you today how this device changed my life. - Massaryk. - Great. Let's go there. - Red light. Green light. 50 shekel. - 50 shekel. Let's buy some coffee. - Breakfast. Bagel plus coffee with cream cheese [INAUDIBLE]. [END PLAYBACK] AMNON SHASHUA: OK? So you get the idea. So we started in 2010. By 2013 we already had a prototype working. And we had a visitor, John Markoff from the New York Times, and he came and he wrote a very nice article about what the company is doing. And we thought at that time that it would be good to launch the website of the company and try to get a number of, say, 100 first customers, so that we could start experimenting, do field studies with a prototype device. So we launched the website. We wrote that the device costs $2,500. That was June 2013. And the first 100 people who purchased the device would receive the device in September. So within an hour those 100 devices were sold. And then we kept a waiting list, which today is about 30,000. And we started shipping the devices about a month ago. So in the last year this device was with about 200 people. And we got a lot of feedback from real users and improved. And let me show you some real users. So this is Marcia from Brazil. The device at the moment only works in English. Later we'll add more languages.
And so she's being trained to use the device. And this is a short clip of about two minutes. And, you know, watch her body language. And also she explains how she copes with her disability, especially how she distinguishes between different money notes. They're all green. So how do you distinguish between them? So let's have a look at this. So the device is reading the newspaper for her. [VIDEO PLAYBACK] - [INAUDIBLE] - $50. - $50. Cincuenta dollars. - Cincuenta. Let's see if [INAUDIBLE]. - It green. All green and I put mark color, yellow, green, orange. Different note. [INAUDIBLE] - $20. [? Genia ?] - [INAUDIBLE] [END PLAYBACK] AMNON SHASHUA: OK. Here's a recent one-- from CNN. It was aired a month ago. It also gives a bit more information about the device. Let's run this. It's again two minutes. [VIDEO PLAYBACK] - Two weekends ago I sat down and read The New York Times. I haven't done that in maybe 30 years. My wife came down. I had a cup of coffee. I'm reading The New York Times and she was crying. - Just being able to read again is emotional for Howard Turman. He started losing his vision as a child. His new glasses don't fix his eyes but they do the next best thing. - Put on my glasses, it recognizes the finger, snaps the picture. Now it just reads. - The glasses have a camera that recognizes text and can read the world to him. - Pull here. - The technology is called OrCam and Turman says it gives him a sense of normalcy. - Even finding out that Dunkin' Donuts has a donut I never tried was exciting. - Dunkin' Donuts. - It's a clip-on camera. So a camera that you can clip onto any eyeglasses. And you have here a computing device, which you can put in your pocket. And the way it interacts, it's with a hand gesture. For example, it's written there, rental and tours. - Rentals and tours. - It's not perfect though. It uses a pretty bulky cable and sometimes it needs a few tries to get things right. - It doesn't read script because everybody's handwriting is different. So it doesn't do cursive very well at all. - OrCam has a harder time in bright light, or in tougher situations, like signs on windows. - [INAUDIBLE] U donuts hours of operation. Low PM. Pound's PM. 9:00 PM. How was your service today? - Shashua says improvements are on the way. Where do you see this technology going over the long term? - Reading, recognizing faces, recognizing products, is only the beginning. Where we want to get is complete visual understanding at the level of human perception, such that if you are disoriented you can start understanding what's around you. For example, where's the door? The door is there. Where is the window? Where is an opening in the space around me? OK? This is face recognition. So again, one of the first 100. - Teach OrCam to recognize anybody? - Yep. - Who does it know? - Libby, my mother. - You want to show me? - Yep. OK. - All right. [INAUDIBLE] Let's see. [INAUDIBLE] - Libby. [END PLAYBACK] AMNON SHASHUA: OK? So that's also face recognition. Last two slides. We also started providing the device to research groups. And this is one of-- this is a paper in ARVO where they took eight visually impaired people and gave them the device for one month, and then measured the change in quality of life. And how do they measure the change in quality of life? They interview them. And seven out of the eight reported a significant change in quality of life. Now they sent us some of the interviews. So on the next-- here, I'm showing you part of the interview.
And what's interesting about this interview is that there is a trick question. The interviewer, after she tells him how the device is, you know, lifesaving and so forth, tells her, well, the device is very expensive. It's a few thousand dollars. Is it worth it? So it's one thing to get something for free and say it's very, very good. Another thing is, if it's going to cost you thousands of dollars, is it worth it? And let's hear her answer, which is very nice. [VIDEO PLAYBACK] - In the first few days I had the OrCam I was in total awe of it because for the first time I was able to open mail and read it, instead of having my husband read my mail. And I was able to go to a restaurant and actually read the menu and order myself with the waitress. And that was exciting. When you can't do something for such a long period of time, the OrCam was incredible. - Believe is what the estimate is. Do you think such a high price would be something people would be willing to pay for a device like this? Do you think it's marginally worth it right now? - I think you're going to find that that's going to be on a case by case basis. You know, people who have money, there's certainly no problem $2,000. I don't have money. I am low income. But I would save my money, scrape it together in order to get it at $2,000. [END PLAYBACK] AMNON SHASHUA: So that's interesting. Where is it going? So there are two lines of progress. One, within this existing niche, is to make the camera understand the visual field at higher levels of detail. So one of the things that we are now working on is what we call chatting mode. So it's like the image annotation type of experiment, or ImageNet together with natural language processing. Say you are visually impaired or blind and you're disoriented. You don't know where you are. So you would like the device to tell you every second what it sees. I see here Tommy. I see here chairs. I see here another person. I see here a wall, an opening, a painting, blah, blah, blah, blah, blah, until you get back your sense of orientation. So you want the device to be able to have, say, several thousands of categories, like in ImageNet, together with image annotation capability. The kind of stuff that people are now writing articles about. And being able to do this at the frame rate of, say, once per second. So wherever I'm looking, tell me what you see. This is one thing. Another thing is to have natural language processing, NLP ability. For example, if you are looking at an electricity bill, the system would know that you're looking at an electricity bill and give you just the short version-- what is the amount due, for example. The system will tell you: you are looking at an electricity bill. The amount due is such and such. So it's putting more and more intelligence into the system. So this is one area. Another area is to go for a wearable device for people with normal sight. So here we're talking about, you know, real wearable computing. So this Apple Watch is wearable computing. But it doesn't do much computing. Right? It displays, you know, my text messages, emails, you know, measures certain biometrics. But that's not, you know, the holy grail of wearable computing. The holy grail of wearable computing is: assume that you had Siri with eyes and ears. So you had a camera on you that is observing the scene all the time and providing you real time information whenever you need the information, like the people that you meet. What were the recent tweets of those people that you met?
What is common between you and them based on Facebook and LinkedIn and so forth? So knowing more about the people you meet. Knowing more about the stuff that you are doing. And creating an archive of all what you are doing throughout the day. And this is a device like this. This is how it looks like. We call it Cassie. So this is a real device. It works continuously for about 13 hours. So you have a camera working continuously for 13 hours. And the purpose of this camera-- so the way-- you put it like this. OK? So the purpose of the camera is not to take pictures. It doesn't store any picture. The purpose of the camera is to be a sensor, is to interpret the visual world and provide information in real time. And if everything goes well, we'll start launching this within six months from now. So this is the next big thing, to go into wearable computing, to go into a domain in which a camera is on you and processing information all the time. Unlike now, a camera on your smartphone in which you take a picture on demand. It's not working for you all the time. Here it's working for me all the time. All the time it's viewing the visual field. Whenever it finds something interesting it will send it to my smartphone, like people that I meet and other activities that I do. So this will be the beginning of real wearable computing. So wearable computing with sensing, with the ability to hear and listen, to hear and see, and process information in real time. This is the next thing that-- the next big challenge that we are working on. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_91_Tomaso_Poggio_iTheory_Visual_Cortex_Deep_Networks.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TOMASO POGGIO: So I'll speak about i-theory, visual cortex, deep learning networks. The background for this is this conceptual framework that we take as a guide to present work in vision in this center-- The idea that you have a phase in visual perception, essentially up to the first saccade-- say, 100 milliseconds from onset of an image-- in which most of the processing is feedforward in the visual cortex. And that top-down signals-- I hate the term feedback in this case, but back projections going from higher visual areas, like inferotemporal cortex, back to V2 and other cortical areas are not active in this first hundred milliseconds. Now, all of this is a conjecture based on a number of data. So it has to be proven. For us it's just a motivation, a guide, to first studying feedforward processing in, as I said, the first 100 milliseconds or so. And to think that other types of theory, like generative models, probabilistic inference that you have heard about, visual routines you have heard kind of from Shimon, are important not so much in the first 100 milliseconds, but later on. Especially when feedback through back projection, but also through movements of the eyes that acquire new images depending on the first one you have seen, come into play. OK. This is just to motivate feedforward. And of course, the evidence I refer to is evidence like-- you have heard from Jim DiCarlo, for the physiology there is quite a bit of data showing that neurons in IT become active and selective for what is in the image about 80 or 90 milliseconds after onset of the stimulus. And this basically implies that there are no big feedback loops from one area to another one. It takes 40 milliseconds to get to V1, and 10 milliseconds or so for each of the next areas. So the problem is, computational vision-- the guy on the left is David Marr. And here it's really where most probably a lot of object recognition takes place, is the ventral stream from V1 to V2, V4, and the IT complex. So that's the back of the head. As I said, it takes 40 milliseconds for electrical signals to come from the eye in the front through the LGN back to neurons in V1. Simple complex cells. And then for signals to go from the back to the front, that's the feedforward part. And on the bottom right, you have seen this picture already. This is from Van Essen, edited recently by Movshon. It's the size of the areas and the size of the connection are roughly proportional to the number of neurons and fibers. So you see that V1 is as big as V2. they both have about 200 million neurons. And V4 is about 50 million, and the inferotemporal complex is probably 100 million or so. Our brain is about one million flies. A fly is around 300,000 neurons or so. A bee is one million. And as I think Jim DiCarlo mentioned, there are these models that have been developed since Hubel and Wiesel-- so that's '59-- that tried to model feedforward processing from V1 to IT. 
And they start with simple and complex cells, this S1 and C1, simple cells being essentially equivalent to Gabor filters, oriented Gabor filters in different positions, different orientations. And then complex cells that put together the signals from simple cells of the same orientation preference, but different position, and so have some more position tolerance than simple cells. And then a repetition of this basic scheme, with S2 cells that are representing more complex-- let's call them features-- than lines. Maybe a combination of lines. And then C2 cells again pulling together cells of the same preference in order to get more invariance to position. And there is evidence from the old work of Hubel and Wiesel about simple and complex cells in V1. So S1 and C1, although the morphological identity of complex and simple cells is still an open question-- you know, which specific cells. We can discuss that later. But for the rest, this hierarchy continuing in other areas, like V2 and V4 and IT, this is one conjecture in model like this. And we, like other ones before us, modeled back 15 years ago this different area. It's V1, V2, and V4 with this kind of model. And the reason to do so was not really to do object recognition, but it was to try to see whether we could get the physiological properties of a different sense in such a feedforward model, the ones that people have had recorded from and published about. And we could do that to reproduce the property. Of course, some of them we put in properties of simple and complex cells. But other ones, like how much invariance to position there was in the top level, we got it out from the model consistent with the data. One surprising thing that we had with this model was that, although it was not designed in order to perform well at object recognition, it did actually work pretty well. So the kind of things you have to think about this is rapid categorization. You have seen that already. And the task is, for each image, is there an animal or not? And you can kind of get the feeling that you can do that. In the real experiment, you have an image and then a mask, another image. And then you can say yes, there is an image, or no, there is not. This is called rapid categorization. It was introduced by Molly Potter, and more recently Simon Thorpe in France used it. And it's a way to force the observer to work in a feedforward mode, because you don't have the time to move your eyes, to fixate. There is some evidence that the mask may stop the back projections from working. So this is a situation in which you could compare human performance to these feedforward models, which are not a complete description of vision anyway, because they don't take into account different eye fixation and feedbacks and higher processes, like-- like I said, probabilistic inference and routines. Whatever it happens, very likely in normal vision, in which you have time to look around. So in this case, this d prime is a measure of performance, how well you're doing this task. And you can see, first of all, the absolute performance, 80% correct on a certain database. This task, animal no animal, is similar between the model in humans. And images that are difficult for people, like images in which there is a lot of clutter, the animals are small, are also difficult for the model. And the easy ones are easy for both. So there is a correlation between models and humans. 
This does not say that the model is correct, of course, but it gives a hint that models of this type capture something of what's going on in the visual pathway. And Jim DiCarlo spoke about a more sophisticated version of these feedforward models, including training with back propagation, that gives pretty good results also in terms of agreement between neurons and units in the model. So the question is why these models work. They're very simple, feedforward. It has been surprisingly difficult to understand why they work as well as they do. When I started to work on this kind of thing 15 years ago, I thought this kind of architecture would not work. But then they worked much better than I thought. And if you believe deep learning these days, which I do-- for instance, in performance on ImageNet-- my guess is they work better than humans, actually, because the right comparison for humans on ImageNet would be the rapid categorization one. So they present images briefly. Because that's what the models have-- just one image. No chance of getting a second view. Anyway, that's a more complex discussion that has to do also with how to model the fact that in our eyes, in our cortex, the resolution depends on eccentricity. It's a pretty rapidly decaying resolution as you go away from the fovea, and that has some significant implications for all these topics. I'll get to that. What I want to do today is present one way to look at this, to try to understand how these kinds of feedforward models work. i-theory is based on trying to understand how models that are made of simple and complex cells and can be integrated in a hierarchical architecture can provide a signature set of features that are invariant to transformations observed during development, and at the same time keep selectivity. You don't lose any selectivity to different objects. And then I want to see what they say about deep convolutional learning networks, and look at some of the beginnings of a theory about deep learning. And then I want to look at a couple of predictions, particularly related to eccentricity-dependent resolution coming from i-theory, that are interesting for psychophysics and modeling. And then it's basically garbage time, if you're interested in mathematical details and proofs of theorems and historical background. OK. Let's start with i-theory. These are the kind of things that we want, ideally, to explain. This is the visual cortex on the left. Models like HMAX, or feedforward models. And on the right are the deep learning convolutional networks, a couple of them, which basically have convolutional stages very similar to S1, and pooling stages similar to C1. But quite a lot of those layers. How many of you know about deep learning? Everybody, right? OK. These are the kind of questions that i-theory tries to answer-- why these hierarchies work well, what the visual cortex really is, what is the goal of V1 to IT. We know a lot about simple and complex cells, but again, what is the computational goal of these simple and complex cells? Why do we have Gabor tuning in the early areas? And why do we have quite generic tuning, like in the first visual area, but quite specific tuning to different types of objects like faces and bodies higher up?
The main hypothesis behind i-theory is that one of the main goals of the visual cortex-- it's a hypothesis-- is to compute a set of features, a representation of images, that is invariant to transformations that the organism has experienced-- visual transformations-- and remains selective. Now, why is invariance important? A lot of the problem of recognizing objects is the fact that I can see Rosalie's face once, and then the next time it's the same face, but the image is completely different, for it's much bigger now because I'm closer, or the illumination is different. So the pixels are different. And from one single object, you can produce in this way-- through translation, scaling, different illumination, viewpoint-- you can produce thousands of different images. So the intuition is that if I could get a computer description-- say, long vectors of features of her face-- that does not change under these transformations, recognition would be much easier. Easier means, especially, that I could learn to recognize an object with much fewer labeled examples. Here on the right you have a very simple demonstration of what I mean, an empirical demonstration. So we have at the bottom different cars and different planes. And there is a linear classifier which is trained directly on the pixels. A very stupid classifier. And you train it with one car and one plane-- this is on the left-- or two cars, two planes. And then you test on other images. And as you can see, when it's trained with the bottom examples, which are at all kinds of viewpoints and sizes, the performance of the classifier in answering is this a car or is this a plane, it's 50%. It's chance. It does not learn at all. On the other hand, suppose I have an oracle which is-- I will conjecture is visual cortex, essentially-- that gives you the feature vectors for each image, which are invariant to these transformations. So it's like having images of cars in this line B. They're all in the same position, same illumination, and so on, and the same for the planes. And I repeat this experiment. I use one pair-- one car, one plane-- to train, or two cars, two planes, and I see immediately that when tested on new images, this classifier is close to 90%. So much better. So having an invariant representation can help a lot. That's the empirical, simple demonstration. And you can prove theorems saying the same thing, that if you have an invariant representation, you can have a much lower sample complexity, which means you need much fewer labeled examples to train a classifier to achieve a certain level of accuracy. So how can you compute an invariant representation? There are many ways to do it. But I'll describe to you one which I think is attractive, because it's neurophysiologically very plausible. The basic assumption I'm making here is that neurons are very slow devices. They don't do a lot of things well. One of the things they do probably best is high-dimensional dot products. And the reason is that you have a dendritic tree, and in cortical neurons you have between 1,000 and 10,000 synapses. So you have between 1,000 and 10,000 inputs. And each input gets essentially multiplied by the weight of the synapse, which can be changed during learning. It's plastic. And then the post-synaptic depolarization or hyperpolarization, so the electrical changes to the synapses, get all summated in the soma. So you have a sum over i of Wi times Xi, where the Xi are your inputs and the Wi are your synaptic weights. That's a dot product. And this happens automatically, within a millisecond.
So it's one of the few things that neurons do well. It's, I think, one of the distinctive features of neurons of the brain relative to our electronic components, that in each neuron, each unit in the brain, there are about 10,000 wires getting in or out. Whereas in transistors or logical units in our computers, the number of wires is more like three or four. So this is the assumption, that these kinds of dot products are easy to do. And so this suggests this kind of algorithm for computing invariance. Suppose you are a baby in the cradle. You're playing with a toy-- it's a bike-- and you are rotating it, for instance. For simplicity. We'll do more complex things. The unsupervised learning that you need to do at this point is just to store the movie of what happens to your toy. For instance, suppose you get a perfect rotation. This is a movie up there. There are eight frames. Yeah. You store those, and you keep them forever. All right. So when you see a new image, it could be Rosalie's face, or this fish. And I want to compute a feature vector which is invariant to rotation, even if I've never seen the fish rotated. What I do is, I compute a dot product of the image of the fish with each one of the frames. So I get eight numbers. And the claim is that these eight numbers-- not their order, but the numbers-- are invariant to rotation of the fish. So if I see the fish now in a different rotation angle-- suppose it's vertical-- I'd still get the same eight numbers. In a different order, probably. So these are eight numbers, and as I said, they are invariant to rotation of the fish. There are various quantities that you can use to represent compactly the fact that they are the same independent of rotation. For instance, the probability distribution-- the histogram-- of these values does not depend on the order. And so if you make a histogram, these should be independent of rotation, invariant to rotation. Or moments of the histogram, like the average, the variance, the moment of order infinity. And for instance, the equation for computing a histogram is written there. You have the dot product of the image, the fish, with one template, the bike Tk. You have several templates, not just one. And Gi is the element of the rotation group. So you get various rotations of it, simply because you have observed that. You don't need to know the rotation group. You don't need to compute that. These are just images that you have stored. And there can be different thresholds of simple cells. And sigma could be just a threshold function, for instance. As it turns out-- I'll describe this later-- the sum is the pooling. But sigma, the nonlinearity, can be, in fact, almost anything. This is very robust to different choices of the nonlinearity and the pooling. Here are some examples in which now the transformation is translation that you have observed for the bike. And if I compute a histogram-- from more than eight frames, in this case-- I get the red histogram for the fish, and you can see the red histogram does not change, even if the image of the fish is translated. Same for the blue histogram, which is the set of features corresponding to the cat. It's also invariant to translation. But it's different from the red one. So these quantities, the histograms, can be invariant of course, but also selective, which is what you want. In order to have a selectivity as high as you want, you need more than one template. And there are some results about how many you need.
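Before turning to how many templates you need, here is a minimal sketch of the algorithm just described. This is my own illustration, not code from the lecture: to keep the invariance exact rather than approximate, it uses the compact group of 90-degree rotations (via np.rot90) instead of arbitrary rotations, and random arrays stand in for the bike template and the fish.

```python
import numpy as np

def stored_frames(template):
    # Unsupervised "memory": the orbit of one template under the group
    # of 90-degree rotations -- four frames of the observed movie.
    return [np.rot90(template, k) for k in range(4)]

def signature(image, frames, bins):
    # Dot products of the new image with every stored frame...
    dots = [np.sum(image * f) for f in frames]
    # ...summarized by a histogram, which discards the order of the values.
    hist, _ = np.histogram(dots, bins=bins)
    return hist

rng = np.random.default_rng(0)
template = rng.standard_normal((8, 8))   # the "bike" the baby played with
fish = rng.standard_normal((8, 8))       # a never-before-seen object
frames = stored_frames(template)
bins = np.linspace(-64.0, 64.0, 9)

s0 = signature(fish, frames, bins)
s1 = signature(np.rot90(fish), frames, bins)  # the same fish, rotated
print(np.array_equal(s0, s1))  # True: the histogram is invariant
```

Rotating the fish just permutes the stored-frame dot products, so the histogram is unchanged; that is the whole trick. For translation and scale, which are not compact groups, the same recipe holds only over a finite pooling range, which is the role the complex cells discussed below play.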
I can go into more details of this. But essentially, you need a number of templates-- of templates like the bike, in your original example-- that is logarithmic in the number of images you want to separate. For instance, suppose you want to be able to distinguish 1,000 faces, or 1,000 objects. Then the number of templates you need is on the order of log 1,000. So it does not increase so much. Yeah. So there are two things, one of which you implied. The reason I spoke about rotation in the image plane is because rotation is a compact group. So you never get out. You come back in. Translation, you can-- in principle, mathematically-- go between plus infinity and minus infinity. Of course it does not make sense, but mathematically this means that it's a little bit more difficult to prove the same results in the case of translation and scale. But we can do it. That's the first point. The second one is the combinatorics of different transformations. It turns out that one approach to this is to have what the visual system seems to have, in which you have relatively small ranges of invariance at different stages. So that at the first stage, say in V1, you have pooling by the complex cells over a small range of translations, and probably scale. And then at the second stage you have a larger range. I'll come to that later. But it's a very interesting point. I'll not go into this. These are technical extensions to these partially observable groups, these non-compact groups, to the non-group transformations where you have approximate invariance to rotations in 3D, or changes of expression, and so on, and then to what happens when you have a hierarchy of such modules. I'll say briefly something about each one. One is that if you look at the templates that give you simultaneous-- so what we want to do, we want to get scale and position invariance. And suppose you want templates that maximize the simultaneous range of invariance to scale and position. It turns out that Gabor templates, Gabor filters, are the ones that do that. So that may be one computational reason for why Gabor filters are a good thing to have in processing images. For getting approximately good invariance to non-group transformations, you need to have some conditions. The main one is that the template must transform in a similar way to the objects you want to compute invariance for, like faces. And then there is the question of these properties being true for a hierarchy of modules. Think of this inverted triangle like a set of simple cells at the base, and one complex cell, the red circle at the top. And so the architecture that we're looking at is simple complex. This would be like V1. And next to it, another simple complex module. This is all V1. And then you have V2 in the second layer, that is getting the input from V1. And you repeat the same thing, but on the output of V1. This is exactly like a deep learning network. It's like visual cortex, where you have different stages and the effective receptive fields increase as you go up, as you see here. So this would be the increase in spatial pooling-- so invariance-- and also, as I mentioned-- not drawn here-- in scale. Pooling over size, scale. And you can show that, if the following is true, that-- let me see. Is this animated? No. What you need to have-- and a number of different networks, certainly the ones I described, have this property-- is covariance. So suppose you have an object that translates in the image. OK. What I need is that the neural activity-- the red circles at the first level-- also translates. This is covariance.
So what happens is the following. Suppose the object is smaller than those receptive fields-- in this drawing it is as big, but suppose it's smaller. Then if you translate it within one of those receptive fields, going from one point to another, because each one has invariance to translations within the receptive field-- it's pooling over them-- translation within the receptive field will give the same output. You will have invariance right there. But suppose you have one image, and then in the next one the object moves to a different receptive field, or gets out of the receptive field. Then you don't have invariance at the first layer. But if you have covariance-- the neural activity moves-- at the layer above, you may have invariance under that receptive field. In other words, in this construction, if you have this covariance property, then at some point in the network, one of these receptive fields will be invariant. Is that-- AUDIENCE: Can you explain that again? TOMASO POGGIO: Yeah. The argument is-- suppose I have an object like this. I have an image. And then I have another image in which the object is here. Obviously the response at this level-- the response of this cell will change, because before it saw this object. Now, there are these other cells that see it. So the response has changed. You don't have invariance. However, if you look at what happens, say, at the top red circle there, you will see some activity at the top red circle in the first image here, because it was activated by this. And in the second case, we see some activity over there, which should be equivalent. And under these receptive fields, translations will give rise to the same signature. Under this big receptive field, you have invariance for translation within it. So the argument is that either you have invariance at one layer, because the object just moved within it, and then you are done. It's invariant, and everything else is invariant. Or you don't have invariance in this layer, but you will have it at some layer above. So in a sense-- if you go back to this-- I'll make this point later. But if you go back to this algorithm, the basic idea is that you want to have invariance to rotation. And so you average over the rotations. But suppose you want to have an estimate of rotation, but you're not interested in identity. Then what you do is, you don't pool over rotation. You pool over different objects at one rotation. So you can do both. All right? AUDIENCE: My question was more physiological than theoretical. TOMASO POGGIO: Yeah. Physiological-- we had done experiments long ago in IT with Jim DiCarlo, Gabriel Kreiman. And from the same population of neurons, we could read out identity, object identity, invariant to scale and position. And we could also read out position invariant to identity. And-- AUDIENCE: The same from the-- TOMASO POGGIO: Same population. I'm not saying the same neuron, but the same population of 200 neurons. And so you can imagine that you could have different situations. One could be that some of the neurons are only conveying position, and some others are completely invariant. And when you read out with a classifier, it will work. Or you have neurons that are already combining this information, because the channels-- either way. OK, let me do this, and then we can take a break. I want to make the connection with simple and complex cells.
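Before making that connection, here is a minimal numerical illustration of the covariance argument just given. This is my own sketch, not the speaker's code: max pooling over non-overlapping 1-D windows stands in for complex cells, and a single nonzero pixel stands in for the object.

```python
import numpy as np

def c_layer(responses, width):
    # Complex-cell pooling: max over non-overlapping windows of the input.
    n = len(responses) // width
    return np.array([responses[i*width:(i+1)*width].max() for i in range(n)])

def network(image):
    l1 = c_layer(image, 4)  # layer 1: four receptive fields of width 4
    l2 = c_layer(l1, 4)     # layer 2: one receptive field over all of layer 1
    return l1, l2

def put_object(pos, size=16):
    img = np.zeros(size)
    img[pos] = 1.0           # a tiny "object" at position pos
    return img

l1_a, l2_a = network(put_object(1))
l1_b, l2_b = network(put_object(2))   # small shift: stays in the same window
l1_c, l2_c = network(put_object(9))   # large shift: moves to another window

print(np.array_equal(l1_a, l1_b))     # True : invariant already at layer 1
print(np.array_equal(l1_a, l1_c))     # False: layer 1 activity has moved...
print(np.array_equal(l2_a, l2_c))     # True : ...but, because layer 1 is
                                      # covariant, layer 2 is invariant
```

Either the object moves within one first-layer receptive field, and that layer's output is already unchanged, or the activity pattern shifts and a larger receptive field higher up absorbs the shift.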
We already mentioned this, but for this set of operations, you can think of this sigma of a dot product plus delta-- this is a simple cell. So this is a dot product of the image with the receptive field of the simple cell. That's what this parenthesis is. You have a bias, or a threshold, and the nonlinearity. Could be the spiking nonlinearity. Could be, as I said, a rectifier. Neurons don't generate negative spikes. And so all of this is very plausible biologically. And the complex cell will simply pool, taking the sum over the different simple cells. So that's what I mentioned before, that the nonlinearity can be almost anything. And I want to mention something that could be interesting for physiology. From the point of view of this algorithm, this may be a solution to a problem that has been around for 30 years or so, which is that Hubel and Wiesel and other physiologists after them identified simple and complex cells in terms of their physiological properties. They couldn't see from where they were recording. But there were cells that behaved in different ways. The simple cells had small receptive fields. The complex cells had larger receptive fields. The complex cells were more invariant. And then physiologists today are using criteria in which the complex cell is more non-linear than the simple cell. Now, from the point of view of the theory, the real difference is that one is doing the pooling-- the complex cells. The simple cell is not. And the puzzle is that despite these physiological differences, they were never able to say this type of pyramidal cell is simple, and this type of pyramidal cell is complex. And part of the reason could be that maybe simple and complex cells are the same cell. So that the operation can be done on the same cell. If you look at the theory, what may happen is that you have one dendrite playing the role of a simple cell. You have inputs, synaptic weights. So this could give rise, for instance, to the Gabor-like receptive field. And then these other dendrites correspond to another simple cell. It's a Gabor-like receptive field in a slightly different position in the image plane, in the retina. You need the nonlinearities. And they may be, instead of at the output of the cell, they may be so-called voltage and time dependent conductances in the dendrites. In the meantime, we know that pyramidal cells in the visual cortex have these nonlinearities, like almost having spike generation in the dendrites. And then the soma will summate everything. This is what the complex cell is doing. And if one of the cells is computing something like an average, which is one of the moments of a distribution, then the nonlinearity will not even be needed. And then physiologists, using the criteria they use these days, would classify that cell as simple, even if from the point of view of the theory it's still complex. Anyway, that's the proposed machinery that comes from the theory. That's everything that we need. And it says simple and complex cells could be one cell. |
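As a rough sketch of that machinery-- my own paraphrase, not code from the lecture-- each simple cell (or dendrite) computes a rectified, thresholded dot product of the image with one shifted copy of a template, and the complex cell (or soma) pools those values:

```python
import numpy as np

def simple_cell(image, template, bias):
    # sigma(<image, template> + bias), with a rectifier as sigma:
    # neurons don't generate negative spikes.
    return max(0.0, float(np.sum(image * template)) + bias)

def complex_cell(image, templates, bias=0.0):
    # Pool (here: sum) over simple cells that share one template seen
    # at different positions -- the stored orbit under translation.
    return sum(simple_cell(image, t, bias) for t in templates)

# Orbit of one 1-D template under circular shifts, standing in for the
# shifted Gabor-like receptive fields of a family of simple cells.
rng = np.random.default_rng(1)
template = rng.standard_normal(12)
orbit = [np.roll(template, k) for k in range(12)]

x = rng.standard_normal(12)
print(np.isclose(complex_cell(x, orbit),
                 complex_cell(np.roll(x, 3), orbit)))  # True: invariant
```

Because the orbit covers the whole (circular) translation group, shifting the input only permutes the simple-cell values, so the pooled response does not change; with a mean instead of a sum as the pooling, the same cell could even look "simple" by the physiologists' nonlinearity criterion while still doing the complex cell's job.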
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_53_Patrick_Winston_Story_Understanding.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PATRICK HENRY WINSTON: Oh, how to start. I have been on the Navy Science Board for a couple of decades. And so as a consequence, I've had an opportunity to spend two weeks for a couple of decades in San Diego. And the first thing I do when I get to San Diego is I go to the zoo, and I look at the orangutans and ask myself, how come I'm out here and they're in there? How come you're not all covered with orange hair instead of hardly any hair at all? Well, my answer to that is that we can tell stories and they can't. So this is the center of my talk. That's what I have come to believe after a long time. So I'm going to be talking about stories and to give you a preview of where I'm going to go with this, I like to just show you some of the questions that I propose to address in the course of the next hour. I want to start talking about this by way of history. I've been in artificial intelligence almost since the beginning, and so I was a student of Marvin Minsky, and it's interesting that the field started a long time ago-- 55 years ago-- with perhaps the most important paper in the field, Steps Toward Artificial Intelligence. It was about that time that the first intelligent program was written. It was a program that could do calculus problems as well as an MIT freshman-- a very good MIT freshman. That was done in 1961. And those programs led to a half century of enormous progress in the field. All sorts of ideas and subfields like machine learning were spawned. Useful applications. But as you know, there's been an explosion in these useful applications in recent years. I've stolen this slide from Tomaso but I added a couple of things that I particularly think are of special importance like Echo. You all know about Amazon Echo, of course, right? I swear, it astonishes me. Many people don't know about Amazon Echo. It's Siri in a beer can. It's a Siri that you can talk to across the room and say how long should I boil a hard boiled egg or I've fallen down. Please call for an ambulance. So it's a wonderful thing. I think most people don't know about it because of privacy concerns. Having something listening to you all the time in your home may not be the most comfortable kind of thing. But anyhow, it's there. So this has caused quite a lot of interest in the field lately. Boris has just talked about Siri and Jeopardy, but maybe the thing that's astonished the world the most is this captioning program I think you've probably seen before the last week or two. And man, I don't know what to think of this. I don't know why it's created such a stir, except that that caption makes it look like that system knows a lot that it doesn't actually know. First of all, it's trained on still photos. So it doesn't know about motion. It doesn't know about play. It doesn't know what it means to be young. It would probably say the same thing if we replaced those faces with geriatrics. Yet when we see that, we presume that it knows a lot. It's sort of parasitic semantics. The intelligence is coming from our interpretation of the sentence, not from it producing the sentence. 
So yeah, it's a great engineering achievement, but it's not as smart as it looks. And of course, as you know, there's been a whole industry of fooling papers. Here's a fooling example. One of those is considered by a standard deep neural net to be a school bus, and the other is not considered to be a school bus. And just a few pixels have been changed. Imperceptible to us. It still looks like a school bus but not to the deep neural net. And of course, these things are considered school buses in another fooling paper. Why in the world would those be considered school buses? Presumably because of some sharing of texture. Well, anyhow, people have gotten all excited and especially Elon Musk has gotten all excited. We are summoning the demon. So Musk said that in an off-the-cuff answer to a question at the, let's see, an anniversary of the MIT AeroAstro Department. Someone asked him if he was interested in AI, and that's his off-the-cuff response. So it's interesting that those of us who've been around for a while find this beyond interesting. It's curious, because a long time ago philosophers like Hubert Dreyfus were saying that it was not possible. And so now we've shifted from not possible to the scariest thing imaginable. We can't stop ourselves from chuckling. Well, Dreyfus must know that he's wrong, but maybe he's right too. What we really had to wait for for all of these achievements was massive amounts of computing. So I'd like to go a little further back in history while I'm talking about history, and the next thing back there is 65 years ago Turing's paper on machine intelligence. It's interesting that that paper is widely assumed to be about the Turing test and it isn't. If you look at the actual paper content, what you find is that only a couple of pages were devoted to the test. Most of it was devoted to discussion of arguments for and against the possibility of artificial intelligence. I've read that paper 20 times, because I prescribe it in my course, so I have to read it every year. And every time I read it, I become more convinced that what Turing was talking about is arguments against AI, not about the tests. So why the test? Well, he's a mathematician and a philosopher. And no mathematician or philosopher would write a paper without defining their terms. So he squeezed into some kind of definition. Counterarguments took up a lot of space, but they didn't have to. If Turing had taken Marvin Minsky's course in the last couple of years, he wouldn't have bothered with that test, because Minsky introduced the notion of a suitcase word. That's a word-- he likes that term because what he means by that is that the word is so big, like a big suitcase, you can stuff anything into it. So for me, this has been a great thought because, if you ask me, is Watson intelligent? Is the Jeopardy-playing system intelligent? My immediate response is, sure. It has a kind of intelligence. As Boris points out, it's not every kind of intelligence, and it doesn't think like we do, and there are some kinds of thinking that it doesn't do that we do quite handily. But it's silly to argue about whether it's intelligent or not. It has aspects, some kinds of intelligence. So what Turing really did was establish that this is something that serious people can think about and suggests that there's no reason to believe that it won't happen someday. And he centered that whole paper on these kind of arguments, to the arguments against AI. It's fun to talk about those. Each of them deserves attention. 
I'll just say a word or two about number four there, Lady Lovelace's objection. She was, as you all know, the sponsor and programmer of Charles Babbage when he attempted to make a computer mechanically. And what she said, she was obviously pestered with the same kind of things that everybody in AI is always pestered with. And at one point, she was reported to have said-- let me put it in my patois. Don't worry about a thing. They can only do what they're programmed to do. And of course, what she should have said is that they can only do what they've been programmed to do and what we've taught them to do and what they've learned how to do on their own. But maybe that wouldn't have had the soothing effect she was looking for. In any event, this is when people started thinking about whether computers could think. But it's not the first time people thought about thinking, that we have to go back 2,400 years or so to get to that. And when we do, we think about Plato's most famous work, The Republic, which was clearly a metaphor to what goes on in our brains and our minds and our thinking. He couched it in terms of a metaphor with how a state is organized with philosopher kings and merchants and soldiers and stuff. But he was clearly talking about a kind of theory of brain-mind thinking that suggested there are agents in there that are kind of all working together. But it's important to note, I think, that The Republic is a good translation of the Latin de re publica, which is a bad translation of the Greek politeia. And politeia, interestingly, is a Greek word that my Greek friends tell me is untranslatable. But it means something like a society or community or something like that. And the book was about the mind. So Plato-- it could have been translated as the society of mind, in which case it would have anticipated Marvin Minsky's book with the same title by 2,400 years. Well, maybe that was not the first time that humans thought about thinking, but it was an early landmark. And now it's, I think, useful to go back to when humans started thinking. And that takes us back about 50,000 years-- not millions of years-- a few tens of thousands of years. So it probably happened in southern Africa. It was probably 60,000 or 70,000 years ago that it started happening. It probably happened in the neck-down population, because if the population is too big, an innovation can't take hold. It was about that time that people-- us, we-- started drilling holes in sea shells, presumably for making jewelry. And then it wasn't long after that that we departed from those Neanderthal guys in a big way. And we started painting caves like the ones at Lascaux, carving figurines like the one at Brassempouy. And I think the most important question we can ask in AI is what makes us different from that Neanderthal who couldn't do these things? See, that's not the question that Turing asked. The question Turing asked was how can we make a computer reason? Because as a mathematician, he thought that was a kind of supreme capability of human thinking. And so for 20, 30 years, AI people focused on reasoning as the center of AI. And what they should have been asking is, what makes us different from the Neanderthals and chimpanzees and other species? It creates a different research agenda. Well, I'm much influenced by the paleoanthropologist, Ian Tattersall, who writes extensively about this and says that it didn't evolve, it was more of a discovery than an evolution. 
Our brains came to be what they are for reasons other than human intelligence. So he thinks of it as a minor change or even a discovery of something we didn't know we had. In any event, he talks about becoming symbolic. But of course, as a paleoanthropologist and not a computationalist, he doesn't have the vocabulary for talking about that in computational terms. So you have to go to someone like Noam Chomsky to get a more computational perspective on this. And what Chomsky-- who is also, by the way, a fan of Tattersall-- says is that what happened is that we acquired the ability to take two concepts and put them together and make a third concept and do that without limit. An AI person would say, oh, Chomsky's talking about semantic nets. A linguist would say he's talking about the merge operation. But it's the same thing. As an aside, I'll tell you that a very important book will come out in January, I think, by Berwick and Chomsky that addresses two questions-- why do we have any language and why do we have more than one? You know, when you think about it, it's weird. Why should we have a language and now that we have one, why should we all have different ones? And their answer is roughly that this innovation made language possible. But once you've got the competence, it can manifest itself in many engineering solutions. They also talk a lot about how we're different from other species, and they like to talk about the fact that we can think about stuff that isn't there. So we can think about apples even when we're not looking at an apple. But back to the main line. When Spelke talked to you, she didn't talk to you about what I consider to be the greatest experiment in developmental psychology ever, even though it wasn't necessarily-- well, I confused myself. Let me tell you about the experiment. Spelke doesn't do rats, but other people do rats with the following observation-- take a rectangular room where there are hiding places in all four corners that are identical. While the rat is watching, you put food in one of these places, put a cover over it, and then you disorient the rat by spinning it around. And you watch what the rat does. And rats are pretty smart. They do the right thing. Those opposite corners are the right answer, right? Because the room is rectangular, those are the two possible places that the food could be. So then you can repeat this experiment with a small child, you get the same answer, or with an intelligent adult like me, and you get the same answer because it's the right answer. But now the next thing is you paint one wall blue and repeat the experiment. What do you think the rat does? Both ways. You repeat the experiment with a small child, both ways. Repeat the experiment with me. Finally we get it right. What's the difference? And when does it happen? When does this small child become an adult? After elaborate and careful experiments of the kind that Spelke is noted for, she has determined that the onset of this capability arises when the child starts using the words left and right in their own descriptions of the world. They understand left and right before that, but this is when they start using those words. That's when it happens. Now we introduce the notion of verbal shadowing. So I read to you the Declaration of Independence or something else and as I say it, you say it back to me. It's sort of like simultaneous translation, only it's English to English.
And now you take an adult human, and even while they're walking into the room, they're doing this verbal shadowing. And what happens in that circumstance? That reduces the adult to the level of a rat. They can't do it. And you say, well, didn't you see the blue wall? And they'll say, yeah, I saw the blue wall but couldn't use it. So Spelke's interpretation of this is that the words have jammed the processor. That's why we don't use our laptops in class, right, because we only have one language processor, and it can be jammed. It's jammed by email. I'm jammed by used car salesmen talking fast. It's easy to jam it. And when we jam it, it can't do what you would think it could do. So Spelke has an interpretation to this that says what we humans have is combinators, the ability to take formation of different kinds and put it together. I have a different interpretation, which I'll tell you about at the end if you ask me why I think Spelke has it-- why I have a different interpretation from Spelke for this experiment. And then what we've got is we've got the ability to build descriptions that seem to be the defining characteristic of human intelligence. We've got it in the case at Lascaux. We've got it in the thoughts of Chomsky. We've got it in these experiments of Spelke. We've got descriptions. And so those are the influences that led me to this thing I call the strong story hypothesis. And when you think about it, almost all of education is about stories. You know, you start with fairy tales that keep you from running away in the mall. You'll be eaten by big bad wolf if you do. And you end up with all these professional schools that people go to-- law, business, medicine, and even engineering. You might say, well, engineering, that's not really-- is that case studies? And the answer is, if you talk to somebody that knows what they're doing, what they do is very often telling a story. My friend, Gerry Sussman, a computer scientist whose work I use, is fond of teaching circuit theory as a hobby. And when you hear him talk about this circuit, he talks about a signal coming in from the left and migrating through that capacitor and going into the base of a transistor and causing a voltage drop across the emitter, which creates a current that flows into the collector, and that causes-- he's just basically telling the story of how that signal flows through the network. It's storytelling. So if you believe that, then these are the steps that were prescribed by Marr and Tomaso as well in the early days of their presence at MIT. These are the things you need to do if you believe that and want to do something about it. And these steps were articulated at the time, in part because people in artificial intelligence were, in Marr's words, too mechanistic. I talked about this on the first day, that people would fall in love with a particular mechanism-- a hammer-- and try to use it for everything instead of understanding what the problem is before you select the tools to bring to bear on producing a solution. And so being an engineer, one of the steps here that I'm particularly fond of, once you've got the articulated behavior 100%, eventually you have to build something. Because as an engineer, I think I don't really understand it unless I can build it. And then building it, things emerge that I wouldn't have thought about if I hadn't tried to build it. Well, anyhow, let's see. Step one, characterize the behavior. The behavior has to story understanding. So I'm going to need some stories. 
And so I tend to work with short summaries of Shakespearean plays, medical cases, cyber warfare, classical social studies, and psychology. And these stories are written by us so as to get through Boris's parser. So they are carefully prepared. But they're human readable. We're not encoding this stuff up. This is the sort of thing you could read, and if you read that you say, yeah, this is kind of funny, but you can understand it. Summary of Macbeth. Here is a little fragment of it, and it is easier to read. So what do we want to do? What can we bring to bear on understanding the story? If you read that story, you'd see-- I could ask you a question, is Duncan dead at the end? And how would you know? It doesn't say he's dead at the end. He was murdered, but-- I could ask you is this a story about revenge? The word revenge is never mentioned, and you have to think about it a little bit. But you'd probably conclude in the end that it's about revenge. So now we ask ourselves what kinds of knowledge is required to know that Duncan is dead at the end and that it's about revenge? Well, first of all, we need some common sense. And what we've found, somewhat to our surprise, is that much of that can be expressed in terms of simple if-then rules. And these seven rule types arose because people building software to understand stories found that they were necessary. We knew we needed the first kind. If you kill someone, then they are dead. Every other one here arose because we reached an impasse in our construction of our story understanding system. So the may rules. If I anger you, you may kill me. Thank god we don't always kill people who anger us. But we humans always are searching for explanations. So if you kill me, and I've previously angered you, and you can't think of any other reason for why the killing took place, then the anger is supposed. So that's the explanation for rule type number two. Sometimes we use abduction. You might have a firm belief that anybody who kills somebody is crazy. That's abduction. You're presuming the antecedent from the presence of a consequent. So those are kinds of rules that work in the background to deal with the story. And of course, there are things that are explicit in the story too. Here are some examples of things that are-- of causal relations that are explicit in the story. The first kind says that this happens because that happened. A close, tight causal connection. Second kind, we know there's a causal connection, but it might be lengthy. The third kind, the strangely kind, arose when one of my students was working on Crow creation myths. He happened to be a Crow Indian and so a natural interest in that mythology. And what he noted was that in Crow mythology, you're often told that something is connected causally and also told that you'll never understand it. Old Man Coyote reached into the lake and pulled up a handful of mud and made the world, and you will never understand how that happened, is a kind of typical expression in Crow creation mythology. So all of these arose because we were trying to understand particular kinds of stories. OK. So that's all background. Here it is in operation. And that's about the speed that it goes. That is reading the story-- the summary of Macbeth that you saw on the screen a few moments ago. But of course, it's invisible at that scale. So let me blow a piece of it up. There you see the piece that says, oh, Macbeth murdered Duncan. Duncan becomes dead. So the yellow parts are inserted by background knowledge. 
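To make that machinery concrete, here is a toy version in Python-- my own illustration, not Genesis code, with invented fact triples-- showing how a strict deduction rule, a "may" rule used as an explanation, and an abduction rule each insert new elements into the story:

```python
# A hypothetical mini rule engine illustrating three of the rule types:
# strict deduction, "may" rules resolved by explanation, and abduction.

facts = {("Macbeth", "murders", "Duncan"),
         ("Duncan", "angers", "Macbeth")}

def deduce(facts):
    # Strict rule: if X murders Y, then Y is dead.
    return facts | {(y, "is", "dead")
                    for (x, v, y) in facts if v == "murders"}

def explain(facts):
    # "May" rule as explanation: if X murders Y, and Y had previously
    # angered X, and no other reason is known, suppose anger is why.
    return facts | {(x, "murders-because-of-anger", y)
                    for (x, v, y) in facts
                    if v == "murders" and (y, "angers", x) in facts}

def abduce(facts):
    # Abduction: presume the antecedent from the consequent --
    # e.g., a firm belief that anybody who murders somebody is crazy.
    return facts | {(x, "is", "crazy")
                    for (x, v, y) in facts if v == "murders"}

facts = abduce(explain(deduce(facts)))
print(("Duncan", "is", "dead") in facts)  # True, though never stated
```

The triples produced by the rules play the role of the yellow, inserted elements in the display; the triples given at the start play the role of the white ones.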
The white parts are explicit in the story. So you may have also noted-- no, you wouldn't have noted without my drawing attention to it-- that the yellow parts there are conclusions. The white parts are explicit. And what you can see, incidentally, is that-- just from the colors-- much of the understanding of the story is inferred from what is there. It's a funny kind of way of saying it, but you've seen in computer vision or you will see in computer vision that what you think you see is half hallucination, and what you think you see in the story is also half hallucinated. It seems that the authors just tell us enough to keep us on track. In any event, we have not only the yellow parts that are inferred, but we also have the observation that one piece may be connected to another. And that can only be determined by doing a search through that so-called elaboration graph. So here are the same sorts of things that you can search for. There's a definition of a Pyrrhic victory. You do something, or rather you want something, that leads to you becoming happy, but ultimately the same wanting leads to disaster. So there it is. That green thing down there is reflected in the green elements up there that are picked out of the entire graph because they're connected. And I'll show you how they're connected in this one. So this is the Pyrrhic victory concept that has been extracted from the story by a search program. So we start with Macbeth wanting to be king. He murders Duncan because of that. He becomes happy, because he eventually ends up being king himself, but downstream he's harmed. So it's a kind of Pyrrhic victory. And now you say to me, well, that's not my definition of Pyrrhic victory, and that's OK, because we all have nuanced differences in our concepts. So this is just one computer's idea of what a Pyrrhic victory is. Here are the kinds of things we've been able to do as a consequence of, to our surprise, just having a suite of rules and a suite of concepts. And what I'm going to spend the next few minutes doing is just taking you quickly through a few examples of these kinds of things. This is reading Macbeth from two different cultural points of view, an Asian point of view and a US point of view. There were some fabulous experiments conducted in the '90s in a high school in the outskirts of Beijing and in a high school in Wisconsin. And these experiments involved having the students read stories about violence and observing the reaction of the students to those stories. And what they found was that at a statistically significant level, not this or that, but a statistically significant level, the Asian students outside of Beijing attributed violence to situations. And they would ask, what made that person want to do that? Whereas the kids in Wisconsin had a greater tendency to say, that person must be completely crazy. So one attributed to the situation, the other was dispositional, to use the technical term. So here we see in one reading of Macbeth-- let me show it blown up. In the top version, Macduff kills Macbeth because there's a revenge situation that forces it. And the other interpretation is because Macduff is crazy. So another kind of similar pairing-- oh, wow. That was fast. This is a story about the Estonian-Russian cyber war of 2007. Could you-- you probably didn't hear the rules of engagement, but I don't talk to the back of laptops. So if you'd put that away, I'd appreciate it.
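Coming back to the Pyrrhic victory pattern for a moment, before the cyber war story: a toy version of that concept search-- my sketch, not the Genesis implementation, with simplified event strings and a simplified pattern-- might look like this.

```python
# Hypothetical elaboration graph: directed "leads to" edges between events.
edges = {
    "Macbeth wants to be king": ["Macbeth murders Duncan"],
    "Macbeth murders Duncan":   ["Macbeth becomes king", "Macduff gets angry"],
    "Macbeth becomes king":     ["Macbeth becomes happy"],
    "Macduff gets angry":       ["Macduff kills Macbeth"],
    "Macduff kills Macbeth":    ["Macbeth becomes harmed"],
}

def reaches(graph, start, goal, seen=None):
    # Depth-first search: is there a causal path from start to goal?
    seen = set() if seen is None else seen
    if start == goal:
        return True
    seen.add(start)
    return any(reaches(graph, nxt, goal, seen)
               for nxt in graph.get(start, []) if nxt not in seen)

def pyrrhic_victory(graph, want):
    # One computer's idea of a Pyrrhic victory: the same wanting
    # eventually leads both to happiness and to harm.
    actor = want.split()[0]
    return (reaches(graph, want, actor + " becomes happy") and
            reaches(graph, want, actor + " becomes harmed"))

print(pyrrhic_victory(edges, "Macbeth wants to be king"))  # True
```

Only connected chains can instantiate the pattern, which is why causal connectedness does so much work later, in summarization.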
So in 2007, the Estonians moved a war memorial from the Soviet era out of the center of town to a cemetery in the outskirts. And about 30% of the Estonian population is Russian, and they were irritated by this. And it had never been proven, but the next day the Estonian national network went down, and government websites were defaced. And this was hurtful, because the Estonians pride themselves on being very technically advanced. In Estonia, it's a right to be educated in how to use the internet. They have a national ID card. They have a law that says if anybody looks at your data, they've got to explain why. They're a very technically sophisticated country. And so what's the interpretation of this attack, which was presumed to be done by either ethnic Russians in Estonia or by people from Russia? Well, was it aggressive revenge or was it teaching the Estonians a lesson? It depends on what? It depends on whose side you're on. That's the only difference. And that's what produced the difference in interpretation on those two sides-- one being aggressive revenge and the other being teaching the Estonians a lesson. By the way, I was in Estonia in January. That's the statue that wasn't there. Give you another example. I'm just trying to show you some of the breadth of our story understanding activity. So the next example comes about because shortly after the terrorist attack on the World Trade Center, there was a strong interest in bringing political science and artificial intelligence together to make it possible to understand how other people think when they're not necessarily crazy, they've just got different backgrounds. The thesis is that we are the stories in our culture. So I was at a meeting in Washington, and the only thing I remember from that meeting is one of the participants drew a parallel between the Tet Offensive in Vietnam and the Arab-Israeli war that took place about six or seven years later. And here's the story. OK. And here's what happened seven years later. What do you suppose happened next? And of course, the answer is quite clear. And when we feel like talking about the long-range eventual practical uses of the stuff we're talking about, this is the kind of thing we say. What we want to do is we want to build for political analysts tools that would be as important to them as spreadsheets are to a financial analyst, tools that can enable them to predict or expect or understand unintended consequences of actions you might perform. So this is a gap and alignment problem. And here is one case in which we have departed from modeling humans. And we did it because one of our students was a refugee from bioengineering. And he knew a lot about aligning protein sequences and DNA sequences. And so he brought to our group the Needleman-Wunsch algorithm for doing alignment. And we used that to align those stories. So there they are with the gaps in them. We took those two stories I showed you on a previous slide. We put a couple of gaps in. The Needleman-Wunsch algorithm aligned them, and then we were able to fill in the gaps using one to fill in the gap in the other. And since you can't see it, here's what it would have filled in. So that's an example of how we can use precedents to think about what happens next or what happened in the missing piece or what led to this. It's a kind of analogical reasoning. My next example is putting the system in teaching mode. We have a system. We have a student. We want to teach the student something.
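Backing up to the alignment step for a moment: the Needleman-Wunsch algorithm mentioned above is compact enough to sketch in full. This is my own illustration, not the student's code-- the event labels are invented, and real story alignment would score matches between story elements rather than exact string equality.

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-1):
    # Fill in the standard dynamic-programming score table.
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + d,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back from the bottom-right corner to recover the alignment.
    out, i, j = [], n, m
    while i > 0 or j > 0:
        d = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + d:
            out.append((a[i - 1], b[j - 1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out.append((a[i - 1], "-")); i -= 1
        else:
            out.append(("-", b[j - 1])); j -= 1
    return list(reversed(out))

story1 = ["attack", "surprise", "retreat", "regroup", "counterattack"]
story2 = ["attack", "surprise", "regroup", "counterattack"]
for pair in needleman_wunsch(story1, story2):
    print(pair)  # ('retreat', '-') marks the gap in the second story
```

Once two stories are aligned, whatever sits opposite a gap in one story is a candidate for filling in the corresponding missing piece of the other.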
Maybe the student is from Mars and doesn't know anything. So this is an example of how the Genesis system can watch another version of itself not understand the story, and supply the missing knowledge. So this is a hint at how it might be used in an educational context. And once you can have a model of the listener, then you can also think about how you can shape the story so as to make some aspect of it more or less believable. So you notice I'm carefully avoiding the word propaganda, which puts a pejorative spin on it. But if you're just trying to teach somebody values, that's another way of thinking about it. So this is the Hansel and Gretel story. And the system has been ordered to make the woodcutter likeable, because he does some good things and bad things. So when we do that, you'll note that there are some things that are struck out, the stuff in red, and some things that are marked in green for emphasis. And let me blow those up so you can see them. So the stuff that the woodcutter does that's good is highlighted and bolded, and the things that we don't want to say about the woodcutter because it makes him look bad, we strike those out. And of course, another way of making somebody-- what's another way of making somebody look good? Make everybody else look bad, right? So we can flip a switch and have him make comments about the witch too so that the woodcutter looks even better because the bad behavior of the witch is highlighted. So these are just some examples of the kinds of things we can do. Here's another one. This is that Macbeth story played out in-- it's about 80 or 100 sentences, and we can summarize it. So we can use our understanding of the story to trim away all the stuff that is not particularly important. So what is not particularly important? Anything that's not connected to something else is not particularly important. Think about it this way. The only reason you read a story-- if it's not just for fun-- the only reason you read a story-- a case study-- is because you think it'll be useful later. And it's only useful later if it exerts constraint. And it only exerts constraint if there are connections-- causal connections in this case. So we take all the stuff that's not connected, and we get rid of it. Then we get rid of anything that doesn't lead to a central concept. So in this case, we say that the thing is about Pyrrhic victory. That's the central concept. We get rid of everything that doesn't bear on that concept pattern instantiation. And then we can squeeze this thing down to about 20% of its original size. And now I come to the thing that we were talking about before, and that is how do you find the right precedent? Well, if you do a Google search, it's mostly-- well, they're getting more and more sophisticated, but most searches are mostly keywords. But now we've got something better than keywords. We've got concepts. So what I'm going to show you now is a portrayal of an information retrieval test case that we did with 14 or 15 conflict stories. We're interested in how close they were together. Because the closer they are together, the more one is likely to be a useful precedent for another. So in one of these matrices, what you see is how close they are when viewed from the point of view of keywords. That's the one on the bottom. The one on the top is how close they are with respect to the concepts that they contain-- you know, words like revenge and attack, not present in the story as words, but present there anyway as concepts.
And the only point of this pairing is to show that the consideration of similarity is different depending on whether you're thinking in terms of concepts or thinking in terms of words. So here's a story. A young man went to work for a company. His boss was pretty mean. Wouldn't let him go to conferences. One day somebody else in the company arranged for him to go to a conference anyhow. Provided transportation. He went to the conference, and he met some interesting people. But unfortunately, circumstances were that he had to leave early to catch a flight back home. And then some of the people he met at the conference started looking for him because he was so-- so what story am I telling? It's pretty obviously a Cinderella story, right? But there's no pumpkin. There's no fairy godmother. It's just that even though the agents are very different in terms of their descriptions, the relationships between them are pretty much the same. So over the years, what we've done quite without intending it or expecting it or realizing it is that we have duplicated in Genesis the kinds of thinking that Marvin Minsky talks a lot about in his most recent book, The Emotion Machine. He likes to talk in terms of multiplicities. We have multiple ways of thinking. We have multiple representations. And those kinds of reasoning occur on multiple levels, from instinctive reactions at the bottom to self-conscious reflection on the top. So quite without our intending it, when we thought about it one day by accident, we had this epiphany that we've been working to implement much of what is in that book. So, so far-- and I'm going to depart from story understanding a little bit to talk to you about some other hypotheses of mine-- so far, there are two: the strong story hypothesis, and then there's this inner language hypothesis that Chomsky likes to talk a lot about. We have an inner language, and our inner language came before our outer language. And this is what makes it possible to think. So those are two hypotheses-- inner language and strong story. Here's another one-- it's important that we're social animals, and it's actually important that we talk to each other. Once Danny Hillis, a famous guy, a graduate of ours, came into my office and said, have you ever had the experience of-- well, you often talk to Marvin. Yeah, I do. And have you ever had the experience, he said, of having Marvin guess? He has a very short attention span, Marvin, and he'll often guess your idea before you've fully explained it? Yes, I said. It happens all the time. Isn't it the case, Danny said, that the idea that he guesses you have is better than the idea you're actually trying to tell him about? Yes. And then he pointed out, well, maybe when we talk to ourselves, it's doing the same kind of thing. It's accessing ideas and putting them together in ways that wouldn't be possible if we weren't talking. So it often happens that ideas come about when we talk to each other, because it forces the rendering of our thoughts in language. And if we don't have a friend or don't happen to have anybody around we can talk to-- I feel like I talk to myself all the time. And maybe that's an important consequence or important aspect of our intelligence-- the conversation that we carry on with ourselves. Be careful doing this out loud. Some people will think you've got a screw loose. But let me show you an experiment I consider to be extraordinarily interesting along these lines. It was done by a friend of mine at the University of Pittsburgh, Michelene Chi.
Mickey, as she is called, was working with students on physics problems. You've all done this kind of problem, and it's about pulleys and weights and forces and stuff. And so these students were learning the subject, and so she gave them a quiz, and she had them talk out loud as they were working on the quiz. And she kept track of how many things the best students said to themselves and how many things the worst students said to themselves. So in this particular experiment, there weren't many students. I think eight-- four good ones and four bad ones. And the good ones scored twice as high as the bad ones, and here's the data. The better students said about 3 and 1/2 times more stuff to themselves than the other ones. So unfortunately, this is backwards. We don't know whether, if we took the bad students and encouraged them to talk more, they'd become smarter. So we're not saying that. But it is an interesting observation that the ones who talked to themselves more were actually better at it. And what they were saying was a mixture of problem-solving things and physics things. Like, I'm stuck, or maybe I should try that again, or physics things like, I think I have to do a force diagram. A mixture of those kinds of things. So talking seems to surface a kind of capability that not every animal has. And then there's this one. So it isn't just that we have a perceptual apparatus, it's that we can direct it to do stuff on our behalf. That's what I think is part of the magic. So my standard examples-- John kissed Mary. Did John touch Mary? Everybody knows that the answer is yes. How do you know it? Because you imagine it and you see it. So there's a lot of talk in AI about how you gather common-sense knowledge and how you can only know a limited number of facts. I think it's all screwy, because I think a lot of our common sense comes just in time by the engagement of our perceptual apparatus. So there's John kissing Mary, and now I want to give you another puzzle. And the next puzzle is how many countries in Africa does the equator go through? Does anybody know? I've asked students who come to MIT from Africa, and they don't know. And some of them come from countries that are on the equator and they don't know. But now you know. And what's happened? Your eyes scan across that red line, and you count. Shimon Ullman would call it a visual routine. So you're forming-- you're creating a little program. Your language system is demanding that your visual system run a little program that scans across and counts. And your vision system reports back the answer. And that I think is a miracle. So one more example. It's a little grisly. I hope you don't mind. So a couple of years ago, I installed a table saw. I like to-- I'm an engineer. I like to build stuff. I like to make stuff. And I had a friend of mine who's a cabinetmaker-- a good cabinetmaker-- help me install the saw. And he said, you must never wear gloves when you operate this tool. And I said well-- and before I got the first word out, I knew why. No one had ever told me. I had never witnessed an experience that would suggest that should be a rule. But I imagined it. Can you imagine it? What I need you to imagine is that you're wearing the kind of fluffy cotton gloves. Got it now? And the fluffy cotton glove gets caught in the blade. And now you know why you would never-- I don't think any of you would ever use gloves when you operate a table saw now, because you can imagine the grisly result. So it's not just our perceptual apparatus.
It's our ability to deploy our perceptual apparatus, and our imagination, I think, is a great miracle. But vision is still hard, as everyone in vision will tell you. Some years ago, I was involved in a DARPA program that had as its objective recognizing 48 activities, 47 of which can be performed by humans. One of them is fly. So that doesn't count, I guess. A couple of years into the program, they retrenched to 17. At the end of the program, they said if you could do six reliably, you'd be a hero. And my team caught everyone's attention by saying we wouldn't recognize any actions that would distract people. So vision is very hard. And then stories do, of course, come together with perception, right? At some point, you've doubtlessly in the course of the last two weeks seen this example. What am I doing? What am I doing? AUDIENCE: Drinking. PATRICK HENRY WINSTON: And then there's Ullman's cat. What's it doing? So my interpretation of this is that that cat and I are-- it's the same story. You can imagine that there's thirst, that there are activities that lead to water or liquid passing into the mouth. So we give them the same label, even though visually they're as different as anything could be. You would never get a deep neural net program that's been trained on me to recognize that that's a cat drinking. There's visually nothing similar at all. All right. But we might ask this question-- can we have an intelligent machine without a perceptual system? You know, that Genesis system with Macbeth and all that? Is it really intelligent when it doesn't have any perceptual apparatus at all? It can't see anything. It doesn't know what it feels like to be stabbed. I think it's an interesting philosophical question. And I'm a little agnostic on this right now, a little more agnostic than I was half a year ago, because I went back to the Republic. You remember that metaphor of the cave in the Republic? There's a metaphor of the cave. You have some prisoners, and they're chained in this cave, and there's a fire somewhere. And all they can see is their own shadows against the wall. That's all they've got for reality. And so their reality is extremely limited. So they're intelligent but they're limited. So I think, generalizing that metaphor, I think a machine without a perceptual system has extraordinarily limited reality. And maybe we need a little bit of perception to have any kind-- to have what we would be comfortable calling intelligence. But we don't need much. And another way of thinking about it is sort of the fact that our own reality is limited. If you compare our visual system to that of a bee, they have a much broader spectrum of wavelengths that they can make use of, because they don't have-- do you know why? We are limited because we have water in our eyeballs. And so some of that stuff in the far ultraviolet can't get through. But bees can see it. And then that's comparing us humans with bees. How about comparing one human against another? At this distance, I can hardly see it. Can you see it? AUDIENCE: Boat. PATRICK HENRY WINSTON: It's a boat, but some people can't see it, because they're colorblind. So for them, there's a slight impairment of reality, just like a computer without a perceptual system would have a big impairment of reality. So it's been said many times that what we're doing here is we're trying to understand the science side of things. And we think that that will lead to engineering advances.
And people often ask this, and those who don't believe that human intelligence is relevant will say, well, airplanes don't fly like birds. What do you think of that argument? I think it has a gigantic hole, and it turns out that the Wright brothers were extremely interested in birds, because they knew that birds could tell them something about the secrets of aerodynamics. And all flying machines have to deal with the secrets of that kind of physics. So we study humans, not because we're going to build a machine that has neurons in it, but because we want to understand the computational imperatives that human intelligence can shed light on. That's why we think it has engineering value, even though we won't in any likely future be building computers with synapses of the kind we have. Well, now we come to the dangers that we started out with. What do you think we should-- suppose that machines can become-- they do become really smart, and we've got machine learning. What is that? That's modern statistics. And of course, it's useful. What if they became really smart in the same ways that we're smart? What would we want to do to protect ourselves? Well, for this, I'd like to introduce the subject by asking you to read the following story. This was part of that Morris and Peng suite of experiments. I'm sorry these are so full of violence. This happens to be what they worked with. So after the students read this story, they were asked, did Lu kill Sean because America is individualistic? And the Asian students would have a tendency to say yes. So how can we model that in our system? Well, to start out with, this is the original interpretation of the story as told. And if you look at where that arrow is, you see that that's where Lu kills Sean, and just in back of it is the means. He shot him with a gun. But it's not connected to anything. And so the instantaneous response is, we don't know why Lu killed Sean. But what Genesis does at this point is, when asked the question, did Lu kill Sean because America is individualistic? It goes into its own memory and says, I am modeling an Asian reader. I believe that America is individualistic. I will insert that into the story. I will examine the consequences of that insertion, and then see what happens. And this is what happens. The question is asked, and it inserts into the story-- boom, boom, boom. And now Lu kills Sean is connected all the way back to America is individualistic. And so the machine can say yes, but that's not the interesting part. Now, this is what it says, but that's not the interesting part. The interesting part is this-- it describes to itself what it's doing in its own language, which it treats as its story of its own behavior. So it now has the capacity to introspect into what it itself is doing. I think that's pretty cool. It's a kind of-- OK. It's a suitcase word-- self-awareness, consciousness, a big suitcase word, but you can say that this system is aware of its own behavior. By the way, this is one of the things that Turing addressed in his original paper. One of the arguments against AI was-- I forgot what Turing called it-- the disabilities argument. And people were saying computers can never do these kinds of things, one of which is be the subject of its own thought. But Genesis is now reading the story of its own behavior and being the subject of its own thought. OK. So what if they really become smart? Now, I will become a little bit on the whimsical side. Suppose they really get smart. What will we want to do?
Maybe we ought to simulate these suckers first before we turn them loose in the world. Do you agree with that? After all, simulation is now a well-developed art. So we can take these machines-- maybe there will be robots. Maybe there will just be programs, and we can do elaborate simulations to make sure that they're not dangerous. And we would want to do that in as natural a world as possible. And we'd want to do these experiments for a long time before we turned them loose to see what kinds of behaviors were to be expected. And you see where I'm going with this? Maybe we're at it. I think it's a pretty interesting possibility. I'm not sure any of you are it, but I know that I might be it. This is a great simulation to see if we're dangerous. And I must say, if we are a simulation to see if we're dangerous, it's not going very well. Key questions revisited. Why has AI made so little progress? Because for too many years, it was about reasoning instead of about what's different. How can we make progress now? By focusing on what it is that makes human intelligence unique. How can a computer be really smart without a perceptual system or can it be? And I think yes, but I'm a little bit agnostic. Should engineers care? Absolutely, because it's not the hardware that we're trying to replicate. It's the understanding of the computational imperatives. What are the dangers and what should we do about them? We need to make any system that we depend on capable of explaining itself. It needs to have a kind of ability to explain what it's doing in our terms. No. Don't just tell me it's a school bus. Tell me why you think it's a school bus. You did this thing. You better be able to defend yourself in something analogous to a court of law. These are the things we need to do. And finally, my final slide is this one. This is just a summary of the things I've tried to cover this morning. And one last thing, I think it was Cato the Elder who said, carthago delenda est. Every speech to the Roman Senate ended with Carthage must be destroyed. He could be talking about the sewer system, and the final words would be Carthage must be destroyed. Well, here is my analog to Carthage must be destroyed. I think this is so, because I think-- well, many people consider artificial intelligence products to be dangerous. I think understanding our own intelligence is essential to the survival of the species. We really do need to understand ourselves better. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_71_Josh_McDermott_Introduction_to_Audition_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOSH MCDERMOTT: I'm going to talk about hearing. I gather this is the first time in this class that, really, people have talked about audition. And usually, when I give talks and lectures, I find it's often helpful to just start out by doing some listening. So we're going to start out this morning by just listening to some examples of typical auditory input that you might encounter during the course of your day. So all you've got to do is just close your eyes and listen. Here we go. [AUDIO PLAYBACK] [INAUDIBLE] [END PLAYBACK] OK. So you guys could probably tell what all of those things were, in case not, here are some labels. So the first one is just a scene I recorded on my iPhone in a cafe. The second one was something from a sports bar. Then there was a radio excerpt, and then some Barry White. And the point is that in all of those cases, I mean, they're all pretty different, but in all those cases, your brain was inferring an awful lot about what was happening in the world from the sound signal that was entering your ears. Right? And so what makes that kind of amazing and remarkable is the sensory input that it was getting, to first order, just looks like this. Right? So there was sound energy that traveled through the air. It was making your ear drums wiggle back and forth in some particular pattern that would be indicated by that waveform there, that plots the pressure at your eardrum as a function of time. And so from that particular funny pattern of wiggles, you were able to infer all those things. So in the case of the cafe scene, you could tell that there were a few people talking. You could probably hear a man and a woman, you could tell there was music in the background. You could hear there was dishes clattering. You might have been able to tell somebody had an Irish accent. You could probably tell, in the sports bar, that people were watching football and talking about it, and there were a bunch of people there. You instantly recognized the radio excerpt as drivetime radio, that sort of, like, standard sound. And in the case of the music excerpt, you could probably hear six or seven, or maybe eight different instruments that were each, in their own way, contributing to the groove. So that's kind of audition at work. And so the task of the brain is to take the sound signal that arrives at your ears that is then transduced into electrical signals by your ears, and then to interpret it and to figure out what's out there in the world. So what you're interested in, really, is not the sound itself. You're interested in whether it was a dog, or a train, or rain, or people singing, or whatever it was. And so the interesting and hard problem that comes along with this is that most of the properties that we are interested in as listeners are not explicit in the waveform, in the sense that if I hand you the sound waveform itself, and you either look at it, or you run sort of standard machine classifiers on it, it would be very difficult to discern the kinds of things that you, with your brain, can very easily just report. 
And so that's really that's the question that I'm interested in and that our lab studies, namely, how is it that we derive information about the world from sound? And so there's lots of different aspects to the problem of audition. I'm going to give you a taste of a few of them over the course of today's lecture. A big one that lots of people have heard about is often known as the cocktail party problem, which refers to the fact that real world settings often involve concurrent sound. So if you're in a room full of busy people, you might be trying to have a conversation with one of them, but there'll be lots of other people talking, music in the background, and so on, and so forth. And so from that kind of complicated mixed signal that enters your ears, your brain has to estimate the content of one particular source of interest, of the person that you're trying to converse with. So really, what you'd like to be hearing might be this. AUDIO: She argues with her sister. JOSH MCDERMOTT: But what might enter your ear could be this. AUDIO: [INTERPOSING VOICES] JOSH MCDERMOTT: Or maybe even this. [INTERPOSING VOICES] JOSH MCDERMOTT: Or maybe even this. AUDIO: [INTERPOSING VOICES] JOSH MCDERMOTT: So what I've done here plotted next to the icons are spectrograms. That's a way of taking a sound signal and turning that into an image. So it's a plot of the frequency content over time. And you can see with the single utterance up here at the top, there's all kinds of structure, right? And we think that your brain uses that structure to understand what was being said. And you can see that, as more and more people are added to the party, that that structure becomes progressively more and more obscured, until, by the time you get to the bottom, it's kind of amazing that you can kind of pull anything out at all. And yet, as you hopefully heard, your brain has this remarkable ability to attend to and understand the speech signal of interest. And this is an ability that still, to this day, is really unmatched by machines. So present day speech recognition algorithms are getting better by the minute, but this particular problem is still quite a significant challenge. And you've probably encountered this when you try to talk to your iPhone when you're in a car or wherever else. Another kind of interesting complexity in hearing is that sound interacts with the environment on its way to your ears. So, you know, you typically think of yourself as listening to, say, a person talking or to some sound source, but in reality, what's happening is something like this picture here. So there's a speaker in the upper right corner from which sound is emanating, but the sound takes a whole lot of different paths on its way to your ears. There's the direct path, which is shown in green, but then there are all these other paths where it can reflect off of the walls in the room. So the blue lines here indicate paths where there's a single reflection. And the red lines indicate paths where there are two reflections, and so you can see there's a lot of them. And so the consequence of this is that your brain gets all these delayed copies of the source signal. And what that amounts to is really massive distortion of the signal. And this is known as reverberation. So this is dry speech. Of course, you're hearing this in this auditorium that itself has lots of reverberation, so you're not actually going to hear it dry, but you'll still be able to hear a difference. AUDIO: They ate the lemon pie. Father forgot the bread. 
JOSH MCDERMOTT: And this is that signal with lots of reverberation added, as though you were listening to it in a cathedral or something. Of course, you're, again, hearing it in this room, as well. AUDIO: They ate the lemon pie. Father forgot the bread. JOSH MCDERMOTT: And you can still hear a difference. And if the reverb in this auditorium is swamping that, you can just look at the waveforms. And you can see that the waveforms of those two signals look pretty dramatically different, as do the spectrograms. All right? So the point is that the consequence of all of those delayed reflections massively distorts the signal, all right? Physically, there are two really different things in those two cases. But again, your ability to recognize what's being said is remarkably invariant to the presence of that reverberation. And again, this is an instance where humans really are outperforming machines to a considerable extent. This graph is a little bit dated. This is from, I think, three years ago, but it's a plot of five different speech recognition algorithms, and the percent of errors they're making when given a speech signal, as a function of the amount of reverberation. And so zero means that there's no reverberation. That's the dry case, right? And so speech recognition works pretty well without any reverb. But when you add a little bit of reverberation-- and this is measured in terms of the reverberation time, it's the amount of time that it takes the reverb to fall off by a certain specified amount, and 300 and 500 milliseconds are actually very, very modest. So in this auditorium, my guess would be that the reverberation time is maybe even a couple seconds. So this is, like, what you get in a small classroom, maybe. Maybe even less than that. But you can see that it causes major problems for speech recognition. And it's because the information in the speech signal gets blurred out over time, and again, it's just massive distortion. So your brain is doing something pretty complicated in order for you to be so robust to the presence of the reverberation. So I run a research group where we study these kinds of problems. It's called the Lab for Computational Audition, in the Department of Brain and Cognitive Sciences at MIT. We operate at the intersection of psychology, and neuroscience, and engineering, where what we aspire to do is to understand how it is that people hear so well, in computational terms that would allow us to instantiate those abilities in algorithms that we might replicate in machines. And so the research that we try to do involves hopefully symbiotic relationships between experiments in humans, auditory neuroscience, and machine algorithms. And the general approach that we take is to start with what the brain has to work with. And by that, I mean we try to work with representations like the ones that are in the early auditory system. And so here's the plan for this morning. And this is subject to change, depending on what kind of feedback I get from you guys. But my general plan was to start out with an overview of the auditory system, because I gather there's sort of a diversity of backgrounds here, and nobody's talked about audition so far. So I was going to go through a little overview. And then there's been a special request to talk about some texture perception. I gather that there were some earlier lectures on visual texture, and that might be a useful thing to talk about. It's also a nice way to understand auditory models a little bit better.
I was then going to talk a little bit about the perception of individual sound sources and sort of the flip side to sound texture, and then conclude with a section on auditory scene analysis, so what your brain is able to do when it gets a complicated sound signal like you would get normally in the world, that has contributions from multiple causes and you have to infer those. OK. And so we'll take a break about halfway through, as I guess that's kind of standard. And I'm happy for people to interrupt and ask questions. OK. So the general outline for hearing, right, is that sound is created when objects in the world vibrate. Usually, this is because something hits something else, or in the case of a biological organism, there is some energy imparted to the vocal cords. And the object vibrates. That vibration gets transmitted to the air molecules around it, and you get a sound wave that travels through the air. And that sound wave then gets measured by the ears. And so the ear is a pretty complicated device that is designed to measure sound. It's typically divided up into three pieces. So there's the outer ear, consisting of the pinna and the eardrum. In functional terms, people usually think about this as a directional microphone. There's the middle ear. There are these three little bones in between the eardrum and the cochlea that are typically ascribed the functions of impedance matching and overload protection. I'm not going to talk about that today. And then there's the inner ear, the cochlea, which in very coarse engineering terms, we think of as doing some kind of frequency analysis. And so again, at kind of a high level, so you've got your ears here. This is the inner ear on each side. And then those send feedforward input to the midbrain, and there are a few different way stations here: the cochlear nucleus, the superior olivary complex, the inferior colliculus, and then the medial geniculate nucleus of the thalamus. And the thalamus then projects to the auditory cortex. And there's a couple things at a high level that are worth noting here. One is that the pathways here are actually pretty complicated, especially relative to the visual system that you guys have been hearing lots about. Right? So there's a bunch of different stops on the way to the cortex. Another interesting thing is that input from the two ears gets mixed at a pretty early stage. OK. All right, so let's step back and talk about the cochlea for a moment. And I realize that some of you guys will know about this stuff, so we'll go through it kind of quick. Now one of the signature features of cochlear transduction is that it's frequency tuned. So this is an unwrapped version of the cochlea. So if we step back to here, all right-- so we've got the outer ear, the ear canal, the eardrum, those three little bones that I told you about that connect to the cochlea. And the cochlea is this thing that looks like a snail. And then if we unroll that snail and look at it like this, you can see that the cochlea consists of these tubes separated by a membrane. The membrane is called the basilar membrane. That's worth knowing about. So sound enters here at the base and sets up a traveling wave along this membrane. So this is really a mechanical thing that happens. So there's actually, like, a physical vibration that occurs in this membrane. And it's a wave that travels along the cochlea.
And one of the signature discoveries about the cochlea is that that traveling wave peaks in different places, depending on the frequency content of the sound. And that's schematized in these drawings here on the right. So if the ear were to receive a high frequency sound, that traveling wave would peak near the base. If it were to receive a medium frequency sound, the wave would peak somewhere in the middle. And a low frequency sound would peak near the apex. And the frequency tuning, it's partly mechanical in origin, so that membrane, it varies in thickness and stiffness along its length. And there's also a contribution that's non-linear and active, that we'll talk briefly about in a little bit. So this is a close up. This is a cross-section. So imagine you took this diagram and you kind of cut it in the middle. This is a cross-section of the cochlea. So this here is the basilar membrane, and this is the organ of Corti that sits on top of the basilar membrane. And so if you look closely, you can see that there's this thing in here. This is the inner hair cell. And that's the guy that does the transduction that takes the mechanical energy that's coming from the fact that this thing is vibrating up and down, and turns that into an electrical signal that gets sent to your brain. And so the way that that works is that there is this other membrane here called the tectorial membrane. And the hair cell has got these cilia that stick out of it. And as it moves up and down, there's a shearing that's created between the two membranes. The hair cell body deforms, and that deformation causes a change in its membrane potential. And that causes neurotransmitter to be released. All right, so that's the mechanism by which the brain takes that mechanical signal and turns it into an electrical signal that gets sent to your brain. The other thing to note here, and we'll return to this, is that there are these other three cells here that are labeled as outer hair cells. And so those kind of do what the inner hair cell does in reverse. So they get an electrical signal from your brain, and that causes the hair cell bodies to deform, and that actually alters the motion of the basilar membrane. So it's like a feedback system that we believe serves to amplify sounds and to sharpen their tuning. So there's feedback all the way to the cochlea. OK. So this is just another view. So here's the inner hair cell here. As this thing vibrates up and down, there's a shearing between these membranes. The inner hair cell membrane potential changes, and that causes neurotransmitter release. OK, and so here's the really important point. So we just talked about how there's this traveling wave that gets set up that peaks in different places, depending on the frequency content of the sound. And so because, to first order, only part of the basilar membrane moves for a given frequency of sound, each hair cell, and the auditory nerve fiber that it synapses with, signals only particular frequencies of sound. And so this is sort of the classic textbook figure that you would see on this, where what's being plotted on the y-axis is the minimum sound intensity needed to elicit a neural response. And the x-axis is the frequency of a tone with which you would be stimulating the ear. So we have a little pure tone generator with a knob that allows you to change the frequency, and another knob that allows you to change the level.
And you sit there recording from an auditory nerve fiber, varying the frequency, and then turning the level up and down until you get spikes out of the nerve fiber. And so for every nerve fiber, there will be some frequency called the characteristic frequency, at which you can elicit spikes when you present the sound at a fairly low level. And then as you change the frequency, either higher or lower, the level that is needed to elicit a response grows. And so you can think of this as like a tuning curve for that auditory nerve fiber. All right? And different nerve fibers have different characteristic frequencies. Here is just a picture that shows a handful of them. And so together, collectively, they kind of tile the space. And of course, given what I just told you, you can probably guess that each of these nerve fibers would synapse to a different location along the cochlea. The ones that have high characteristic frequencies would be near the base. The ones that have low characteristic frequencies would be near the apex. OK. So in computational terms, the common way to think about this is to approximate auditory nerve fibers with bandpass filters. And so this would be the way that you would do this in a model. Each of these curves is a bandpass filter, so what you see on the y-axis is the response of the filter. The x-axis is frequency. So each filter has some particular frequency at which it gives a peak response, and then the signal is attenuated on either side of that peak frequency. And so one way to think about what the cochlea is doing to the signal is that it's taking the signal that enters the ears, this thing here-- so this is just a sound signal, so the amplitude just varies over time in some particular way. You take that signal, you pass it through this bank of bandpass filters, and then the output of each of these filters is a filtered version of the original signal. And so in engineering terms, we call that a subband. So this would be the result of taking that sound signal and filtering it with a filter that's tuned to relatively low frequencies, 350 to 520 Hertz, in this particular case. And so you can see that the output of that filter is a signal that varies relatively slowly. So it wiggles up and down, but you can see that the wiggles are in some sort of confined space of frequencies. If we go a little bit further up, we get the output of something that's tuned to slightly higher frequencies. And you can see that the output of that filter is wiggling at a faster rate. And then if we go up further still, we get a different thing, that is again wiggling even faster. And so collectively, we can take this original broadband signal, and then represent it with a whole bunch of these subbands. Typically, you might use 30, or 40, or 50. So one thing to note here-- you might have noticed that there's something funny about this picture, in that the filters, here, which are indicated by these colored curves, are not uniform. Right? So the ones down here are very narrow, and the ones up here are very broad. And that's not an accident. That's roughly what you find when you actually look in the ear. And why things are that way is something that you could potentially debate. But it's very clear, empirically, that that's roughly what you find. I'm not going to say too much more about this now, but remember that because it will become important a little bit later on.
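To make the filter-bank picture concrete, here is a minimal Python sketch of that decomposition, assuming numpy and scipy are available. The center frequencies, bandwidths, and filter order are illustrative choices for the sketch, not the parameters of any actual cochlear model.

```python
# A minimal sketch of a cochlea-style bandpass filter bank.
# All parameters here are illustrative, not a real cochlear model.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000  # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
signal = np.random.randn(t.size)  # stand-in for a recorded sound

# Log-spaced center frequencies, mimicking the nonuniform spacing in the
# ear: filters are narrow at low center frequencies, broad at high ones.
center_freqs = np.geomspace(100, 6400, num=30)

subbands = []
for cf in center_freqs:
    bw = cf / 4  # crude bandwidth that grows with center frequency
    sos = butter(2, [cf - bw / 2, cf + bw / 2],
                 btype='bandpass', fs=fs, output='sos')
    subbands.append(sosfiltfilt(sos, signal))

# Each subband is a filtered copy of the input; summing the subbands
# gives back an approximation of the original broadband signal.
reconstruction = np.sum(subbands, axis=0)
```

Each subband here is what one of the colored curves would pass, and the log spacing with bandwidths growing in proportion to center frequency is a crude stand-in for the nonuniform filters just described.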
So we can take these filters, and turn that into an initial stage of an auditory model, the stuff that we think is happening in the early auditory system, where we've got our sound signal that gets passed through this bank of bandpass filters. And you're now representing that signal as a bunch of different subbands, just two of which are shown here for clarity. And the frequency selectivity that you find in the ear has a whole host of perceptual consequences. I won't go through all of them exhaustively. It's one of the main determinants of what masks what. So for instance, when you're trying to compress a sound signal by turning it into an mp3, you have to really pay attention to the nature of these filters. And, you know, you don't need to represent parts of the filters that would be-- sorry, parts of the signal that would be masked, and these filters tell you a lot about that. One respect in which frequency selectivity is evident is in the ability to hear out individual frequency components of sounds that have lots of frequencies in them. So this is kind of a cool demonstration. And to kind of help us see what's going on, we're going to look at a spectrogram of what's coming in. So hopefully this will work. So what this little thing is doing is there's a microphone in the laptop, and it takes that microphone signal and turns it into a spectrogram. It's using a logarithmic frequency scale here, so it goes from about 100 Hertz up to 6400. And so if I don't say anything, you'll be able to hear the room noise, or see the room noise. All right, so that's the baseline. And also, the other thing to note is that the microphone doesn't have very good bass response. And so the very low frequencies won't show up. But everything else will. OK. [AUDIO PLAYBACK] - Canceled harmonics. A complex tone is presented, followed by several cancellations and restorations of a particular harmonic. This is done for harmonics 1 through 10. [LOUD TONE] [TONE] [TONE] [END PLAYBACK] OK. And just to be clear, the point of this, right, is that what's happening here-- just stop that for a second-- is you're hearing what's called a complex tone. That just means a tone that has more than one frequency. All right? That's what constitutes complexity for a psychoacoustician. So it's a complex tone. And each of the stripes, here, is one of the frequencies. So this is a harmonic complex. Notice that the fundamental frequency is 200 Hertz, and so all the other frequencies are integer multiples of that. So there's 400, 600, 800, 1,000, 1,200, and so on, and so forth. OK? Then what's happening is that in each little cycle of this demonstration, one of the harmonics is getting pulsed on and off. All right? And the consequence of it being pulsed on and off is that you're able to actually hear it as, like, a distinct thing. And the fact that that happens, that's not, itself, happening in the ear. That's something complicated and interesting that your brain is doing with that signal it's getting from the ear. But the fact you're able to do that is only possible by virtue of the fact that the signal that your brain gets from the ear divides the signal up in a way that kind of preserves the individual frequencies. All right? And so this is just a demonstration that you're actually able, under appropriate circumstances, to hear out particular frequency components of this complicated thing, even though if you just heard it by itself, it would just sound like one thing.
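A stimulus in the spirit of the canceled-harmonics demonstration is easy to approximate in code. In the actual demo the harmonic is canceled by adding a phase-inverted copy; the numpy sketch below simply gates one harmonic off and on, which is a simplification, and the durations and choice of harmonic are made up for illustration.

```python
# A rough sketch of a canceled-harmonics style stimulus: a 200 Hz
# harmonic complex in which one harmonic is briefly removed and restored.
import numpy as np

fs = 16000
f0 = 200  # fundamental frequency (Hz), as in the demo
t = np.arange(0, 3.0, 1 / fs)  # 3 seconds; duration is made up

pulsed = 4  # which harmonic to pulse on and off (illustrative choice)
gate = np.ones_like(t)
gate[(t > 1.0) & (t < 1.5)] = 0.0  # silence the harmonic for 0.5 s
gate[(t > 2.0) & (t < 2.5)] = 0.0  # and again, so it pops out twice

tone = np.zeros_like(t)
for n in range(1, 11):  # harmonics 1 through 10
    component = np.sin(2 * np.pi * n * f0 * t)
    tone += gate * component if n == pulsed else component
tone /= np.max(np.abs(tone))  # normalize to avoid clipping
```

Played through a speaker, the gated harmonic briefly pops out as a separate tone each time it returns, even though it is an exact integer multiple of the fundamental.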
So another kind of interesting and cool phenomenon that is related to frequency selectivity is the perception of beating. So how many people here know what beating is? Yeah, OK. So beating is a physical phenomenon that happens whenever you have multiple different frequencies that are present at the same time. So in this case, those are the red and blue curves up at the top. So those are sinusoids of two different frequencies. And the consequence of them being two different frequencies is that over time, they shift in and out of phase. And so there's this particular point here where the peaks of the waveforms are aligned, and then there's this point over here where the peak of one aligns with the trough of the other. It's just because they're two different frequencies and they slide in and out of phase. And so when you play those two frequencies at the same time, you get the black waveform-- they sum linearly. That's what sounds do when they're both present at once. And so at the point at which the peaks align, there is constructive interference. And at the point at which the peak and the trough align, there is destructive interference. And so over time, the amplitude waxes and wanes. And so physically, that's what's known as beating. And the interesting thing is that the audibility of the beating is very tightly constrained by the cochlea. So here's one frequency. [TONE] Here's the other. [TONE] And then you play them at the same time. [TONE] Can you hear that fluttering kind of sensation? All right. So that's amplitude modulation. OK. And so I've just told you how we can think of the cochlea as this set of filters, and so it's an interesting empirical fact that you only hear beating if the two frequencies that are beating fall roughly within the same cochlear bandwidth. OK? And so when they're pretty close together, like, one semitone, the beating is very audible. [TONE] But as you move them further apart-- so three semitones is getting close to, roughly, what a typical cochlear filter bandwidth would be, and the beating is a lot less audible. [TONE] And then by the time you get to eight semitones, you just don't hear anything. It just sounds like two tones. [TONE] Very clear. So contrast that with-- [TONE] All right. So the important thing to emphasize is that in all three of these cases, physically, there's beating happening. All right? So if you actually were to look at what the eardrum was doing, you would see the amplitude modulation here, but you don't hear that. So this is just another consequence of the way that your cochlea is filtering sound. OK. All right, so we've got our auditory model, here. What happens next? So there's a couple of important caveats about this. And I mentioned this, in part, because some of these things are-- we don't really know exactly what the true significance is of some of these things, especially in computational terms. So I've just told you about how we typically will model what the cochlea is doing as a set of linear bandpass filters. So you get a signal, you apply a linear filter, you get a subband. But in actuality, if you actually look at what the ear is doing, it's pretty clear that linear filtering provides only an approximate description of cochlear tuning. And in particular, this is evident when you change sound levels. So what this is a plot of is tuning curves that you would measure from an auditory nerve fiber.
So we've got spikes per second on the y-axis, we've got the frequency of a pure tone stimulus on the x-axis, and each curve plots the response at a different stimulus intensity. All right? So down here at the bottom, we've got 35 dB SPL, so that's, like, a very, very low level-- like, maybe if you rub your hands together or something, that would be close to 35. And 45-- and you can see here that the tuning is pretty narrow here at these low levels. So 45 dB SPL. So you get a pretty big response, here, at what looks like, you know, 1700 Hertz. And then by the time you go down half an octave or something, there's almost no response. But as the stimulus level increases, you can see that the tuning broadens really very considerably. And so up here at 75 or 85 dB, you're getting a response from anywhere from 500 Hertz out to, you know, over 2,000 Hertz. That's, like, a two-octave range, right? So the bandwidth is growing pretty dramatically. And this is very typical. Here is a bunch of examples of different nerve fibers that are doing the same thing. So at high levels, the tuning is really broad. At low levels, it's kind of narrow. And so mechanistically, in terms of the biology, we have a pretty good understanding of why this is happening. So what's going on is that the outer hair cells are providing amplification of the frequencies kind of near the characteristic frequency of the nerve fiber at low levels, but not at high levels. And so at high levels, what you're seeing is just very broad kind of mechanical tuning. But what really sort of remains unclear is what the consequences of this are for hearing, and really, how to think about it in computational terms. So it's clear that the linear filtering view is not exactly right. And one of the interesting things is that it's not like you get a lot worse at hearing when you go up to 75 or 85 dB. In fact, if anything, most psychophysical phenomena are actually better in some sense. This phenomenon, as I said, is related to this distinction between inner hair cells and outer hair cells. So the inner hair cells are the ones that, actually, are responsible for the transduction of sound energy, and the outer hair cells we think of as part of a feedback system that amplifies the motion of the membrane and sharpens the tuning. And that amplification is selective for frequencies at the characteristic frequency, and really occurs at very low levels. OK. So this is related to what Hynek was mentioning and the question that was asked earlier. So there's this other important response property of the cochlea, which is that for frequencies that are sufficiently low, auditory nerve spikes are phase locked to the stimulus. And so what you're seeing here is a single trace of a recording from a nerve fiber that's up top, and then at the bottom is the stimulus that would be supplied to the ear, which is just a pure tone of some particular frequency. And so you can note, like, two interesting things about this response. The first is that the spikes are intermittent. You don't get a spike at every cycle of the frequency. But when the spikes occur-- sorry, they occur at a particular phase relative to the stimulus. Right? So they're kind of just a little bit behind the peak of the waveform in this particular case, right, and in every single case. All right, so this is known as phase locking. And it's a pretty robust phenomenon for frequencies under a few kilohertz. And this is in non-human animals.
There's no measurements of the auditory nerve in humans because to do so is highly invasive, and just nobody's ever done it, and probably won't ever. So this is another example of the same thing, where again, this is sort of the input waveform, and this is a bunch of different recordings of the same nerve fiber that are time aligned. So you can see the spikes always occur at a particular region of phase space. So they're not uniformly distributed. And the figure here on the right shows that this phenomenon deteriorates at very high frequency. So this is the plot of the strength of the phase locking as a function of frequency. And so up to a kilohertz, it's quite strong, and then it starts to kind of drop off, and above about 4k there's not really a whole lot. OK. And so as I said, one of the other salient things is that the fibers don't fire with every cycle of the stimulus. And one interesting fact about the ear is that there are a whole bunch of auditory nerve fibers for every inner hair cell. And people think that one of the reasons for that is because of this phenomena, here. And this is probably due to things like refractory periods and stuff, right? But if you have a whole bunch of nerve fibers synapsing under the same hair cell, the idea is that, well, collectively, you'll get spikes at every single cycle. So here's just some interesting numbers. So the number of inner hair cells per ear in a human is estimated to be about 3500. There's about four times as many outer hair cells, or roughly 12,000, but that's the key number here. So coming out of every ear are roughly 30,000 auditory nerve fibers. So there's about 10 times as many auditory nerve fibers as there are inner hair cells. And it's interesting to compare those numbers to what you see in the eye. So these are estimates from a few years ago, from the eye of roughly 5 million cones per eye, lots of rods, obviously. But then you go to the optic nerve, and the number of optic nerve fibers is actually substantially less than the number of cones. So 1 and 1/2 million. So there's this big compression that happens when you go into the auditory nerve-- sorry, in the optic nerve, whereas there's an expansion that happens in the auditory nerve. And just for fun, these are rough estimates of what you find in the cortex. So in primary auditory cortex, this is a very crude estimate I got from someone a couple of days ago, of 60 million neurons per hemisphere. And in v1, the estimate that I was able to find was 140 million. So these are sort of roughly the same order of magnitude, although it's obviously smaller in the auditory system. But there is something very different happening here, in terms of the way the information is getting piped from the periphery onto the brain. And one reason for this might just be the fact that phenomena here, where the spiking in an individual auditory nerve fiber is going to be intermittent because the signals that it has to convey are very, very fast, and so you kind of have to multiplex in this way. All right, so the big picture here is that if you look at the output of the cochlea, there are, in some sense, two cues to the frequencies that are contained in a sound. So there's what is often referred to as the place of excitation in the cochlea. So these are the nerve fibers that are firing the most, according to a rate code. And there's also the timing of the spikes that are fired, in the sense that, for frequencies below about 4k, you get phase locking. 
And so the inter-spike intervals will be stereotyped, depending on the frequencies that come in. And so it's one of these sort of-- I still find this kind of remarkable when I sort of step back and think about the state of things, that this is a very basic question about neural coding at the very front end of the system. And the importance of these things really remains unresolved. So people have been debating this for a really long time, and we still don't really have very clear answers to the extent to which the spike timing really is critical for inferring frequency. So I'm going to play you a demo that provides-- it's an example of some of the circumstantial evidence for the importance of phase locking. And broadly speaking, the evidence for the importance of phase locking in humans comes from the fact that the perception of frequency seems to change once you kind of get above about 4k. So for instance, if you give people a musical interval, [SINGING] da da, and then you ask them to replicate that, in, say, different octaves, people can do that pretty well until you get to about 4k. And above 4k, they just break down. They become very, very highly variable. And that's evident in this demonstration. So what you're going to hear in this demo-- and this is a demonstration I got from Peter Cariani. Thanks to him. It's a melody that is probably familiar to all of you that's being played with pure tones. And it will be played repeatedly, transposed from very high frequencies down in, I don't know, third octaves or something, to lower and lower frequencies. And what you will experience is that when you hear the very high frequencies, well, A, they'll be kind of annoying, so just bear with me, but the melody will also be unrecognizable. And so it'll only be when you get below a certain point that you'll say, aha, I know what that is. OK? And again, we can look at this in our spectrogram, which is still going. And you can actually see what's going on. [MELODIC TONES] OK. So by the end, hopefully everybody recognized what that was. So let's just talk briefly about how these subbands that we were talking about earlier relate to what we see in the auditory nerve, and again, this relates to one of the earlier questions. So the subband is this blue signal here. This is the output of a linear filter. And one of the ways to characterize a subband like this that's band limited is by the instantaneous amplitude and the instantaneous phase. And these things, loosely, can be mapped onto a spike rate and spike timing in the auditory nerve. So again, this is an example of the phase locking that you see, where the spikes get fired at some particular point in the waveform. And so if you observe this, well, you know something about exactly what's happening in the waveform-- namely, you know that there's energy there in the stimulus because you're getting spikes, but you also actually know the phase of the waveform because the spikes are happening in particular places. And so this issue of phase is sort of a tricky one, because it's often not something that we really know how to deal with. And it's also just empirically the case that a lot of the information in sound is carried by the way that frequencies are modulated over time, as measured by the instantaneous amplitude in a subband. And so the instantaneous amplitude is measured by a quantity called the envelope. So that's the red curve, here, that shows how the amplitude waxes and wanes.
And the envelope is easy to extract from the auditory nerve response, just by computing the firing rate over local time windows. And in signal processing terms, we typically extract it with something called the Hilbert transform, by taking the magnitude of the analytic signal. So it's a pretty easy thing to pull out in MATLAB, for instance. So just to relate this to stuff that may be more familiar, the spectrograms that people have probably seen in the past-- again, these are pictures that take a sound waveform and plot the frequency content over time. One way to get a spectrogram is to have a bank of bandpass filters, to get a bunch of subbands, and then to extract the envelope of each subband, and just plot the envelope in grayscale, horizontally. All right? So a stripe through this picture is the envelope in grayscale. So it's black in the places where the energy is high, and white in the places where it's low. And so this is a spectrogram of this. [DRUMMING] All right, so that's what you just heard. It's just a drum break. And you can probably see that there are sort of events in the spectrogram that correspond to things like the drumbeats. Now, one of the other striking things about this picture is it looks like a mess, right? I mean, you listen to that drum break and it sounded kind of crisp and clean, and when you actually look at the instantaneous amplitude in each of the subbands, it just sort of looks messy and noisy. But one interesting fact is that this picture, for most signals, captures all of the information that matters perceptually in the following sense: if you have some sound signal and you let me generate this picture, and all you let me keep is that picture-- we throw out the original sound waveform-- then from that picture, I can generate a sound signal that will sound just like the original. All right? And in fact, I've done this. And here is a reconstruction from that picture. [DRUMMING] Here's the original. [DRUMMING] Sounded exactly the same. OK? And I'll tell you in a second how we do this, but the fact that this picture looks messy and noisy is, I think, mostly just due to the fact that your brain is not used to getting the sensory input in this format. Right? You're used to hearing it as sound. Right? And so your visual system is actually not optimally designed to interpret a spectrogram. I want to just briefly explain how you take this picture and generate a sound signal because this is sort of a useful thing to understand, and it will become relevant a little bit later. So the general game that we play here is you hand me this picture, right, and I want to synthesize a signal. And so usually what we do is we start out with a noise signal, and we transform that noise signal until it's in kind of the right representation. So we split it up into its subbands. And then we'll replace the envelopes of the noise subbands by the envelopes from this picture. OK? And so the way that we do that is we measure the envelope of each noise subband, we divide it out, and then we multiply by the envelope of the thing that we want. And that gives us new subbands. And we then add those up, and we get a new sound signal. And for various reasons, this is a process that needs to be iterated. So you take that new sound signal, you generate its subbands, you replace their envelopes by the ones you want, and you add them back up to get a sound signal. And if you do this about 10 times, what you end up with is something that has the envelopes that you want.
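[The envelope extraction just described is easy to sketch with scipy: bandpass filter to get a subband, then take the magnitude of the analytic signal via the Hilbert transform. The Butterworth filter below is an illustrative stand-in for the cochlea-like filters used in the actual model, and `impose_envelope` is a hypothetical helper showing one step of the iterative substitution he describes.]

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_envelope(x, fs, lo_hz, hi_hz, order=4):
    """Return one subband of x and its envelope (magnitude of the analytic signal)."""
    sos = butter(order, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    return band, np.abs(hilbert(band))

def impose_envelope(noise_band, target_env):
    """One step of the synthesis loop: divide out the noise subband's own
    envelope, then multiply in the target envelope."""
    own_env = np.abs(hilbert(noise_band))
    return noise_band / (own_env + 1e-12) * target_env

# Illustrative signal: a 2 kHz tone, amplitude-modulated at 4 Hz
fs = 16000
t = np.arange(fs) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 2000 * t)
band, env = subband_envelope(x, fs, 1800, 2200)
# One stripe of a spectrogram is essentially env, downsampled and gray-coded.
```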
And so then you can listen to it. And that's the thing that I played you. OK? So it's this iterative procedure, where you typically start with noise, you project the signal that you want onto the noise, collapse, and then iterate. OK. And we'll see some more examples of this in action. So I just told you how the instantaneous amplitude in each of the filter outputs, which we characterize with the envelope, is an important thing for sound representation. And so in the auditory model that I'm building here, we've got a second stage of processing now, where we've taken the subbands and we extract the instantaneous amplitude. And there's one other thing here that I'll just mention, which is that another important feature of the cochlea is what's known as amplitude compression. So the response of the cochlea as a function of sound level is not linear; rather, it's compressive. And this is due to the fact that there is selective amplification when sounds are quiet, and not when they're very high in intensity. And I'm not going to say anything more about this for now, but it will become important later. And this is something that has a lot of practical consequences. So when people lose their hearing, one of the common things that happens is that the outer hair cells stop working correctly. And the outer hair cells are one of the things that generates that compressive response. So they're the nonlinear component of processing in the cochlea. And so when people lose their hearing, the tuning both broadens, and you get a linear response to sound amplitude because you lose that selective amplification, and that's something that hearing aids try to replicate, and that's hard to do. OK. So what happens next? So everybody here-- does everybody here know what a spike-triggered average is? Yeah. Lots of people talked about this, probably. OK. So one of the standard ways that we investigate sensory systems, when we have some reason to think that things might be reasonably linear, is by measuring something called a spike-triggered average. And so the way this experiment might work is you would play a stimulus like this. [NOISE SIGNAL] So that's a type of noise signal. Right? So you'd play your animal that signal, you'd be recording spikes from a neuron that you might be interested in, and every time there's a spike, you would look back at what happens in the signal and then you would take all the little histories that preceded that spike. You'd average them together, and you get the average stimulus that preceded a spike. And so in this particular case, we're going to actually do this in the domain of the spectrogram. And that's because you might hypothesize that, really, what the neurons would care about would be the instantaneous amplitude in the sound signal, and not necessarily the phase. So if you do that, for instance, in the inferior colliculus, a stage of the midbrain, you see things that are pretty stereotyped. And we're going to refer to what we get out of this procedure as a spectrotemporal receptive field. And that would often be referred to as a STRF for short. So if you hear people talk about STRFs, this is what they mean. OK. So these are derived from methods that would be like the spike-triggered average. People don't usually actually do a spike-triggered average for various reasons, but what you get out is similar. And so what you see here is the average spectrogram that would be preceding a spike. And for this particular neuron, you can see there's a bit of red and then a bit of blue.
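[Before unpacking that particular STRF, here is a minimal sketch of the spike-triggered average itself, assuming a regularly sampled stimulus and spike times given as indices into it; for the spectrogram-domain version, the same code works with a 2D (time x frequency) stimulus array.]

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, window):
    """Average the `window` stimulus samples preceding each spike.
    stimulus can be 1D (a waveform) or 2D (time x frequency, giving an
    STRF-like spectrogram-domain average)."""
    snippets = [stimulus[i - window:i] for i in spike_indices if i >= window]
    return np.mean(snippets, axis=0)

# Toy example: a "neuron" that spikes when a smoothed noise stimulus is high
rng = np.random.default_rng(1)
stim = np.convolve(rng.standard_normal(20000), np.ones(20) / 20, mode="same")
spikes = np.where(stim > 1.0)[0]
sta = spike_triggered_average(stim, spikes, window=100)
# sta ramps up toward the spike time, recovering the feature the cell prefers.
```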
And so at this particular frequency, which, in this case, is something like 10 kilohertz or something like that, the optimal stimulus that would generate a spike is something that gives you an increase in energy, and then a decrease. And so what this corresponds to is amplitude modulation at a particular rate. And so you can see there's a characteristic timing here. So the red thing has a certain duration, and then there's the blue thing. And so there is this very rapid increase and decrease in energy at that particular frequency. So that's known as amplitude modulation. And so this is one way of looking at this in the domain of the spectrogram. Another way of looking at this would be to generate a tuning function as a function of the modulation rate. So you could actually change how fast the amplitude is being modulated, and you would see in a neuron like this, that the response would exhibit tuning. And so each one of the graphs here, or each one of the plots, the dashed curves in the lower right plot-- so each dashed line is a tuning curve for one neuron. So it's a plot of the response of that neuron as a function of the temporal modulation frequencies. So that's how fast the amplitude is changing with time. And if you look at this particular case here, you can see that there's a peak response when the modulation frequency is maybe 70 Hertz, and then the response decreases if you go in either direction. This guy here has got a slightly lower preferred frequency, and down here, lower still. And so again, what you can see is something that looks strikingly similar to the kinds of filter banks that we just saw when we were looking at the cochlea. But there's a key difference here, right, that here we're seeing tuning to modulation frequency, not to audio frequency. So this is the rate at which the amplitude changes. That's the amplitude of the sound, not the audio frequency. So there's a carrier frequency here, which is 10k, but the frequency we're talking about here is the rate at which this changes. And so in this case here, you can see the period here is maybe 5 milliseconds, and so this would correspond to a modulation of, I guess, 200 Hertz. Is that right? I think that's right. Yeah. OK. And so as early as the midbrain, you see stuff like this. So in the inferior colliculus, there's lots of it. And so this suggests a second, or in this case, a third stage of processing, which are known as modulation filters. This is a very old idea in auditory science that now has a fair bit of empirical support. And so the idea is that in our model, we've got our sound signal. It gets passed through this bank of bandpass filters, you get these subbands, you extract the instantaneous amplitude, known as the envelope, and then you take that envelope and you pass it through another filter bank. This time, these are filters that are tuned in modulation frequency. And the output of those filters-- again, it's exactly conceptually analogous to the output of this first set of band pass filters. So in this case, we have a filter that's tuned to low modulation rates, and so you can see what it outputs is something that's fluctuating very slowly. So it's just taken the very slow fluctuations out of that envelope. Here, you have a filter that's tuned to higher rates, and so it's wiggling around at faster rates. And you have a different set of these filters for each cochlear channel, each thing coming out of the cochlea. 
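[Conceptually, a modulation filter bank is just bandpass filtering applied again, but to the envelope rather than the waveform. A sketch, assuming octave-spaced center rates, half-octave bandwidths, and simple Butterworth filters; the real model's filters differ in detail, and `env_fs` (the envelope sample rate) must be high enough for the fastest rate.]

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def modulation_filter_bank(envelope, env_fs, center_rates=(2, 4, 8, 16, 32, 64, 128)):
    """Split one cochlear channel's envelope into modulation bands.
    Each output is tuned to a rate of amplitude fluctuation (in Hz),
    not to audio frequency."""
    bands = []
    for cf in center_rates:
        lo, hi = cf / 2 ** 0.25, cf * 2 ** 0.25   # half-octave band (an assumption)
        sos = butter(2, [lo, hi], btype="bandpass", fs=env_fs, output="sos")
        bands.append(sosfiltfilt(sos, envelope))
    return np.array(bands)  # shape: (n_modulation_channels, n_samples)
```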
So this picture here that we've gradually built up gives us a model of the signal processing that we think occurs between the cochlea and the midbrain or the thalamus. So you have the bandpass filtering that happens in the cochlea, you get subbands, you extract the instantaneous amplitude in the envelopes, and then you filter that again with these modulation filters. This is sort of a rough understanding of the front end of the auditory system. And the question is, given these representations, how do we do the interesting things that we do with sound? So how do we recognize things and their properties, and do scene analysis, and so on, and so forth. And this is something that is still very much in its infancy in terms of our understanding. One of the areas that I've spent a bit of time on to try to get a handle on this is sound texture. And I started working on this because I sort of thought it would be a nice way into understanding some of these issues, and because I gather that Eero was here talking about visual textures. I was asked to feature this, and hopefully this will be useful. So what are sound textures? Textures are sounds that result from large numbers of acoustic events, and they include things that you hear all the time, like rain-- [RAIN] --or birds-- [BIRDS CHIRPING] --or running water-- [RUNNING WATER] --insects-- [INSECTS] --applause-- [APPLAUSE] --fire, and so forth. [FIRE BURNING] OK. So these are sounds that, typically, are generated by large numbers of acoustic events. They're very common in the world. You hear these things all the time. But they've been largely unstudied. So there's been a long and storied history of research on visual texture, and this really had not been thought about very much until a few years ago. Now, a lot of the things that people typically think about in hearing are the sounds that are produced by individual events. And if we have time, we'll talk about this more later, but stuff like this. AUDIO: January. JOSH MCDERMOTT: Or this. [SQUEAK] And these are the waveforms associated with those events. And the point is that those sounds, they have a beginning, and an end, and a temporal evolution. And that temporal evolution is sort of part of what makes the sound what it is, right? And textures are a bit different. So here's just the sound of rain. [RAIN] Now of course, at some point, the rain started and hopefully at some point it will end, but the start and the end are not what makes it sound like rain, right? The qualities that make it sound like rain are just there. So the texture is stationary. So the essential properties don't change over time. And so I got interested in textures because it seemed like stationarity would make them a good starting point for understanding auditory representation because, in some sense, it sort of simplifies the kinds of things you have to worry about. You don't have to worry about time in quite the same way. And so the question that we were interested in is how people represent and recognize texture. So just to make that concrete, listen to this. [CHATTER] And then this. [CHATTER] So it's immediately apparent to you that those are the same kind of thing, right? In fact, they're two different excerpts of the same recording. But the waveform itself is totally different in the two cases. So there's something that your brain is extracting from those two excerpts that tells you that they're the same kind of thing, and that, for instance, they're different from this.
[BEES BUZZING] So the question is, what is it that you extract and store about those waveforms that tells you that certain somethings are the same and other things are different, and that allows you to recognize what things are? And so the key theoretical proposal that we made in this work is that because they're stationary, textures can be captured by statistics that are time averages of acoustic measurements. So the proposal is that when you recognize the sound of fire, or rain, or what have you, you're recognizing these statistics. So what kinds of statistics might we be measuring if you think this proposal has some plausibility? And so part of the reason for walking you through this auditory model is that whatever statistics the auditory system measures are presumably derived from representations like this, right, that constitute the input that your brain is getting from the auditory periphery. And so we initially asked how far one might get with representations consisting of fairly generic statistics of these standard auditory representations, things like marginal moments and correlations. So the statistics that we initially considered were not, in any way, specifically tailored to natural sounds, and really, ultimately, what we'd like to do would be to actually learn statistics from data that, actually, we think are good representations. That's something we're working on. But these statistics are simple and they involve operations that you could instantiate in neurons, so it seemed like maybe a reasonable place to start to at least get a feel for what the landscape was like. And so what I want to do now is just to give you some intuitions as to what sorts of things might be captured by statistics of these representations. And so at a minimum, to be useful for recognition, well, statistics need to give you different values for different sounds. And so let's see what happens. So let's first have a look at what kinds of things might be captured by marginal moments of amplitude envelopes from bandpass filters. OK? So remember, the envelope, here, you can think of as a stripe through a spectrogram, right, so it's the instantaneous amplitude in a given frequency channel. So the blue thing is the subband, the red thing here is the envelope. And the marginal moments will describe the way the envelope is distributed. So imagine you took that red curve and you collapsed that over time to get a histogram that tells you the frequency of occurrence of different amplitudes in that particular frequency channel. And that's what it looks like for this particular example. And you might think that, well, this can't possibly be all that informative, right, because it's obviously got some central tendency and some spread, but when you do this business of collapsing across time, you're throwing out all kinds of information. But one of the interesting things is that when you look at these kinds of distributions for different types of sounds, you see that they vary a fair bit, and in particular, that they're systematically different for a lot of natural sounds than they are for noise signals. So this is that same thing, so it's this histogram here, just rotated 90 degrees. So we've got the probability of occurrence on the y-axis, and the magnitude in the envelope of one particular frequency channel on the x-axis, for three different recordings. So the red plots what you get for a recording of noise, the blue is a stream, and the green is a bunch of geese. Geese don't quack. Whatever the geese do. 
AUDIENCE: Honk. JOSH MCDERMOTT: Yeah, honking. Yeah. All right. And this is a filter that's centered at 2,200 Hertz. In particular, these examples were chosen because the average value of the envelope is very similar in the three cases. But you can see that the distributions have very different shapes. So the noise one, here, has got pretty low variance. The stream has got larger variance, and the geese larger still, and it's also positively skewed. So it's got this kind of long tail here. And if you look at the spectrograms associated with these sounds, you can see where this comes from. So the spectrogram of noise is pretty gray, so the noise signal kind of hangs out around its average value most of the time, whereas the stream has got more gray and more black, and the geese recording actually has some white and some dark black. So here, white would correspond to down here, black would correspond to up here. And the intuition here is that natural signals are sparse. In particular, they're sparser than noise. So we think of natural signals as often being made up of events like raindrops, or geese calls, and these events are infrequent, but when they occur they produce large amplitudes in the signal. And when they don't occur, the amplitude is lower. In contrast, the noise signal doesn't really have those. But the important point about this from the standpoint of wanting to characterize a signal with statistics is that this phenomenon of sparsity is reflected in some pretty simple things that you could compute from the signal, like the variance and the skew. So you can see that the variance varies across these signals, as does the skew. All right. Let's take a quick look at what you might get by measuring correlations between these different channels. So these things also vary across sound. And you can see them in the cochleogram here, in this particular example, as reflected in these vertical streaks. So this is the cochleogram of fire, and fire's got lots of crackles and pops. [FIRE BURNING] And those crackles and pops show up as these vertical streaks in the cochleogram. So crackles and pops are click-like events. Clicks have lots of different frequencies in them, and so you see these vertical streaks in the spectrogram, and that introduces statistical dependencies between different frequency channels. And so that can be measured by just computing correlations between the envelopes of these different channels, and that's reflected in this matrix here. So every cell of this matrix is the correlation between a pair of channels. So we're going from low frequencies, here, to high, and low to high. The diagonal has got to be one, but the off-diagonal stuff can be whatever. You can see, for the example of fire, there's a lot of yellow and a lot of red, which means that the amplitudes of different channels tend to be correlated. But this is not the case for everything. So if you look at a water sound, like a stream-- [STREAM] --there are not very many things that are click-like, and most of the correlations here are pretty close to zero. So again, this is a pretty simple thing that you can measure, but you get different values for different sounds. Similarly, if we look at the power coming out of these modulation filters, you also see big differences across sounds. So that's plotted here for three different sound recordings, insects, waves, and a stream. And so remember that these modulation filters, we think of them as being applied to each cochlear channel.
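[Both kinds of statistics just described are a few lines of numpy. A sketch, assuming `env` is one channel's envelope and `envelopes` is an array of shape (n_channels, n_samples); normalizing the variance by the squared mean, so the statistic is insensitive to overall level, is an assumption about the exact units.]

```python
import numpy as np
from scipy.stats import skew, kurtosis

def envelope_marginals(env):
    """Marginal moments of one channel's envelope: the spread and asymmetry
    that separate noise (low variance) from a stream (higher variance) from
    geese (higher still, and positively skewed)."""
    m = env.mean()
    return {"mean": m,
            "var_norm": env.var() / (m ** 2 + 1e-12),  # level-invariant spread
            "skew": skew(env),                          # positive for sparse, event-like sounds
            "kurtosis": kurtosis(env)}

def envelope_correlations(envelopes):
    """Pairwise correlations between channel envelopes. Click-like events
    (fire crackles and pops) put energy in many channels at once, giving
    large off-diagonal values; water sounds stay near zero."""
    z = envelopes - envelopes.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-12
    return (z @ z.T) / z.shape[1]  # (n_channels, n_channels), ones on the diagonal
```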
So the modulation power that you would get from all your modulation filters is-- you can see that in a 2D plot. So this is the frequency of the cochlear channel, and this is the rate of the modulation channel. So these are slow modulations and these are fast modulations. And so for the insects, you can see that there are these, like, little blobs up here. And that actually corresponds to the rates at which the insects rub their wings together and make the sound that they make, which kind of gives it this shimmery quality. [INSECTS] In contrast, the waves have got most of the power here at very slow modulations. [WAVES] And the stream is pretty broadband, so there's modulations at a whole bunch of different rates. [STREAM] All right. So just by measuring the power coming out of these channels, we're potentially learning something about what's in the signal. And I'm going to skip this last example. All right, so the point is just that when you look at these statistics, they vary pretty substantially across sound. And so the question we were interested in is whether they could plausibly account for the perception of real world textures. The key methodological proposal of this work was that synthesis is a potentially very powerful way to test a perceptual theory. Now, maybe the kind of standard thing that you might think that you might try to do with this type of representation is measure these statistics, and then see whether, for instance, you could discriminate between different signals or maybe [AUDIO OUT]. And for various reasons, I actually think that synthesis is potentially a lot more powerful. And the notion is this, that if your brain is representing sounds with some set of measurements, then signals that have the same values of those measurements ought to sound the same to you. And so in particular, if we've got some real world recording, and we synthesize a signal to cause it to have the same measurements, the statistics in this case, as that real world recording, well, then the synthetic signal ought to sound like the real world recording if the measurements that we use are like the ones that the brain is using to represent sound. And so we can potentially use synthesis, then, to test whether a candidate representation-- in this case, these statistics that we're measuring-- is a reasonable representation for the brain to be using. So here's just a simple example to kind of walk you through the logic. So let's suppose that you had a relatively simple theory of sound texture perception, which is that texture perception might be rooted in the power spectrum. It's not so implausible. Lots of people think the power spectrum is sort of a useful way to characterize it [AUDIO OUT]. And you might think that it would have something to do with the way textures would sound. And so in the context of our auditory model, the power spectrum is captured by the average value of each envelope. Remember, the envelope's telling you the instantaneous amplitude, and so if you just average that, you're going to find out how much power is in each frequency channel. So that's how you would do it in this framework. So the way this would work, if you get some sound signal, say, this-- [BUBBLES] --you would pass it through the model, you'd measure the average value of the envelope, so you get a set of 30 numbers-- say, if you have 30 bandpass filters there, at the output of each one of those you get the average of the envelope, so you get 30 numbers.
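[The 2D modulation-power picture can be sketched by combining the helpers above: for every cochlear channel, pass its envelope through the modulation filter bank and record each band's power. Normalizing by the envelope's total variance is an assumption about the units; `modulation_filter_bank` is the sketch from earlier.]

```python
import numpy as np

def modulation_power(envelopes, env_fs, center_rates=(2, 4, 8, 16, 32, 64, 128)):
    """Rows: cochlear channels. Columns: modulation rates.
    Uses modulation_filter_bank from the earlier sketch."""
    rows = []
    for env in envelopes:
        bands = modulation_filter_bank(env, env_fs, center_rates)
        rows.append(bands.var(axis=1) / (env.var() + 1e-12))
    return np.array(rows)  # insects: blobs at mid rates; waves: power at slow rates
```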
And then you take those 30 numbers and you want to synthesize a signal, subject to the constraint of it having the same value for those 30 numbers. And so in this case, it's pretty simple to do. So we take a noise signal, we want to start out as random as possible, so we take a noise signal, we generate its subbands, and then we just scale the subbands up or down so that they have the right amount of power. And we add them back up, and we get a new sound signal. And then we listen to it and we see whether it sounds like the same thing. OK? Here's what they sound like. And as you will hear, they just sound like noise, basically, right? So this is supposed to sound like rain-- [RAIN] --or a stream-- [STREAM] --or bubbles-- [BUBBLES] --or fire-- [FIRE BURNING] --or applause. [APPLAUSE] So you might notice that they sound different, right? And you might have even been able to convince yourself, well, that sounds a little bit like applause, right? So there's something there. But the point is that they don't sound anything like-- [APPLAUSE] --or-- [FIRE BURNING] --so on, and so forth. OK? All right, so the point is that everything just sounds like noise, and so what this tells us is that our brains are not simply registering the spectrum when we recognize these textures. The question is whether additional simple statistics will do any better. And so we're going to play the same game, right, except we have a souped up representation that's got all these other things in it. And so the consequence of this is that the process of synthesizing something is less straightforward. So I was in Eero Simoncelli's lab at the time, and spent a while trying to get this to work, and eventually we got it to work. But conceptually, the process is the same. So you have some original sound recording, rain, or what have you, you pass it through your auditory model, and then you measure some set of statistics. And then you start out with a noise signal, you pass it through the same model, and then measure its statistics, and those will in general be different from the target values. And so you get some error signal here, and you use that error signal to perform gradient descent on the representation of the noise in the auditory model. And so you cause the envelopes of the noise signal to change in ways that cause their statistics to move towards the target value. And there's a procedure here by which this is iterated, and I'm not going to get into the details. If you want to play around with it, there's a toolbox that's now available on the lab website if you want to do that, and it's described in more detail in that paper. All right. So the result, the whole point of this, is that we get a signal that shares the statistics of some real world sound. How do they sound? And remember, we're interested in this because if these statistics, if this candidate representation accounts for our perception of texture, well, then the synthetic signals ought to sound like new examples of the real thing. And the cool thing, and rewarding part of this whole thing, is that in many cases, they do. So I'm just going to play the synthetic versions. All of these were generated from noise, just by causing the noise to match the statistics of, in this case, rain-- [RAIN] --stream-- [STREAM] --bubbles-- [BUBBLES] --fire-- [FIRE BURNING] --applause-- [APPLAUSE] --wind-- [WIND BLOWING] --insects-- [INSECTS] --birds, oops-- [BIRDS CHIRPING] --and crowd noise. [CHATTER] All right. It also works for a lot of unnatural sounds.
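[The spectrum-only synthesis described above is simple enough to sketch end to end: split a noise signal into subbands, rescale each one so its power matches the corresponding subband of the target recording, and re-sum. The band edges and filter order are illustrative; re-summing filtered bands is only an approximate reconstruction, which is fine for this demonstration.]

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def match_spectrum(target, noise, fs, band_edges):
    """Give `noise` the subband power of `target` -- matching the spectrum
    and nothing else, which is why the result still sounds like shaped noise."""
    out = np.zeros(len(noise))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        t_band = sosfiltfilt(sos, target)
        n_band = sosfiltfilt(sos, noise)
        gain = np.sqrt(np.mean(t_band ** 2) / (np.mean(n_band ** 2) + 1e-12))
        out += gain * n_band
    return out

# e.g. band_edges = np.geomspace(50, 8000, 31) gives ~30 log-spaced bands
```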
Here's rustling paper-- [RUSTLING PAPER] --and a jackhammer. [JACKHAMMER] All right. And so the cool thing about this is you can put whatever you want in there, right? You can measure the statistics from anything, and you can generate something that is statistically matched. And when you do this with a lot of textures, you tend to get something that captures some of the qualitative properties. And so the success of this-- and this, in this case, is the reason why this is scientifically interesting in addition to fun-- is that it lends plausibility to the notion that these statistics could underlie the representation and recognition of textures. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Seminar_9_Surya_Ganguli_Statistical_Physics_of_Deep_Learning.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. SURYA GANGULI: I'm going to talk about statistical physics of deep learning, essentially. So this is some ongoing work in my lab that was really motivated by trying to understand how neural networks and infants learn categories. And then it sort of led to a bunch of results in deep learning that involved statistical physics. So I wanted to just introduce my lab a little bit. I'm an interloper from the Methods in Computational Neuroscience Summer School where I tend to spend a month. And so there, you know, the flavor of research that we do there and the flavor of research we do in our lab is sort of drilling down into neural mechanisms underlying well-defined computations. And we've been working on that. You know, I spent a lot of time talking to neurophysiologists especially at Stanford where I am. So we have a whole bunch of collaborations going on now involving understanding neural computation. So, for example, the retina itself is actually a deep neural circuit already, because there's an intervening layer of neurons, the bipolar cells and amacrine cells, that intervene between the photoreceptors and the ganglion cells. And oftentimes, what we do is we shine light on the photoreceptors and we measure the ganglion cells, but we have no clue what's going on in the interior. So we've developed computational methods that can successfully computationally reconstruct what's going on in the interior of this putative deep neural network even though we don't have access to these things. And we actually infer the existence and properties of intermediate neurons here. And those properties are sort of similar to what's been previously recorded when people do directly record from, say, the bipolar cells. In the Clandinin Lab, we've sort of been unraveling the computations underlying motion vision. So you know when you swat a fly, it's really hard. Because it can really quickly detect motion coming towards it, and it flies away. You know, so there's been lots of work on what kinds of algorithms might underlie motion estimation-- for example, the Reichardt correlator, the Barlow-Levick model, and so forth. We've been applying systems identification techniques to whole brain calcium imaging data-- well, whole brain meaning from the fly visual circuit. And we just literally identify the computation. And we find that it's sort of none of the above. It's a mixture of all previous approaches. So grid cells-- we have some results on grid cells. So these are these famous cells that resulted in a Nobel Prize recently. We can actually show that these grid cells maintain their spatial coherence, because the rat and the mouse are always interacting with the boundaries. And the boundaries actually correct the rat's internal estimate of position. And were it not for these interactions with the boundaries, the grid cells would rapidly decohere on a short time scale, you know, like less than a minute. And so it's actually quite interesting. We can show that whenever the rat encounters a boundary, it corrects its internal estimate of position perpendicular to the boundary.
But it doesn't correct it parallel to the boundary. And it doesn't, because it can't. Because when it hits the boundary, it receives no information about where it is parallel to the boundary, but it receives information in this direction. And then with the Shenoy Lab, we've been looking at I think a really major conceptual puzzle in neuroscience. Why can we record from 100 neurons in a circuit containing millions of neurons? And do dimensionality reduction and try to infer the state space dynamics of the circuit and claim that we've achieved success, right? We're doing dramatic undersampling, recording 100 neurons out of millions. How would the state space dynamics that we infer change if we recorded more neurons? And we can show that, essentially, it will not change. Because we've come up with a novel connection between the act of neural measurements and the act of random projections. So we can show that the act of recording from 100 neurons in the brain is like the act of measuring 100 random linear combinations of all neurons in the relevant brain circuit. And then we can apply random projection theory to give us a predictive theory of experimental design that tells us, given the complexity of the task, how many neurons would you need to record to correctly recover the state-space dynamics of the circuit. And then in the Raymond Lab at Stanford, we've been looking at how enhancing synaptic plasticity can either enhance or impair learning depending on experience. So, for example, we think that synaptic plasticity underlies the very basis of our ability to learn and remember. So you might think that enhancing synaptic plasticity through various pharmacological or genetic modifications might enhance our ability to learn and remember. But previous results have been mixed. When you perturb synaptic plasticity, sometimes you enhance learning, sometimes you impair learning. So I believe in the Raymond Lab, they're the first to show that in the same subject enhancing synaptic plasticity, for example, in the cerebellum can either enhance or impair learning depending on the history of prior experience. And we can show that in order to explain the behavioral learning curves, you need much more complex postsynaptic dynamics than people naturally assume. We have to really promote our notion of what a synapse is from a single scalar, like a W_ij in a neural network, to an entire dynamical system in its own right. So this relates to VOR learning. So this is sort of the low-level stuff that we've been doing. I know that in this course you guys study higher level stuff. So I'm not going to talk about any of these. Each of them could be like a one hour talk. But I wanted to discuss some more high level stuff that we've been doing. Oh, sorry. And the other sort of direction in our lab that we're looking at is actually the statistical mechanics of high dimensional data analysis. So this is, of course, very relevant in the age of the BRAIN Initiative and so on where we're developing very large scale data sets and we'd like to extract theories from this so-called big data. So the entire edifice of classical statistics is predicated on the assumption that you have many, many data points and you have a small number of features, right? So then it's very easy to see patterns in your data.
But what's actually happening, nowadays, is that we have a large number of data points and a large number of features-- so for example, where you can record from 100 neurons using electrophysiology maybe for only 100 trials of any given trial type in a monkey doing some task. So the ratio of the amount of data to the number of features is something that's order one. So you know, the data sets are like three points in a three-dimensional space. That's the best we can visualize. So it's a significant challenge to do data analysis in this scenario. So it turns out there's beautiful connections between machine learning and data analysis and the statistical physics of systems of quenched disorder. So what I mean by that is in data analysis you want to learn statistical parameters by maximizing the log likelihood of the data given the parameters. In statistical physics, you often want to minimize an energy. You know, maximizing the log likelihood can be viewed as energy minimization. And so there's beautiful connections between these. And so we work on that. So we've applied this to compressed sensing. We've applied this to the problem of what's the optimal inference procedure in the high-dimensional regime. We know that maximum likelihood is optimal in this regime. But something else is better in this regime. And we found what's better. It turns out to be a smoothed version of maximum likelihood. And, of course, we've applied this to a theory of neural dimensionality and measurement. So, you know, there's lots of beautiful interactions between physics, machine learning, and neuroscience. And, you know, this school is a lot about that. If you're interested, actually, we wrote like a 70-page review on statistical mechanics of complex neural systems and high-dimensional data. We cover a whole bunch of things like spin glasses, the statistical mechanics of learning, random matrix theory, random dimensionality reduction, compressed sensing, and so on and so forth. It was our attempt to sort of put some systematic order on the diversity of topics viewed through the lens of statistical physics. But what do I want to talk about today? And why did I decide to branch out in the direction that I'm going to tell you about? Well, I think there's lots of motivations for the alliance between theoretical neuroscience and theoretical machine learning that lead to opportunities for physics, and math, and so on. So this is the question that should haunt all of us, right? The question is, what does it even mean to understand how the brain works or how a neural circuit works? OK? You know, that's an open question that we really have to come to terms with. A more concrete version of this question might be, or a specification of this question might be, we will understand this when we understand how the connectivity and dynamics of a neural circuit give rise to behavior and also how neural activity and synaptic learning rules conspire to self-organize useful connectivity that subserves behavior. OK? So, you know, various BRAIN Initiatives are promising to give us recordings from large numbers of neurons and even give us the connectivity between those neurons. Now, what have theorists done? What theorists in computational neuroscience often do is develop theories of random networks that have no function. But what we would like to do is we'd like to understand engineered networks that have function. So the field of machine learning has generated a plethora of learned neural networks that accomplish interesting functions.
Yet we still don't have a meaningful understanding of how their connectivity, dynamics, the learning rule, the developmental experience-- OK. So basically, we can measure anything we want in these artificial neural networks, right? We can measure all the connectivity between all the neurons. We know the dynamics of all the neurons. We know the learning rule. We know the entire developmental experience of the network, because we know the training data that it was exposed to. Yet we still do not have a meaningful understanding of how they learn and how they work. And if we can't solve that problem, how are we ever going to understand the brain, right, in the form of this question? OK? So that was sort of what was motivating me to look into this. So this is the outline of the talk. The original entry point was trying to understand category learning in neural networks. And then at the end of the day, we made actually several theoretical advances that led to advances in machine learning and applications to engineering. So, for example, we found random weight initializations that make a network dynamically critical and allow very, very rapid training of deep neural networks. We were able to understand and exploit the geometry of high-dimensional error surfaces to, again, speed up learning, like training deep neural networks. And we were also able to exploit sort of recent work in non-equilibrium thermodynamics to learn complex probabilistic generative models. So it's a diversity of topics, but I'll walk you through them. And, you know, you can relax, because almost everything I'm going to talk about is published. OK. So let's start with the motivation, a mathematical theory of semantic development. I think this speaks to some of the high level stuff that you guys think about. This part could be called the misadventures of an applied physicist who found himself lost in the psychology department. So I just sort of showed up at Stanford. Jay's a great guy. I was talking to him. And I learned about his work. And I realized I didn't understand it. And this is my attempt to understand that work with Andrew and Jay. OK. So what is semantic cognition? So human semantic cognition, a rough definition of this field, is that we have an ability to learn, recognize, comprehend, and produce inferences about properties of objects and events in the world, especially properties that are not present in your current perceptual stimulus. So, for example, I can ask you, does a cat have fur and do birds fly, and you can answer these questions correctly despite the fact that there's currently no cat or bird in the room, right? So, you know, our ability to do this likely relies on our ability to form internal representations of categories in the external world and associate properties with those categories. Because we never see the same stimulus twice. So whenever we see a new stimulus or we try to recall information from our brain, we rapidly identify the relevant category that contains the information. And we use that categorical representation to guide future actions or give answers. So category formation is central to this, right? So what are the kinds of psychophysical tasks that people use to probe semantic cognition? So this is a very rich field. Psychologists have been working on this all the time. So one example is looking time studies to ascertain at what age an infant can distinguish between two categories of objects.
So, for example, they'll show a sequence of objects from category one, say, horses. And the first time the infant sees a horse, the looking time will go up. And then it goes down over time. It gets bored. Then you show a cow. And if the infant is old enough, the looking time will go up and then go back down. And from that, we infer that the infant can distinguish between horses and cows. But if it's not old enough, the looking time will not go up. So as the infant gets older and older, it can make more and more fine scale discriminations between categories it turns out. So property verification tasks-- you can ask, can a canary move? Can it sing? And certain questions are answered quickly. Certain questions are answered late, which speaks to certain properties being central and peripheral to certain categories. Category membership queries-- is a sparrow a bird, or is an ostrich a bird? Again, there's different latencies. And that suggests that there are typical and atypical category members. And also, very, very important to us is inductive generalization. We can both generalize familiar properties to novel objects-- for example, a blick has feathers. Does it fly? Does it sing? And we can generalize novel properties to familiar objects. A bird has gene x. Does a crocodile have gene x? Does a dog have gene x? You know, so people have measured these patterns of inductive generalizations. And there's various theories that try to explain all of this stuff. So Jay has been working on this stuff from a neural network perspective. And he wrote a beautiful book called Semantic Cognition where he uses neural networks to explain a whole variety of phenomena especially, for example, the progressive differentiation of concepts. So let me just walk you through that. And so this was, you know, a first encounter with a deep neural network. So they were doing deep neural networks before they became popular. And so what they were doing was they asked, can we model the development of, say, concepts in infants? And so what they did was they had a toy data set where they had a bunch of objects and each object had a whole bunch of properties. So, for example, a canary can grow, move, fly, and sing, right? And so what they did was they exposed this deep neural network to training data of this form. You know, they had a whole bunch of features and questions and objects. And they just exposed the network to training data, trained it using back propagation. And they looked at the internal representations in the network, especially their evolution over developmental time or training time, right? And this is what they found. So initially, the network started with random weights. So there was no structure. OK. So what did they do here? They looked at the distances between the internal representations in this space. And they did hierarchical clustering or multidimensional scaling. And they found these plots or these plots, right? So what you see is, early in developmental time, the network first makes a coarse-grain discrimination between animals and plants, right? And then later, it makes finer scale discriminations. And then eventually when it's fully learned, it learns the hierarchical structure that's implicit in the training data. OK. And this is a multidimensional scaling plot where initially the animals move away from the plants, and then, you know, fish move away from birds, and trees move away from flowers. And then finally, individual discriminations are learned. 
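[A toy version of this simulation is easy to set up: one-hot items in, feature vectors out, gradient descent on the weights, and hierarchical clustering of the hidden representations at different points in training. Everything below -- the 8-item, 3-level feature construction, the layer sizes, the learning rate -- is a stand-in for the original network, not a reproduction of it.]

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
n_items, n_hidden, n_features = 8, 16, 48

# Stand-in hierarchical data: each feature respects one level of a binary tree,
# so items in the same subtree share more features than items across subtrees.
items = np.eye(n_items)
features = np.zeros((n_items, n_features))
for f in range(n_features):
    level = f % 3                                   # which tree level this feature follows
    n_blocks = 2 ** (level + 1)
    signs = np.repeat(rng.choice([-1.0, 1.0], n_blocks), n_items // n_blocks)
    features[:, f] = signs

W1 = 0.01 * rng.standard_normal((n_hidden, n_items))
W2 = 0.01 * rng.standard_normal((n_features, n_hidden))
lr = 0.1
for step in range(1, 3001):
    h = W1 @ items.T                                # hidden reps, one column per item
    err = W2 @ h - features.T                       # linear-network prediction error
    W2 -= lr * (err @ h.T) / n_items                # gradient of the squared error
    W1 -= lr * (W2.T @ err @ items) / n_items
    if step in (300, 1000, 3000):
        # Cluster hidden representations over "developmental time":
        # coarse splits appear before fine ones, as in the dendrogram plots.
        Z = linkage(h.T, method="average")
```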
So when I learned about this, I was at once excited and also mystified. Because this is sort of qualitatively behaving like the way that an infant behaves, yet it's a really stupid neural network with like five layers. Yet I don't understand how it's doing this, right? So I wanted a theory of what's going on here, right? Oh and by the way, you know, there's lots of reasons to believe that semantic relationships are encoded in the brain using relatively simple metrics like Euclidean distance between neural representations for different objects. So, for example, this is a famous study which I'm sure you've seen. What they showed was a whole bunch of objects to both monkeys and humans. And they clustered the objects or looked at the similarity matrix of the objects measured using a Euclidean distance in neural electrophysiology space, so firing rates of neurons here and voxel activity patterns in the human. And they showed the same set of objects to monkey and human. And the matrices aligned, essentially. So basically, the similarity structure of internal representations of both monkey and human is the same. So we tend to encode semantic information using these similarity representations. So this is the hierarchical clustering view. So this sort of seems to actually happen in real live animals and humans. OK. There's actually something else that happens. It's that different properties are learned on different time scales. So, for example, the network can learn that canaries can move much more quickly than it learns that a canary is yellow. So some properties are much easier to learn than others. And the properties that are easier to learn for the network are also easier to learn for the infant. OK. So these are the theoretical questions we'd like to answer. What are the mathematical principles underlying the hierarchical self-organization of internal representations in the network? You know, this is a complex system. So what are the relative roles of the various ingredients? There's a non-linear input-output response. There's a learning rule, which is back propagation. There's the input statistics. Is the network somehow reaching into complex input statistics in the training set, or can it really rely on just second order statistics? You know, what is a mathematical definition of something called category coherence? And how does it relate to the speed of category learning? So what determines the speed at which we learn categories? Why are some properties learned more quickly than others? And how can we explain changing patterns of inductive generalization over these developmental timescales? OK. So how do we get a theory? Well, it turns out if you look at the activations of this network as it's training over time-- so these are sigmoidal units-- the activations don't really hit the saturating regime that much during training, because you start from small weights. So we started with an audacious proposal that maybe even a linear neural network might exhibit this kind of learning dynamics. OK? Now, it's not at all obvious that it should, because it's a simple linear neural network. And this learning dynamics is highly non-linear, right? But it turns out that even in a linear neural network, the dynamics of learning on synaptic weight space is non-linear. And so there might be a hope that it might work and we might be able to get a coherent theory. OK. So what we did was we analyzed just a simple linear neural network that looks like this that goes from input layer to hidden layer to output layer.
So the composite map is linear. OK. So we can write down dynamical equations in weight space for the learning dynamics. So this is the training data. And we can adjust the weights using back propagation. And these are the back propagation equations. And if we work in a limit where the learning is slow relative to the time it takes to cycle through the data set, you can take a continuous time limit, and you essentially get a non-linear set of equations in weight space, right? And the equations are cubic in the weights, right? And that's because the error is quartic in the weights, right? The error is the output minus W2 W1 times the input, squared. So the error is quartic in the weights. And so when you differentiate with respect to the weights, the gradient descent equations will be cubic in the weights. But there is one simplification that happens. Because the network is linear, its learning dynamics is sensitive only to the second order statistics of the data, right? So in particular-- the input-input covariance matrix and the input-output covariance matrix. OK. So essentially, this network knows only about second order statistics. In our work here, the input statistics are white. So it's really only the input-output statistics that drives learning. OK? So this is a set of coupled non-linear differential equations. They're, in general, hard to solve. But we found solutions to them. We can express the solutions in terms of the singular value decomposition of the input-output covariance matrix. You know, any rectangular matrix has a unique singular value decomposition. In this context, we can think about the input-output covariance matrix as a matrix that maps input objects to feature attributes. And the singular vectors have an interpretation where these singular vectors essentially map objects into internal representations. The singular values amplify them. And then the columns of U are sort of feature synthesizers. The columns are sort of modes in the output feature space. OK. So this is the SVD. But the question is, how does this drive the learning dynamics? So what we did was we found exact solutions to the learning dynamics of this form where the product of the layer one to layer two weights and the layer two to layer three weights are of this form. Where, essentially, the system, what it's doing-- the composite system-- is building up the singular value decomposition of the input-output covariance matrix mode by mode. And each mode alpha, associated with singular value alpha in the training data, is being learned in the sigmoidal fashion. OK? So at time zero, A is sort of small and random, you know, some initial condition A zero. But over time, as training time goes to infinity, A approaches the actual singular value in the input-output covariance matrix. So basically, this is the learning dynamic. So nothing happens for a while. And then suddenly, the strongest singular mode defined by the largest singular value gets learned. And then later on, a smaller singular mode gets learned. And later on, an even smaller singular mode gets learned. And the time it takes to learn each mode is governed by one over the singular value. So just intuitively, stronger statistical structure as quantified by singular value is learned first. That's the intuition. Oftentimes when we train neural networks, we see sort of these plateaus in performance where the network does nothing and then suddenly drops, plateaus and drops. And this actually shows that.
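[The sigmoidal trajectory referred to here has a closed form in the published paper (Saxe, McClelland, and Ganguli, "Exact solutions to the non-linear dynamics of learning in deep linear neural networks", 2013). A sketch of the key result, writing tau for the learning time constant, s for a mode's singular value, and a_0 for its small initial strength; the exact constants should be checked against the paper.]

```latex
% Each mode's effective strength a(t) obeys a logistic-type equation
\tau \frac{da}{dt} = 2a\,(s - a),
% whose solution rises sigmoidally from a_0 and saturates at the singular value s:
a(t) = \frac{s\, e^{2st/\tau}}{e^{2st/\tau} - 1 + s/a_0},
\qquad
t_{\mathrm{learn}} \sim \frac{\tau}{2s} \ln\frac{s}{a_0}.
```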
You can see very, very sharp transitions in learning. And you can actually show that the ratio of the transition period to the ignorance period can be arbitrarily small. Infants also seem to show these developmental transitions. OK. So, yeah, you can have arbitrarily sharp transitions in the system. OK. So the take-home message so far is that the network learns different modes of covariation between input and output on a time scale inversely proportional to the statistical strength of that covariation. And you can get these sudden transitions in learning. Now the question is, what does this have to do with the hierarchical differentiation of concepts? All right, that's what we'd like to understand first. So now we've come up with a general theory of learning, the non-linear dynamics of learning in these deep circuits. Now we want to connect this back to hierarchical structure. So one of the things with Jay's work is that we're just working with toy data sets. And we didn't have any theoretical control over those toy data sets. But we sort of understood implicitly that these toy data sets have hierarchical structure. So we need a generative model of data, a controlled mathematically well-defined generative model of data that encodes the notion of hierarchy. OK? So can we move beyond specific data sets to general principles of what happens when a neural network is exposed to hierarchical structure? That's what we'd like to answer. So we consider a hierarchical generative model. And a classic hierarchical generative model is-- yeah, so essentially what we want to do is we want to connect the world of generative models to the world of neural networks. And, you know, that will connect the methods in computational neuroscience to this course eventually, right? Yeah. So we have data generated by some generative model. We take that data, and we expose it to a neural network. And we'd like to understand how the dynamics of learning depends on the parameters of the generative model. OK. So a natural generative model for defining hierarchical structure is a branching diffusion process that essentially mimics the process of evolution where properties diffuse down a tree and instantiate themselves across a set of items. So what do I mean by that? OK. So basically imagine, for example, that your items are at the leaves of a tree, right? And you can imagine that this is a process of evolution where there is some ancestral state maybe for one property. So we do one property at a time. And the properties are independent of each other. This might be an ancestral state like can move, right? And then each time this property diffuses down the tree, there's a probability of flipping, OK? So maybe in this lineage which might correspond to animals, this doesn't flip, right? And in these lineages corresponding to plants, it does flip. So these things cannot move. And these things can move. And then maybe it doesn't flip. So all of these things inherit the property of moving. So these are the animals. And these things cannot move. So these are the plants. And then we do that for every single property independently. And we generate a set of feature vectors. So that's our generative model. So what are the statistical properties of the generative model?
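[The branching diffusion process is only a few lines of code. A sketch, assuming a binary tree, plus-or-minus-one property values, and an independent flip probability per branch; all parameter choices here are illustrative.]

```python
import numpy as np

def branching_diffusion(levels=4, n_features=2000, p_flip=0.1, seed=0):
    """Diffuse each property down a binary tree: draw a random ancestral
    value at the root, copy it to both children at every level, flipping
    independently with probability p_flip, and read off the leaves."""
    rng = np.random.default_rng(seed)
    data = np.zeros((2 ** levels, n_features))
    for f in range(n_features):
        vals = np.array([rng.choice([-1.0, 1.0])])      # ancestral state, e.g. "can move"
        for _ in range(levels):
            vals = np.repeat(vals, 2)                    # pass to the two children
            vals[rng.random(vals.size) < p_flip] *= -1   # each branch may flip
        data[:, f] = vals                                # one value per leaf item
    return data  # shape: (n_items, n_features)
```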
So essentially, because we know that we're analyzing these deep linear networks and we know that the learning dynamics of such networks is driven only by the input-output covariance matrix, to understand the learning dynamics we just have to compute the singular values and singular vectors of hierarchically structured data generated in this fashion. And it's actually quite-- I mean, we did it. So here's what happened. So imagine a nice symmetric tree like this. So these are objects. If we look at the similarity structure of objects measured by dot product in the feature space generated by the features under this branching diffusion process, we get this nice blocks within blocks similarity structure where all the items on this branch-- you know this item and this item-- are slightly similar. This item and this item are even more similar. And, of course, each item is most similar to itself. So you have this hierarchy of clusters within clusters that naturally arises because of this branching diffusion process. So what are the singular values and singular vectors of the associated input-output covariance matrix? Well, it turns out these are one set of singular vectors, the so-called object analyzers, which are functions across objects. There's another set of singular vectors that are functions across features, which I'm not showing you. But there's, of course, the duality, right? So that you get pairs of singular vectors for each singular value. OK. So what's the singular vector associated with the largest singular value? Well, it's a uniform mode that's constant across all the objects. But the most interesting one, the next largest one, is the lowest frequency function, essentially. It's constant across all the items descending from this branch and a different constant across all the items descending from the other branch. So this singular vector, essentially, makes the most coarse-grained discrimination in this hierarchically structured data set. The next set of singular vectors-- there's a pair of them-- discriminate between this set of objects and this set of objects and don't know about these ones. And the next one discriminates between this set of objects and this set of objects. And then as you go down to the smaller singular values, you get individual object discriminations, right? So this is how the hierarchical structure is reflected in the second order statistics of the data. And these are the singular values. So this is the theory for the singular values in a tree that has five levels of hierarchy. And you can see that the singular values decay with the hierarchy level of the singular vectors. OK. So there's a general theory for this in which singular vectors are associated with levels of the tree. OK? So now you can see the end of the story. If you put the two together, you automatically get the results that we were trying to explain, right? So essentially, the general theory of learning says that the network learns input-output modes on a time scale given by 1 over the singular value. When the data is hierarchically structured, singular values of broader hierarchical distinctions are larger than singular values of finer distinctions. And the input-output modes correspond exactly to hierarchical distinctions of the tree. So that essentially says the network must learn broad scale discriminations before it can learn fine scale discriminations. So then actually what we did was we just analytically worked out the dynamics of learning for hierarchically structured data.
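[Given data from the generator sketched above, this structure is easy to verify numerically: items sharing a subtree have block-wise similar rows, the singular values decay with hierarchy level, and the top object-analyzer vectors are approximately constant within subtrees. A sketch, assuming `branching_diffusion` from the previous block; for one-hot inputs, the input-output covariance is proportional to the item-by-feature matrix itself.]

```python
import numpy as np

X = branching_diffusion(levels=4, n_features=5000, p_flip=0.1)

gram = X @ X.T / X.shape[1]        # blocks-within-blocks similarity structure
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# S[0] belongs to the uniform mode; the next singular values come in groups
# that decay with tree level. U[:, 1] makes the coarsest split: roughly one
# constant on the first 8 leaves and a different constant on the other 8.
print(np.round(S[:8], 1))
print(np.round(U[:, 1], 2))
```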
And we computed the multidimensional scaling. And this was theory. We never did a single simulation to get this plot. We generated a branching diffusion process that was essentially this one. And we just labeled these nodes arbitrarily with these labels. And this is what we get, right? So we see the multidimensional scaling plot that we sort of see here. And essentially, just to compare, this is what was done with a toy data set over which we had no theoretical control and a non-linear neural network over which we had no theoretical control. And this is a well-defined mathematical generative model under a linear neural network. And we see that we qualitatively explain the results. So this is the difference between simulation and theory, right? Now we have a conceptual understanding of effectively what was going on in this circuit. And now it's no longer a mystery. So now I think I understand what Jay and collaborators were doing. It would be lovely if, for all of the stuff that's going on in this course, we could obtain such a deep, rigorous understanding. It's much more challenging. But it's a goal worthy of pursuit, I think. OK. So conclusions-- progressive differentiation of hierarchical structure is a general feature of learning in deep neural networks. It cannot be any other way. OK? Interestingly enough, deep, but not shallow, networks exhibit such stage-like transitions during learning. So if you use no hidden layers, you don't get this, actually. You need a hidden layer to do this. And somewhat surprisingly, it turns out that even only the second order statistics of semantic properties provide powerful statistical signals that are sufficient to drive this non-linear learning dynamics, right? You don't need to look at the higher order statistics of the data to get this dynamic. Second order statistics suffice, which was not obvious before we started. OK. So in ongoing work, we can explain a whole bunch of things, like illusory correlations early in learning. So, for example, infants at first don't even know that pine trees don't have leaves. Then at an intermediate point, they think that pine trees have leaves. And then at a later point, they correctly know that pine trees don't have leaves, right? So we can explain these non-monotonic learning curves. We can explain familiarity and typicality effects. We can explain inductive property judgments analytically. We're looking at basic level effects. We have a theory of category coherence, and so on. But in the interest of moving forward, I wanted to give short shrift to this stuff. And essentially, we can answer: why are some properties learned faster? Basically, properties that are low frequency functions across the leaves of the tree get learned faster. Properties that have a larger inner product with singular vectors of larger singular value get learned faster. That's the story. Why are some items more typical? We have a theory for that. How is inductive generalization achieved by neural networks? We have a theory for that, and so on. And, you know, what is a useful mathematical definition of category coherence? So, for example, there are some things that are just intuitively called incoherent categories. "The set of all things that are blue" is a very incoherent category. In fact, it's so incoherent we don't have a name for such a category. "The set of all things that are dogs" seems to be a very coherent category. And it's so coherent that we have a well-known name for it. The name's quite short, actually, too.
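To see the stage-like transitions in a simulation, here is a toy three-layer linear network trained by full-batch gradient descent on data from the sketches above; the hidden layer size, learning rate, normalization, and small random initialization are my own choices, not the paper's.

import numpy as np

rng = np.random.default_rng(1)
Y = sample_dataset(depth=3, n_props=200, eps=0.1) / np.sqrt(200.0)
X = np.eye(Y.shape[1])                 # one-hot inputs, one per item

n_hidden, lr = 16, 0.05
W1 = rng.normal(scale=1e-3, size=(n_hidden, X.shape[0]))
W2 = rng.normal(scale=1e-3, size=(Y.shape[0], n_hidden))

for step in range(2001):
    E = Y - W2 @ W1 @ X                # residual on all items at once
    W1 += lr * W2.T @ E @ X.T          # gradient descent on the squared error
    W2 += lr * E @ (W1 @ X).T
    if step % 200 == 0:
        print(step, float(np.mean(E ** 2)))  # loss drops in plateaus and cliffs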
Actually, I wonder if there's a theory where shorter words correspond to more coherent categories and that's like an informative or efficient representation of category structure. But anyways, we have a natural definition of a coherent category that's precise enough to prove a theorem that coherent categories are learned faster. And actually, this also relates to the size of the categories, so frequency effects show up. Anyways, there's a lot of stuff there. But that was sort of the entry point. So now, what about a theory of learning in much deeper networks that have many, many layers? OK? So, again, I'm going to make a long story short, because it's all published, so you can read all the details. But I wanted to give you the spirit or the essence or the intuition behind the work. OK. So the questions we'd like to answer are: how does training time scale with depth? How should learning rate scale with depth? How do different weight initializations impact learning speed? And what we'll do is, once we understand these theoretically, we'll find certain weight initializations, corresponding to critical dynamics, which I'll define, that can aid deep learning and generalization. So the basic idea is that in a very, very deep neural network, right, you have a vanishing or exploding gradient problem. And that's one of the issues that makes deep neural network learning hard. So if you're going to back propagate the error through multiple layers, the back propagation operation is a product of Jacobians from layer to layer. And that product of Jacobians is fundamentally a linear mapping, right? So essentially, if the singular values of the Jacobian in each layer are large, bigger than one, the product of such matrices will lead to a product matrix that has singular values that grow exponentially with depth. Similarly, if the singular values are less than one, they'll decay exponentially with depth, right? So that's a vanishing gradient in the latter case and an exploding gradient in the former case. That seems to be one of the major impediments to deep learning. So what people often did was they tried to scale the matrices to avoid this problem, right? So what they often do is initialize the weights randomly, so that W is a random matrix where the elements W_ij are IID Gaussian, with the scale factor chosen precisely so that the largest eigenvalue of the Jacobian, or the back propagation operator, is one. OK? So that's like scaling the system so that if you place a random error vector here, the desired output minus the actual output, and back propagate it through a random network, the error vector will preserve its norm as it's back propagated across. And this is the famous Glorot and Bengio initialization. And it works pretty well for depth four or five or whatever, right? OK. So we would like a theory of the learning dynamics of that. And as I said, there's no hope for a complete theory at the moment with arbitrary non-linearities. OK. So what we're going to do is analyze the learning dynamics with the non-linearities removed, right? So, again, it might seem like we're throwing the baby out with the bathwater, but we're actually going to learn something that helps us to train non-linear networks. OK. So the basic idea then is that we have a network which is linear. So y, the output, is a product of weight matrices applied to the input, right? OK. So then the back propagation is, again, just a product of matrices, right?
The gradient dynamics is non-linear, coupled, and non-convex. And actually, even in this linear network, you see plateaus and sudden transitions, right? And, interestingly enough, even in this very deep linear network, you see faster convergence from pre-trained initial conditions, right? So basically, if you start from random Gaussian initial conditions, you get slow learning for a while and then relatively sudden learning here. Whereas, if you pre-train the network using greedy unsupervised learning, so this is the time it takes to pre-train, you get sudden learning and a drop here. So remember, if you go back to the original Hinton paper, this was the phenomenon that started deep learning. Greedy unsupervised pre-training allows you to rapidly train very, very deep neural networks. So the very empirical phenomenon that led to the genesis of deep learning was present already in deep linear neural networks, right? Now, deep linear neural networks, in terms of their expressive power, are crappy, because the composition of linear operations is linear. They're not a good model for deep non-linear networks in terms of input-output mappings. But they're a surprisingly good theoretical toy model for the dynamics of learning in non-linear networks. OK? Because very important phenomena also arise in the deep linear networks. And we're focusing on learning dynamics here. OK. So we can build intuitions for the non-linear case by analyzing the linear case. OK? So we went through the three layer dynamics already. What about the dynamics with many more layers? So, again, the back propagated Jacobian can vanish or explode, right? OK. So, again, I'm going to make a long story short. OK, I'll tell you the final result. What we find is a class of weight initializations that allow learning time to remain constant as the depth of the network goes to infinity. Now, I'm measuring learning time in units of learning epochs, right? So, obviously, to train a very, very deep neural network, it just takes longer to compute each gradient, right? So in terms of real time, of course, the time will scale with the depth of the network. But you might imagine that in terms of number of gradient evaluations, as the network gets deeper and deeper, it might take longer and longer to train it. And we show a class of initial conditions for which that's not true. As the network gets deeper and deeper, the number of gradient evaluations you need to train the network can remain constant even as the depth goes to infinity, even in a non-linear network. OK. So let me give you intuition for why. So, for example, the classical Glorot and Bengio initialization doesn't have that property. But our initialization does. So basically what we did was-- we'll start off with linear networks. We trained deep linear networks on MNIST. And we scaled the depth like this, right? And we started with scaled random Gaussian initial conditions and then ran back propagation. And we found that the training time, as you might expect, grew with depth. This is training time measured in number of learning epochs, or number of gradient evaluations. But here what we did was initialize the weights using random orthogonal weights, right? And then we found that the learning time didn't grow with depth. And also, if you pre-train it, it doesn't grow with depth. OK.
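A standard recipe for drawing such a random orthogonal matrix, presumably in the spirit of what was used here, is to QR-decompose a Gaussian matrix and fix the signs so the result is uniform over the orthogonal group:

import numpy as np

def random_orthogonal(n, rng):
    A = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))   # sign fix makes Q Haar-distributed

rng = np.random.default_rng(0)
W = random_orthogonal(256, rng)
print(np.allclose(W.T @ W, np.eye(256)))   # True: every singular value is 1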
So there's a dramatically different scaling of learning time between random Gaussian initialization and random orthogonal initialization. Why? OK? And the answer is the following. Let's think about the back propagation operator. Let's say you want to back propagate errors from the output to the input. The back propagation operator in a linear network is just the product of weights throughout the entire network. OK? So if you do a random Gaussian weight initialization here, then this is a product of random Gaussian matrices. So to understand the statistical properties of back propagation, you need to understand the statistical properties of the singular value spectrum of products of random Gaussian matrices. There isn't really a general theory for that, but we can look at it numerically and get intuition for it. So the basic idea is, if you have one random Gaussian matrix, the squared singular values of W are the eigenvalues of W transpose W, and those follow a famous distribution called the Marchenko-Pastur distribution. And, you know, they vary in a range that's order one. OK? So if you back propagate through one layer, you're fine. You don't get vanishing or exploding gradients. OK. But if you look at the singular values of a product of five random Gaussian matrices, the singular value spectrum gets very distorted. You get a large number of singular values that are close to zero and a long tail that, you know, extends up to four. OK. But if you do 100 layers, you get a very, very large number of singular values that are close to zero and a very much longer tail. OK? Now, this is a product of random Gaussian matrices. So if you feed a random vector into this, on average, its norm will be preserved. The vector's length will not change. But we know that preserving the length of a vector is not the same as preserving angles between all pairs of vectors. OK. So actually, the way that this product of random Gaussian matrices preserves the norm of the gradient is very anisotropic. It takes an error vector at the output, and it projects it into a low dimensional space corresponding to the singular values that are large. And then it amplifies it in that space. So the length is preserved, but all error vectors get projected onto a low-dimensional space and amplified. So a lot of error information is lost in a product of random Gaussian matrices. OK. So that's why the Glorot and Bengio initial conditions work well up to depth five or six or seven, but they don't work well at depth, say, 100, or in recurrent neural networks as well. OK? So what can we do? Well, a simple thing we can do is replace these random matrices with orthogonal matrices. OK? We know that all the singular values of an orthogonal matrix are one, every single one. And the product of orthogonal matrices is orthogonal. So therefore, the back propagation operator has all of its singular values equal to one. And there are generalizations of orthogonal matrices to rectangular versions when the layers don't have the same number of neurons in each layer. OK? So this is fantastic. This works really well for linear networks. OK. But how does this generalize to non-linear networks? Because then you have a product of Jacobians, right? So what happens here? OK. So what is the product of Jacobians? If we imagine how errors back propagate to the front, or how input perturbations propagate forward to the end, it's the same thing. So it's easier to think about forward propagation.
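This spectrum distortion is easy to verify numerically. The sketch below compares the singular values of a product of Gaussian matrices, scaled in the Glorot style so norms are preserved on average, against a product of orthogonal matrices; the width and depths are arbitrary choices for illustration.

import numpy as np

def product_spectrum(depth, n=200, orthogonal=False, seed=0):
    rng = np.random.default_rng(seed)
    P = np.eye(n)
    for _ in range(depth):
        if orthogonal:
            Q, R = np.linalg.qr(rng.normal(size=(n, n)))
            W = Q * np.sign(np.diag(R))
        else:
            W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # norm-preserving scale
        P = W @ P
    return np.linalg.svd(P, compute_uv=False)

for depth in (1, 5, 100):
    s = product_spectrum(depth)
    print(depth, float(s.max()), float(np.median(s)))  # tail grows, median collapses
print(product_spectrum(100, orthogonal=True)[:3])      # all ones for orthogonal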
Imagine that you have an input and you perturb it slightly. How does the perturbation grow or decay? Well, what happens is there's a linear expansion or contraction due to W. And then the non-linearity is usually compressive. So there's a non-linear compression due to the diagonal Jacobian of the pointwise non-linearity. And then, again, linear modification and non-linear compression, linear modification and non-linear compression. OK? So what we could do is simply choose these weight matrices, again, to be random orthogonal matrices, scaled by a scale factor to combat the non-linear compression. And then the dynamics of perturbations is like this: you rotate, linearly scale, non-linearly compress, rotate, linearly scale, non-linearly compress, and so on. That's essentially the type of dynamics that occurs in dynamically critical systems that are close to the edge of chaos. You get this alternating phase space expansion and compression that happens in different dimensions at different times. OK? So now you can just compute numerically, under that initialization, how the singular value spectrum of the product of Jacobians scales. And it scales beautifully. So this is the scale factor for the type of non-linearity that we used, the hyperbolic tangent non-linearity. The optimal scale factor in front of the random orthogonal matrix is one. And you see, when you choose that-- this was 100 layers, I believe-- even for 100 layers, the end-to-end Jacobian from the input to the output has a singular value spectrum that remains within a range of order one. If g is even slightly less than one, the singular values exponentially vanish with depth. If g is larger than one, the singular values grow, but actually not as quickly as you'd think. So this is the critically dynamical regime that preserves not only the norm of back propagated gradients, but all angles between pairs of gradients, right? So it's an isotropic preservation of error information from the end of the network all the way to the beginning. OK? So does it work? And it works better than other initializations even in non-linear networks. So we trained 30 layer non-linear networks. And the initialization works better. And, interestingly enough, at this critical scale factor you also achieve better generalization error. We don't have a good theory for that, actually. The test error and the training error, of course, go down. OK. So that's an interesting situation where a theory of linear networks led to a practical training advantage for non-linear networks. OK. So here's another question that we had. OK? There's a whole world of convex optimization. We want our machine learning algorithms to correspond to convex optimization, so we can find the global minimum, and there are no local minima to impede us from finding the global minimum, right? That's conventional wisdom. Yet the deep neural network people ignore this conventional wisdom and train very, very deep neural networks and don't worry about the potential impediment of local minima. They seem to find pretty good solutions. Why? OK? Is the intuition that local minima are really an impediment to non-linear, non-convex optimization in high dimensional spaces actually true? OK? And you might think that it's not true for the following intuitive reason, right?
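Here is a hedged numerical version of that check: the end-to-end Jacobian of a deep tanh network with gain-g scaled random orthogonal weights, evaluated along one forward pass. The width, depth, and the particular gains probed are assumptions of mine.

import numpy as np

def jacobian_spectrum(depth=100, n=200, g=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    J = np.eye(n)
    for _ in range(depth):
        Q, R = np.linalg.qr(rng.normal(size=(n, n)))
        W = g * (Q * np.sign(np.diag(R)))     # gain-scaled random orthogonal layer
        h = W @ x
        D = np.diag(1.0 - np.tanh(h) ** 2)    # diagonal Jacobian of pointwise tanh
        J = D @ W @ J                         # chain rule across this layer
        x = np.tanh(h)
    return np.linalg.svd(J, compute_uv=False)

for g in (0.9, 1.0, 1.1):
    s = jacobian_spectrum(g=g)
    print(g, float(s.max()), float(np.median(s)))  # g < 1 vanishes; g ~ 1 stays near order one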
So, again, it's often thought that local minima, at some high level of training error, stand as a major impediment to non-convex optimization. And, you know, this is an example-- a two-dimensional caricature of a protein folding energy landscape. And it's very rough, so there are many, many local minima, and the global minimum might be hard to find. And that's true: if you draw random generic surfaces over low dimensions, those random surfaces will have many local minima. But, of course, our intuition about geometry, derived from our experience with a low-dimensional world, is woefully inadequate for thinking about geometry in high-dimensional spaces. So it turns out that for random non-convex error functions over high-dimensional spaces, local minima are sort of exponentially rare in the dimensionality relative to saddle points. Just intuitively, imagine you have an error function over 1,000 variables, say 1,000 synaptic weights in a deep network. That's a small deep network. But anyways, let's say there's a point at which the gradient in weight space vanishes. So now there are 1,000 directions in weight space you could move away from that extremum. What are the chances that every single direction you move has positive curvature, right? If it's a fairly generic landscape, the answer is exponentially small in the dimensionality. Some directions will have negative curvature, some directions will have positive curvature, unless your critical point is already near the bottom, in which case most directions will have positive curvature, or unless your critical point is near the top, in which case most directions will have negative curvature, right? So statistical physicists have made this intuition very precise for random landscapes, and they've developed a theory for it. This is a paper in Physical Review Letters by Bray and Dean. So what they did was they imagined just a random Gaussian error landscape. They looked at an error landscape that's a continuous function over n dimensions, but there are correlations: it's correlated over some length scale. So it's a single draw from a random Gaussian process where the kernel of the Gaussian process falls off with some length scale. So the error at a point x1 is correlated with the error at a point x2 over some length scale, and that correlation falls off smoothly. So it's a random, smooth landscape. OK. So the correlations are local, essentially. And then what they did was they asked the following question. Let x be a critical point, a point where the gradient vanishes. OK? We can plot every single critical point in a two-dimensional feature space. What is that feature space? Well, the horizontal axis is the error level of the critical point: how high on the error axis does this critical point sit? And then the other axis, f, is the fraction of negative eigenvalues of the Hessian at that critical point. So it's the fraction of directions that curve downwards. OK. So now, a priori, critical points could potentially sit anywhere in this two-dimensional feature space, right? It turns out they don't. They concentrate on a monotonically increasing curve that looks like this. So the higher the error level of the critical point, the more negative curvature directions it has. OK? And to be an order one distance away from this curve, the probability of that happening is exponentially small in the dimensionality of the problem. OK? Now, what does that mean?
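The counting intuition can be illustrated with a quick numerical experiment on random symmetric Hessians; this is only a caricature of the Bray and Dean calculation, not their method.

import numpy as np

rng = np.random.default_rng(0)
for n in (2, 5, 10, 20):
    trials = 2000
    all_up = 0
    for _ in range(trials):
        A = rng.normal(size=(n, n))
        H = (A + A.T) / 2.0                    # a generic random symmetric Hessian
        if np.linalg.eigvalsh(H).min() > 0:    # do all directions curve upward?
            all_up += 1
    print(n, all_up / trials)                  # collapses rapidly with dimension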
It automatically implies that there are no local minima at high error, or at least they're exponentially rare relative to saddle points of a given index. OK? So basically, you typically never encounter local minima at high error, right? That would be stuff that sits here. And there's nothing here. OK? Second, if you are at a local minimum, which means on this axis you're at the bottom, then your error level must be very, very close to the global minimum. OK? So if you get stuck in a local minimum, you're already close in error to the global minimum. AUDIENCE: Can you repeat this last element? SURYA GANGULI: Yeah. So if you're at a local minimum, your error level will be close to the error level of the global minimum. AUDIENCE: Why? SURYA GANGULI: Because what does it mean to be a local minimum? It means that f is zero. The fraction of negative curvature eigenvalues of the Hessian is zero. And this is the distribution of error levels of such critical points. They're strongly peaked at this value, which is the value of the global minimum. Essentially, there's nothing out here. OK? All right, now in physics there is this well-known principle called universality. There are certain questions whose answers don't depend on the details. For example, certain critical exponents in the liquid-gas phase transition are exactly the same as critical exponents in the ferromagnetic phase transition, because the symmetry and dimensionality of the order parameter, density in the case of liquids and magnetization in the case of ferromagnets, are the same. So there are certain questions whose answers don't depend on the details. They only depend on the symmetry and dimensionality of the problem. So one might think that this qualitative prediction is true in generic high-dimensional landscapes. Now, the computer scientists would say, no, no, no, no, no, no. Your random landscapes are a horrible model for our error landscapes of deep neural networks trained on MNIST and CIFAR-10 and so on and so forth. You're completely irrelevant to us, because we're doing something special. We're not doing something random. We have a lot of structure. OK. The physicists might counter, well, you know, you just have a high-dimensional problem. The basic intuition that in high dimensions it's very hard to have all directions curve up at a critical point at high error should also hold true in your problem. OK? But, of course, we'll never get anywhere if we stop there, right? We have to move over into your land, which is also my land, and just simulate the system. So oftentimes, you know, biologists and computer scientists don't believe a theory until they see the simulation. So what we'll do is search for critical points in the error landscape of deep neural networks. And that's what we did. So what we did was we used Newton's method to find critical points. It turns out that Newton's method is attracted to saddles, right? Newton's method will descend in the positive curvature directions, but it will ascend in the negative curvature directions, because Newton's method is gradient descent multiplied by the inverse of the Hessian. So if the Hessian has a negative eigenvalue, you take the negative gradient and divide it by that negative eigenvalue, and you turn back around and go uphill. So Newton's method, uncorrected, is attracted to saddle points. OK? So what we did was we looked at the error landscape of deep neural networks trained on MNIST and CIFAR-10, and we just plotted the prediction of random landscape theory, right?
And what we found was exactly, qualitatively, their prediction. We took each critical point and plotted it in this two-dimensional feature space. And we found that the critical points concentrated on a monotonically increasing curve, which, again, shows that there are no local minima at high error. And if you're at a local minimum, your error is close to at least the lowest error minimum that we found. We can't guarantee that the lowest error minimum we found is the global minimum. But qualitatively, this structure holds. OK? Now, the issue is what we can do about it. So what this is telling us is that even in these problems of practical interest, saddle points might stand as the major impediment to optimization, right? Because saddle points can trap you. You know, you might go down here. And then there might be a very slowly curving negative curvature direction that might take you a while to escape. In fact, in the learning dynamics that I showed, in these transitions in learning hierarchical structure, the thing controlling the transitions was the existence of saddle points in the weight space of the linear neural network. And so the period of no learning corresponded to sort of falling down this direction slowly. And then the rapid learning corresponded to eventually coming out this way. OK. So how do we deal with that? Well, what we can do is a simple modification to Newton's method, where instead of dividing by the Hessian, we divide by the absolute value of the Hessian. And, again, I should say that this was done in collaboration with Yoshua Bengio's lab. And a set of fantastic graduate students in Yoshua Bengio's lab did all of this work on the training and testing of these predictions. OK. So what we suggested was, you know, the offending thing is dividing by negative eigenvalues. So just take the absolute value of the Hessian, by which I mean take the Hessian, compute its eigenvalues, and replace each negative one with its absolute value. OK? So that will obviously get repelled by saddles, all right? And that actually works really, really well. And there's a way to derive this algorithm in a way that makes sense, even far from saddles, by minimizing a linear approximation to f within a trust region in which the linear and quadratic approximations agree. OK? So let me just show you first that it works. So this is the most dramatic plot. So basically what we did was stochastic gradient descent for a while. And then it seemed like the error as a function of training time plateaued, both for a deep autoencoder and a recurrent neural network problem. So when the error as a function of training time plateaus, that's sort of interpreted as the fact that you're stuck in a local minimum, right? But actually, when we switched to this, what we call the saddle-free Newton method, the error suddenly drops again. So this was an illusory signature of a local minimum. It was actually a saddle point, with probably a very shallow negative curvature direction that was hard to escape. And when we switched to our algorithm, we could escape it. And, you know, what these curves show is that we do do better in the final training error as well. So now, how do we train deep neural networks with thousands of layers? And actually, how do we model complex probability distributions? So we want to sample from very, very complex probability distributions and do complex distributional learning, right? So this was done by a fantastic post-doc of mine, Jascha Sohl-Dickstein.
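Here is a minimal sketch of the saddle-free idea on a toy two-dimensional function with a saddle at the origin, f(x, y) = x^2 - y^2 + y^4/4; plain Newton converges to the saddle, while rescaling by the absolute value of the Hessian escapes it. The published algorithm adds a trust region and scalable Hessian approximations that this sketch omits.

import numpy as np

def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y + y ** 3])

def hess(p):
    x, y = p
    return np.array([[2.0, 0.0], [0.0, -2.0 + 3 * y ** 2]])

def newton_step(p, saddle_free):
    vals, vecs = np.linalg.eigh(hess(p))
    if saddle_free:
        vals = np.abs(vals)                    # |H|: flip the negative curvatures
    vals = np.maximum(vals, 1e-6)              # guard against near-zero curvature
    return p - vecs @ ((vecs.T @ grad(p)) / vals)

for saddle_free in (False, True):
    p = np.array([0.5, 0.1])
    for _ in range(25):
        p = newton_step(p, saddle_free)
    # plain Newton lands on the saddle (0, 0); saddle-free finds (0, sqrt(2))
    print("saddle-free" if saddle_free else "plain Newton", np.round(p, 3))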
So we were going to Berkeley for these non-equilibrium statistical mechanics meetings and things like that. And there have been lots of advances in non-equilibrium statistical mechanics, where you can show that, roughly speaking, the second law of thermodynamics, which says that things get more and more disordered with time, can be transiently violated in small systems over short periods of time, so you can spontaneously generate order. OK. So I'll just go through this, again, very quickly. So here's the basic idea. Let's say you have a complicated probability distribution. Let's just destroy it. Let's feed that probability distribution through diffusion to turn it into a simple distribution, maybe an isotropic Gaussian. And we keep a record of that destruction of structure. And then we try to reverse time in that process by using deep neural networks, and essentially create structure from noise. And then you have a very, very simple way to sample from complex distributions if you can train the neural network, which is: you just sample noise, you feed it through a deterministic neural network, and that constitutes a sample from a complex distribution. And so this was inspired by recent results in non-equilibrium stat mech. So the basic idea, again, is: imagine that you have a very complex distribution corresponding to this density of dye. You diffuse for a while, and it becomes a simpler distribution. Eventually, it becomes uniform. You keep a record of that process. Now, if you just reverse the process of diffusion, you'll never go from this back to this. But if you reverse the process with a neural network trained to do it, you might be able to do it. So that's the basic idea. So that's what we did. And I'll just show you some nice movies to show that it works. This is the classical toy model; we'll go to more complex models later. This is a sample distribution in two-dimensional space. And what we do is we just systematically have the points diffuse under Gaussian diffusion with a restoring force toward the origin. So the stationary distribution of that destructive process is an isotropic Gaussian. OK? And that's what happened. So that's our training data. The entire movie is the training data. OK? Then what we do is train a neural network to reverse time in that movie. So it's a neural network with many, many layers-- hundreds and hundreds of layers, right? Classically, training a network with hundreds of layers, you have the credit assignment problem, because you don't know what the intermediate neurons are supposed to do. Here you can circumvent the credit assignment problem, because each layer going up to the next layer just has to go from time t to time t minus 1 in the training data. So you have targets for all the intermediate layers. Therefore, you've circumvented the credit assignment problem. OK? So it's relatively easy to train such networks. And once you have such a network, what should you be able to do? You should be able to feed that neural network an isotropic Gaussian, and then have that Gaussian be turned into the data distribution. So that's what happens. This on the right is a different Gaussian. And we just feed it through the trained deterministic neural network. And out pops the structure. It's not perfect. There are some data points that are over here. But this is roughly the distribution that it learned, which is similar to what it was trained on. OK? So now we can look at slightly more complicated distributions. OK. So that's that.
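A bare-bones sketch of that forward, destructive process: an Ornstein-Uhlenbeck-style step that shrinks points toward the origin and adds Gaussian noise, so the stationary distribution is an isotropic Gaussian, plus the (x_t, x_{t-1}) training pairs a reversal network would be fit on. The step count and noise level are assumptions for illustration, and the regression model itself is left out.

import numpy as np

def forward_trajectory(x0, n_steps=40, beta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]   # shrink toward the origin, then add noise
        xs.append(np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape))
    return xs        # xs[-1] is close to an isotropic Gaussian

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 2)) * np.array([3.0, 0.3])  # a toy structured cloud
xs = forward_trajectory(data)

# Training pairs for the reverse model: predict x_{t-1} from x_t at every t.
pairs = [(xs[t], xs[t - 1]) for t in range(len(xs) - 1, 0, -1)]
print(len(pairs), pairs[0][0].shape)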
So now we can train it on a toy model of natural images, right? A classic toy model of natural images is the dead leaves model, where the sampling process is: you just throw down circles of different radii. So you get a complex model of natural images that has long range edges, occlusion, coherence over long length scales, and so on and so forth. So we can train the neural network on such distributions. We train it in a convolutional fashion by working on local image patches. And we convolve, so information will propagate over long ranges. And so, again, we take these natural images and turn them into noise, keep a record of that movie, and then reverse the flow of time using a deep neural network. OK. So once we train that, we should be able to turn noise into the network's best guess as to what a dead leaves model would look like. So this is what happens. It's taking noise, and it turns it into a guess. OK. So it's not a perfect model. But it turns out the log probability of dead leaves under this generative model, which consists of a 1,000-layer deep neural network, is higher than under any other model so far. So this is currently state of the art. OK. And as you can see, it gets long-range coherence and sharp edges. And moreover, it gets long-range coherence in the orientation of edges, which is often hard to do in generative models of natural images. OK. Now, we can actually do something somewhat practical with this: we can sample from the conditional. So then we also trained it on textures. OK? So, for example, textures of bark, right? And we can also sample from the conditional distribution. So what we can do is clamp the pixels outside of a certain region and replace the interior with white noise, make it blank. And because the network operates in a convolutional fashion, information from the boundary should propagate into the interior and fill it in, right? So if we look at that, that's white noise, and the network is filling it in. And it fills in the best guess image. OK. And so, again, it's not identical to the original image, but it does get long-range edge structure, coherence in the orientation of the edges, and smooth structure as well. And, again, this is like a 1,000-layer neural network. Now, there are some lessons here. OK? Oftentimes when we model complex data distributions, what we try to do is create a stochastic process whose stationary distribution is the complex distribution, right? Now, if your distribution has multiple modes, you're going to run into a mixing problem, because it can take a stochastic process a long time to jump over energy barriers that separate the multiple modes. So you always have a mixing problem. And oftentimes when you train probabilistic models, you have to sample and then use the samples to train the model. So that makes training take a long time. So in addition to circumventing the credit assignment problem and training very deep neural networks, we're circumventing the mixing problem in training the generative model, because we're not trying to model the data distribution as the stationary distribution of a stochastic process, which would have to run for a very long time to get to the stationary distribution. We're demanding that during training the process get to the data distribution in a finite amount of time, right? So because during training we demand that we get to the data distribution in a finite amount of time, we're circumventing the mixing problem during training. And that's the idea.
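For concreteness, here is a sketch of the dead leaves sampling process mentioned at the start of this passage; real dead leaves models usually draw radii from a power law, which this sketch simplifies to a uniform range.

import numpy as np

def dead_leaves(size=128, n_discs=300, r_min=4, r_max=40, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_discs):
        cx, cy = rng.uniform(0, size, 2)
        r = rng.uniform(r_min, r_max)
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
        img[mask] = rng.uniform(0, 1)   # each new disc occludes what lies below
    return img

print(dead_leaves().shape)              # a (128, 128) image with edges and occlusion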
There are lots of results now that show that you can attain information about stationary equilibrium distributions from non-equilibrium trajectories. OK. So now, I'm done. So let's see. OK. So there's that. OK. So, again, you can read about all of this stuff in this set of papers. Again, I'd like to thank my funding and just the key players. So Andrew Saxe, you know, did the work with me on non-linear learning dynamics and learning hierarchical category structure. Jascha Sohl-Dickstein did the work on deep learning using non-equilibrium thermodynamics. And the work on saddle points was a nice collaboration with Yoshua Bengio's lab and, again, a set of fantastic graduate students in his lab. OK. So I think there's a lot more to do in terms of unifying neuroscience, machine learning, physics, math, statistics, all of that stuff. It'll keep us busy for the next century. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Alon_Baram_Laurie_Bayet_Learning_to_Recognize_Digits_and_Faces_from_Few_Examples.txt | [Music] LAURIE BAYET: My name is Laurie Bayet. I'm a postdoc at the University of Rochester and Boston Children's Hospital, working on developmental cognitive neuroscience. ALON BARAM: My name is Alon. I am currently studying at Oxford, doing my PhD with Professor Tim Behrens, and I'm working on computational cognitive neuroscience. Laurie and I are trying to use a paper by Tomaso Poggio on a specific way to achieve invariant recognition in computer vision. We're basically trying to implement this in a simpler case and then move on to face recognition under rotations. The idea is that most of the variance in computer vision, when an algorithm tries to discover what is in an image, is held in very few manipulations, like translation, which is shifting the image across the field, or rotations, or scaling. So the paper has a cool idea of how to create this signature that Laurie just talked about, which is invariant to these things and might reduce the sample complexity, that is, how many examples you need to learn. For a simple case, we just used an existing data set of digits. For the face data set, we tried to find a suitable data set online, but we ended up just taking videos of people, using materials provided by the summer school. So we took videos of people turning their heads like this, slowly, moving around a little bit, and we now have a complete data set of the heads of people from different angles. We want to provide the algorithm with a hopefully limited number of raw frames of people rotating their heads, as templates, so to speak, acting like a kernel, to be able then to recognize unseen people under various angles. So that whether a person is showing this profile or this profile, you would still be able to recognize them with the same level of accuracy as if they were facing front. The purpose of doing this project in the long run would be to reduce the number of examples that an algorithm, for example a deep neural net, needs to see in order to learn its weights, in order to learn how to classify objects or classify images. We haven't started the face part; we're only at the digits part, which worked. So, yeah, it's working, and basically we hope it will also work in the more complex case. We approached the project from pretty much very different angles but ended up having common interests, which I guess is kind of the hallmark of this course. As I said, I'm interested in the engineering problem, so to speak, in how we can achieve this with machines. LAURIE BAYET: And I approached the project from a developmental perspective. Given that the current algorithms manage to do invariant face recognition based on a fairly large number of exemplars, how come infants can achieve this in a few months based on not that much experience, mostly looking at their parents, caregivers, and a few other exemplars, not three thousand people from all possible angles? So this is why I was very interested in this theory and in trying to implement it. It's been pretty cool so far. [Music] |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_43_Aude_Oliva_Predicting_Visual_Memory.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. AUDE OLIVA: Thank you very much for the introduction. Good morning, everyone. So I'm very pleased to be here. It's the first time I visit. So I would like to give you a tour during this lecture of how you can predict human visual memory, and actually an interdisciplinary account of the methods you can use together in order to get, basically, a view of what people can remember or forget. So the specific question we are going to ask today is: well, we are experiencing and seeing a lot of digital information all the time. First you see the real world, but you are also exposed to many, many videos and images. So vision and visual memory are among the core concepts of cognition. And the question we ask is, can we predict which image or graph or face or word or video or piece of information or event is going to be memorable or forgettable for a group of people, and eventually for a given individual? So let's take a moment to imagine that we were able to predict accurately the memory of people. Well, this would be very useful for understanding the mechanisms of human memory, both at the cognitive and the neuroscience, systems level, as well as for possibly diagnosing memory problems, in short-term visual memory or long-term visual memory, that may arise acutely or develop over time, as well as for designing mnemonic aids in order to recall information better. But beside basic science, if you could predict which information is memorable or forgettable, there is a realm of applications that you could work on that lie really everywhere, from data visualization, or making slogans better, to the whole realm of education and the individual differences that may arise between people. Someone may not learn very well; what can we do in order to increase the amount of visual information that this person can grasp and remember? As well as various applications in social networking, faces, retrieving images better, and so on. So understanding what makes information memorable or forgettable is really a very interdisciplinary question, something very exciting for us to work on, because there's a lot, a lot of future in working on that topic. So this is a topic we started in my lab a few years ago. And the best way for you to get a sense of how you start working on memory is to do the kind of game and experiment we had people doing. So welcome to the Visual Memory Game. A stream of images will be presented on the screen for one second each. If you were in front of your computer, your task would be to press a key; but what you're going to do is play the game with me and clap your hands anytime you see an image that you saw before. So you will have to be attentive, because images go by fast in this rapid stream. And you will be getting feedback. So it's a very straightforward memory experiment. And this is the first step in order to get scores on a lot of images regarding the type of information that people will naturally forget or naturally remember. So let's do the game. So are you ready? All right.
So clap your hands whenever you see a repeat. So this is how it will run. Very simple images. [CLAPPING] Fine. Excellent. So images are shown for one second. There's a one-second interval. [CLAPPING] You're good. No false alarms. Excellent. All right. So that was one of the levels, level 9 out of 30, complete. And so here's the game that people had. They could play this game for five minutes, have a break, and come back. And this game had a lot of success when we ran it. I'm going to show many results about this. So you could see your score and the amount of money you had made. This was run on Amazon Mechanical Turk. And this allowed us to collect, from all around the world, a lot of data regarding many, many images. Those visual memory experiments were set up by Phillip Isola from MIT. And you can basically play that game for any kind of information displayed visually. So we did it for pictures, faces, and words. And you're going to see the results. So let's start with the pictures. In the first experiment, we presented 10,000 images, and about 2,200 were repeated many, many times. Those were the ones for which we collected the scores. So for a given subject, those images were actually seen only twice. You have the stream of images at the top. So, for instance, an image will be shown again after 90, 100, 110 images or so. And if this image was one that the subject recognized, then he would press a key. So it's exactly the design you just did. And when you first look at the types of images that are highly memorable or forgettable, well, there's a trend that we would all expect: images that are funny, or have something distinctive or different, or where people are doing various actions, or with objects that are out of context, those tend to be memorable. And, in general, landscapes or images that don't have any activity tend to be forgettable. So we have those scores for more than 2,000 images from that experiment. So one of the first things we need to know is: everyone is playing that game, and we can get each image's memory score, but in order to know if some images are indeed memorable for all of us, or for a good fraction of the population, we need to see if there is consistency between people. So here is a simple measure. You have a group of people looking at those images, and we have the memory scores. You can split the group into two, and rank the images according to the average of the first group, from the most memorable to the least memorable. And then you can also rank the same images for the second group. If the two groups were identical to each other, you would get a correlation between those two rankings of 1. And what we observe when we repeatedly split the group into two like this is a correlation of 0.75, which is pretty high. And this gives us basically the maximum performance we can expect when a group of people predicts the ranking of images for another group of people. And actually, here is the curve that shows what the 0.75 consistency looks like. On the x-axis, you have the image rank according to group number one, so the images with the highest memory scores first. There is a group of them that are above 90%. And it's normal that group number one, in blue, decreases as images are less and less memorable. So that's basically your ground truth curve. And the green shows group number two, totally independent people, and the performance they got in each bin along the image rank.
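The split-half consistency measure just described is straightforward to compute. Here is a hedged sketch on a hypothetical subjects-by-images matrix of 0/1 recognition outcomes; the variable names and the choice of a Spearman rank correlation are assumptions of mine.

import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(hits, n_splits=25, seed=0):
    rng = np.random.default_rng(seed)
    n_subj = hits.shape[0]
    rhos = []
    for _ in range(n_splits):
        perm = rng.permutation(n_subj)
        g1, g2 = perm[: n_subj // 2], perm[n_subj // 2 :]
        # Per-image memory score for each half, then rank-correlate the halves.
        rho, _ = spearmanr(hits[g1].mean(axis=0), hits[g2].mean(axis=0))
        rhos.append(rho)
    return float(np.mean(rhos))

fake = (np.random.default_rng(1).random((80, 500)) < 0.7).astype(float)
print(split_half_consistency(fake))   # near 0 on random data; ~0.75 in the study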
And you can see the curves are pretty close. So a correlation of 0.75 looks like this. It means that, across independent groups of people, some images are going to be systematically more memorable or forgettable. You can see the full range, going below 40% for the images that were forgotten, up to 95% for the ones that were systematically remembered. But, importantly, one group of people is going to predict another group of people. Now, there are several ways to test memory. The way we tested memory here to get ground truth was an objective measurement: you see an image again, and if you remember it, you press a key. But you can also ask people: do you think you will remember an image? Do you think someone else will remember an image? We also ran those subjective memory scores. And we observed this very interesting trend that the subjective judgments do not predict image memorability, which means that, if you ask yourself, am I going to remember this? Well, basically, maybe, maybe not. So subjective judgment of what you think your memory will be, or what you think the memory of someone else will be, is not correlated with true memory, with whatever you're actually going to remember or forget. So this was very interesting, because it shows that objective measurements are needed here in order to really get a sense of what people will remember or forget. We have many papers on this topic since 2010; they are all on the website. Some of them look at the correlation between memory, the fact that you're going to remember certain kinds of images, and other attributes, for instance aesthetics. And, again, we found that memorability is distinct from image aesthetics. This means that you could have an image that is judged very beautiful, or, on the contrary, ugly or boring, and in both cases you could still remember it. So we found this absence of a correlation between those two attributes in our various studies. We replicated this with other data sets and with faces as well. So it looks like what you remember reflects this notion of distinctiveness, but the image can be beautiful or ugly; it doesn't matter. You can still either remember it or forget it. So there was this question about the notion of the lag. The lag is that you could test visual memory after a few seconds or a few intervening images, or you could test it a few minutes or even one hour later, or even days later. Because we were running those experiments on Amazon Mechanical Turk, we did not test at delays of days. However, we did run some with a larger gap, up to 1,000 different images between the first and the second repeat. So here is the design, the one that I showed you, with about 100 images intervening between the first and the second repeat. But what about shorter and longer time scales? All that work is also published; you can go and download the papers and see the details. But the basic idea is that the ranks were conserved. So if one image is very memorable, one of the top after, let's say, a few seconds, it will still be memorable after hours. And if an image is forgettable, one of the most forgettable after a few seconds, it will still be forgettable after hours. The fact that the magnitude, the percentage of images remembered, decreases is normal. That has been known from memory research for decades. So it is expected.
However, memorability here is basically the rank, whether an image is one of the most memorable or forgettable for a population, independently of the raw magnitude: an image that is one of the most memorable under one condition stays near the top under the others. And we ran those experiments both on the web and in the lab, because in the lab we could control for more factors. And it was very interesting to see in the lab experiment that after only 20 seconds there were images that were totally forgotten by a good fraction of people. So some images really do not stick and are basically gone within seconds. We did not really go into short-term memory; we worked in long-term memory, starting at 20 seconds and beyond. But there's this phenomenon. So it suggests that some features of the visual information are encoded in less detail than others, or with more detail than others. And what's very interesting is that you can then go to neuroscience and basically start studying the level of detail or the quality of encoding of an image, and see where in the visual pathway an image is basically gone after 20 seconds or so. So, important point: the rank is conserved. We also looked at those principles of memorability that we found for images in faces. Faces are a very interesting material to work with, because you have basically one object and you look at many, many exemplars; it's all images that look alike. And there's no reason to believe that the high consistency we found for images as different as an amphitheater, a parking lot, a landscape, and so on, will also be found with faces. So we gathered a data set of 10,000 faces. The paper is published, and the entire data set is available on the web. So you can go and download those 10,000 faces as well as all the attributes we studied with the data set. And we also found the same phenomenon, very high consistency, both in the correct positive responses, when people remember seeing a face, and in the false alarms, when people did not see a face but falsely thought they saw it and pressed the key. So this very high consistency for both measurements suggests that, again, in the facial features of people, or at least the way a photo is taken, there is something at the level of the image that will make a face highly memorable or highly forgettable for most people. All the details of this study are on the web. It was a pretty complex study to run, because we have very different sensitivities to faces, for instance the race effect depending basically on where we grew up; we know there are a lot of individual differences. So the way this study was run, the collection of faces followed the US census in terms of male, female, race, and age, starting at 18 years old and older, and so did the population of participants. So, as a group, we showed a collection of faces that matched the population of the people running in the study. And, as I said, all the data are available on the web if some of you want to go back and do additional analyses. So we kept going with this. We have the consistency in the visual material. Now what about words? Words are a very interesting case, because now you do know the words. But are there some kinds of words that we can predict are more forgettable or memorable? Again, there's no reason to believe a high consistency will be found.
But we ran the study twice with two different data sets, again ten thousand items each, two different sets of words. And we found, again, very high consistency: a collection of words was systematically remembered by people and others were forgettable. This work, done in collaboration with Mahowald, Isola, Gibson, and Fedorenko, is under submission. But let me give you a taste of the words that we found. So what makes words more memorable or forgettable? Here is a cartoon that gives you the basic idea: if there is one word for one meaning, so basically a word has a single meaning, it will tend to be much more memorable than if a word has many meanings. In the paper, we also look at the correlation between memorability and imageability or frequency and so on. All this is described, but really the main factor is this one-to-one mapping between a word and its meaning or concept. So let's look at some of the examples. The paper will come with the two data sets and thousands of words that were found memorable and forgettable. So I think if you write a letter, you should not say that your student is excellent but that she is fabulous. And our research is not a blast, it's in vogue. And the ideas of a team are not irrational, they are grotesque. Those are just a few examples; in each pair, the first word had more referents on average and a tendency to be more forgettable, because it might be used for many more things than just one. I also noticed that a lot of the French words tend to be memorable. So we do find this stable principle: the content can predict what types of images, faces, and words are memorable. And we also did it for visualizations, starting to work on the topic of education. So this is very useful, because at least at the level of a group, you can start making that prediction. Oh, I also have massive and avalanche. Forgot about those. So now that we have all those data and can see that there is this consistency, one of the next questions you can look at is: OK, if it seems that we all have a tendency to remember the same images, can we find a neural signature in the human brain? So the question of memorability: is it a perceptual or a memory question? Because in all our experiments, the images are shown for a short time, and then, when they are repeated, you see them a second time. But basically, all the action is at the perception level. Whenever you perceive an image for half a second or one second, there is something going on at the perception level that is going to bias whether this image is going to go into memory or not. So, knowing this, if we want to look at the potential neural framework of memorability, we have to look at the entire brain. We have to look at all the regions that have been found to be related to perception, face perception, picture perception, objects, space, and so on, as well as the medial temporal lobe regions, more in the middle of the brain, that have been related to memory. So this is what we did with Wilma Bainbridge. This is her PhD, basically having a look at all those regions. And here is the very simple experiment we ran. We took a collection of faces and scenes from the thousands we have; scenes to probe the regions that are more activated for scenes, and faces for the regions more activated for faces. And we split each set that way, into a memorable and a forgettable set.
So in those sets, every image is novel. So exactly like in the memory experiment, we show them one time for half a second. You're at the perception level; you're in a perception experiment. You saw those images one after the other, only one time, and all the images are novel. So we're going to look at the contrast of novel image minus novel image, scene minus scene, face minus face, except that some images are highly memorable and some are highly forgettable. Another factor that you have to look at is, it's still possible that within those groups of images and faces that are highly memorable or forgettable, there are a lot of image features that correlate with those labels. If you take a collection of images like I showed you before, do the environments or photos that have people or action tend to be memorable, whereas landscapes tend to be forgettable? Here you have a lot of visual features that would co-vary with the dimension of memorability. So in that brain study, we equalized for that, because we had enough images. Here you have a sample of the two groups, a sample of images, and the type of statistics we looked at. The two sets, for instance for the scenes, were equalized in terms of the type of category you had -- outdoor, indoor, beach, landscape, house, kitchen, and so on -- as well as a collection of low-level features. And you can see some of the average signatures, which are actually identical on a lot of low- to mid- to higher-level image features that were equalized between the two groups. So whatever we find is not going to be due to simple statistics of the images or the types of objects they contain. We could play the same game for the faces, so we did. Here are the numbers again: memorable and forgettable faces that were also equalized for various attributes like attractiveness, emotion, kindness, happiness, and so on, as well as sex, race, expression, and so on. And you can see that the statistics, the average faces for both the memorable and the forgettable groups, are also identical. So with those groups, what's left is hopefully only the factor of something else in the image, at the level of higher image statistics, because only image statistics can explain the fact that very different people will remember the same faces and forget the same faces -- certainly not some obvious low-level image statistics. Those cannot explain the results. So, two years and four studies later, we had replicated this study four times with many different materials. I'm just going to show you one snapshot of the results. This is a multivariate pattern analysis looking at the memorable versus the forgettable groups -- a searchlight MVPA looking for the regions of the brain that have a different pattern (they are also more active, but with a different pattern) for memorable faces. And we find signatures in the hippocampus, the parahippocampal area, as well as the perirhinal cortex that are typical for memorable faces and scenes, and no signature in the visual areas, or even the higher visual areas, because we did equalize for those. So it seems to show that those MTL regions play a role in a kind of higher-order statistical perception, a notion of distinctiveness that is within those images.
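For readers who want the mechanics of the MVPA result just mentioned, here is a toy sketch of the core step behind a searchlight analysis: within one small neighborhood of voxels, train a classifier to separate memorable from forgettable trials and measure cross-validated accuracy. A real searchlight repeats this for a sphere centered on every brain location and maps the accuracies; the shapes, names, and synthetic data here are illustrative assumptions, not the published analysis.

```python
# Toy sketch of the core MVPA step: classify memorable vs. forgettable
# trials from multi-voxel activity patterns within one searchlight
# sphere. A full searchlight repeats this at every brain location.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 50                 # trials within one sphere
labels = np.repeat([0, 1], n_trials // 2)    # 0 = forgettable, 1 = memorable
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8             # inject a weak pattern difference

# Cross-validated decoding accuracy; above-chance accuracy in a region
# means its activity patterns carry information about memorability.
acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```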
So this suggests that at the perception of a new image -- it could be a face, or a scene, or a collection of objects, and so on -- there's already a signature that's going to guide, or bias, whether this is going to be put into memory or not. And we have this at the level of the group; we're looking now at the level of individuals. All right. So it looks like where we're going, given your question, is now trying to model this notion of memorability. We have a good case that there is some information in the image, of a higher-level status -- we don't know which one -- that cannot be explained by simple features and that makes all of us react the same way. Even our brains react the same way; we also have neural signatures of memorability coming up. So we have this intrinsic information that makes all of us remember or forget the same kind of visual information. Can we now, in a way, imitate or model those results in an artificial system? All right. You have all heard about the revolution in computer vision a couple of years ago -- deep learning and those neural networks that are now able to recognize and perform a lot of tasks, some of them at the level of humans, recognizing various objects and so on. And one aspect of those neural networks, and I'm going to talk about them, is that they require a lot of information. You need to teach them the classes you want to distinguish, and they can use, and need, a lot, a lot of data. Everything we did so far, we got on a couple of thousand images, and that's really not enough to even start scratching at computational modeling. So with Aditya Khosla, we recently ran a new large-scale visual memorability study on Amazon Mechanical Turk, this time getting scores for 60,000 photographs. And you have a sample set over here. 60,000. They are all going to be available in a few weeks. The paper is under revision; it's looking good. So as soon as we have a citation, we are going to give away all the data, as well as the scores and the images and so on. And in this experiment, images were presented for 600 milliseconds -- a shorter time, but the results really did not change much. As a snapshot, because the 60,000 images really cover a lot of types of photos -- faces, events, actions, even graphs and so on -- here you see some that were highly memorable or forgettable. There's also the website. It's not populated with many things yet, but it will be very shortly. And the consistency correlation we got on that data set was pretty high again, 0.68. That was expected, and the paper explains the various splits we did. But, again, we find this very high correlation, so we do know that there's something to model there. And -- again, a very quick summary -- the types of images that seem to be the most memorable are the ones which have a focus and a close setting, that show some dynamics and something distinctive, unusual, a little different, whereas the less memorable ones tend to have no single focus, a distant view, static content, and more commonalities. Across all photos, you can still find two images that are both focused and dynamic, and one of them will be more memorable than the other because it has something unusual that now our system can capture. So we have this new data set, we have all the memory scores, we have the high consistency. How do we even start thinking about a computational model of visual memory, or memorability?
Well, in order to give you a sense of one of the basics we need in order to even start thinking of a model, I'm going to show and run another demo. In this demo, you're also going to see some images. Clap your hands whenever you see an image that repeats. OK? Exactly the same game as before. Ready? All right. If everyone plays the game, it will be fun. OK. [CLAPPING] [CLAPPING] A little false alarm. [CLAPPING] False alarm. [CLAPPING] Good. More energy. [CLAPPING] Good. [CLAPPING] Sorry. [CLAPPING] No [LAUGHTER] [CLAPPING] No. [CLAPPING] Yes. [CLAPPING] No. AUDIENCE: Too close. [CLAPPING] AUDE OLIVA: No. Yeah, that was-- yes. [CLAPPING] [LAUGHTER] All right. So for the sake of the demo, I put in images that are really very different, some of which you are familiar with. You have a concept; you know what it is. This is a restaurant. This is an alley. This is a stadium, and so on. And some for which either you don't have a specific concept, or you have the same concept -- texture, paintings, texture, texture, texture. And the basic idea of memory is that you need to recognize, to put a unique tag or collection of tags on an image, in order to remember that individual image. The fact that you saw a collection of textures or paintings or whatever you want to call them, you'll remember that as a group. But to get to the individual memory of one, you're going to need a specific concept. And in order to remember it, this is going to be an abstraction, a format, a collection of words, or a coding that makes it unique. So you need to recognize to remember, which means that if you want a model of memory which starts from the raw image -- I'm not in a toy world here; I really start from the raw image, like the retina -- well, you're going to need to build a visual recognition system first. So first you need a model that recognizes objects and scenes and events and so on, and then, from this, there can be a base to start modeling memory. Fortunately, the field of computer vision has made a lot of progress in the past couple of years, so now we do have visual recognition systems that work pretty well. I'm going to describe them. We first need a visual recognition system. All right. So, what does a visual recognition system need to do? Well, it's your Sunday morning. You're going to the picnic area, and you're faced with that view. You take a photo. This photo actually became viral on the web. And here is the state of the art of computer vision systems. When it comes to recognizing the objects -- I know it's a different view, but it works very well for any view -- object recognition for about 1,000 object categories is reaching human performance so far. And so this will tell you this is a black bear, there's a bench, a table, and so on, and trees in the background. But it's missing the point that this is a picnic area. So you need at least two kinds of information in order to reach visual scene understanding: you need the scene and the context, you need to know the place, and you need to know the objects. So, as I said, so far on the challenge that's called the ImageNet Challenge in computer vision, computer vision models reach average human performance, which is 95% correct, on exemplars of objects that they have never been trained on -- new ones -- for 1,000 object categories. And recently, we published a few papers on the other part of visual scene understanding, the place and the context. And this is an output of our system.
And you can go on the web -- I'm going to give you the address -- and play with it and see its performance at recognizing the context and the place. So, just to put into context what the field of computer vision has been doing for the past 15 years: the number of data sets, and the number of images per data set, has been increasing, so there are more exemplars to learn from. And, for perspective, you can see the estimate for a two-year-old kid. Of course, it will depend on the sampling you use to estimate the amount of visual information that the retina sees, but a child sees much, much more variety and a far larger number of visual inputs. Right now, both ImageNet, which is a data set of objects, and Places, which is a data set of scenes that I'm going to present to you now, have about 10 million labeled images across many categories. Labeled means that for Places, it will tell you this is a kitchen, or this is a conference room, and so on, for hundreds of categories. So 10 million is largely enough to start building very serious visual recognition systems, but it's nowhere near the human brain. However, we might be getting there. So how do we even start to build a visual scene recognition data set? This is work we did and published in 2010, the Scene Understanding, or SUN, data set, where we collected the words from the dictionary that correspond to places at the subordinate level -- I'm going to give you examples -- and then retrieved a lot of images from the web. There was a total of 900 different categories, and about 400 of them had enough exemplars to build an artificial system. So instead of going and only retrieving images for plain categories, like building a data set of bedrooms and kitchens and so on, we considered that for the human brain, each environment is different: there are many attributes you can attach to it, and so on. For most places, it's not only that this is, for instance, a bedroom; you will go further, to a student bedroom or-- Well, I think those are student bedrooms, two doors from my colleague. So there are many types of adjectives that we can use in order to retrieve images that will give us a larger panorama of the types of concepts that are used by the human brain in order to recognize environments. So: a simple bedroom. The tags were put automatically. Superior bedroom. Senior bedroom. Colorful bedroom. Hotel bedroom. And so on and so on. So that was the retrieval we did, which means that now, for every category, there is also a tag in terms of the subtype of environment this can be. Messy bedroom. And a couple of years later, 80 million images later, and after a lot of Amazon Mechanical Turk experiments, we are launching this week the Places2 data set, with 460 different categories of environments and 10 million labeled images. So this is a larger data set of labeled images, ready to be used right away for artificial system learning, deep learning, and so on. Here is just a snapshot of how Places differs from the other large data sets in terms of the number of exemplars. The Places data set is actually part of the ImageNet Challenge this year, which means that you can go to ImageNet, register for the challenge, and download right now, tonight, eight million images of places to use for training your neural networks, as well as a set that will be used for testing, and participate in the challenge. So this was launched last week, and the website associated with Places will be launched this week.
And we decided to just give this away to everyone right away. We are finishing up the paper now; it will be an arXiv paper. No time to wait for months and months. This is a data set that can be used by a lot of people to make progress fast, and so that's what we are doing. So, as I said, computer vision models now require, if you use deep learning, a lot of data, and we hope that with this data set, fast progress is going to be made. What we specifically did is use the AlexNet deep learning architecture -- if you don't know what this is, I can tell you later how to access it, with the paper and so on. This is not my model; this is a model put together by Geoffrey Hinton and collaborators a few years ago, and you can download the model, or download the code and retrain it. So neural nets now are basically based on a collection of operations that are called layers -- convolution, normalization, simple image processing operations that you do in a sequence. You do it one time, then a second and third and so on, and then you have these multi-layer models. And the number of layers is still a question of research. How do layers correspond to the brain? I'm going to say a little bit about that. And using this simple AlexNet model, we built a scene recognition system. And now you can go to places.csail.mit.edu -- a smartphone will work -- take a photo, and it should tell you the type of environment the photo represents. It will give you several possibilities, because environments are ambiguous; they can be of different types. So I don't know if you can read these; let me read a few. The first one says: restaurant, coffee shop, cafeteria, food court, restaurant patio. I guess they all fit. The second one: parking lot and driveway. The third one: conference room, dining room, banquet hall, classroom. That was a difficult one. And the fourth environment is patio, restaurant patio, or restaurant. If you go there, you can also give feedback on whether one of the labels matches the environment that you're looking at. And it should be above 80% correct. This model uses 1.5 million images and 200 categories, so soon, with the larger data set, we hope that things will be even more interesting and accurate. And I took a couple of photos this morning at breakfast. You may all recognize the scenery here. From the breakfast area looking outside: outdoor harbor, dock, boat deck. Yeah, it could actually be on a boat deck looking at the harbor. And otherwise, the breakfast area was restaurant, cafeteria, coffee shop, food court, or bar. All those, again, fit. So those models now work very, very well. Why? Well, let me tell you why. With those neural networks, you can go to any layer, open it up, and look at what every single artificial neuron does -- what we call the receptive field of every single unit in layer one, layer two, layer three, and so on. So the first layer -- here, four layers are shown -- basically learns simple features, contours and simple textures. That's called pool1. And those look like the types of responses of visual cells, possibly V1, V2. I'm going to say more about this. And as you go higher up in the layers, you start having textures and patches that make more sense. And higher up in the layers, like layer number five, you start having artificial receptive fields that are specific to a part of an object, a part of a scene, or an entire object by itself.
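Since the lecture leans on this architecture, here is a compressed PyTorch sketch of an AlexNet-style feature stack -- convolutions, nonlinearities, normalization, and pooling applied in sequence. The layer sizes follow the published AlexNet; the actual Places-trained network may differ in detail, so treat this as a schematic rather than the exact model.

```python
# Schematic PyTorch sketch of an AlexNet-style feature extractor:
# convolution -> nonlinearity -> normalization -> pooling, repeated.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),     # conv1: edges, colors
    nn.ReLU(),
    nn.LocalResponseNorm(5),
    nn.MaxPool2d(kernel_size=3, stride=2),          # pool1
    nn.Conv2d(96, 256, kernel_size=5, padding=2),   # conv2: simple textures
    nn.ReLU(),
    nn.LocalResponseNorm(5),
    nn.MaxPool2d(kernel_size=3, stride=2),          # pool2
    nn.Conv2d(256, 384, kernel_size=3, padding=1),  # conv3
    nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1),  # conv4
    nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),  # conv5: object parts
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),          # pool5
)

x = torch.randn(1, 3, 227, 227)   # one RGB image
print(features(x).shape)          # -> torch.Size([1, 256, 6, 6])
```

The pool1 and pool5 units discussed in the lecture are simply the outputs of the first and last pooling stages in a stack like this.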
Like we can see here some kind of [INAUDIBLE] coffee, as well as the tower and so on. So it seems that the systems are able to recognize environments and objects, but what they learn are the parts and the objects that the environment contains. And I'm going to show examples of those. So, jumping ahead, just giving you a result in neuroscience: there's a lot of debate out there about, OK, well, there are those models with different layers, and you have different models out there -- to what extent do they correspond to the visual hierarchy of the human brain? Well, first, the computational models were inspired by the visual hierarchy -- V1, V2, V4, and [INAUDIBLE] and so on -- knowing that more complex features are built over time and space. So what you can also do is run the same images through a network and through an fMRI experiment, and then look at the correlation between the responses of the cells, let's say in layer one, and the responses you have in different parts of the human brain. And what we find is that layer one corresponds more -- has a higher correlation -- with responses in the early visual areas, literally V1, V2. And as you move up through the layers, there is a higher [INAUDIBLE] correlation with parts of the ventral and parietal parts of the brain. And I know you had the lecture by Jim DiCarlo, which must have explained this. Jim DiCarlo's team did it, Jack Gallant as well in Berkeley, and we also did it with other types of images and another network, and all the results really corroborate each other, with this nice correspondence between low and high visual areas in the human brain and the layers of those multi-layer models. All right. Let me show you some of the artificial receptive fields that we find in the higher layers. So this network was trained for scene categorization. The only thing that this network, the one we are using here, learned was to discriminate between, in this case, 200 categories -- the kitchen, the bedroom, the bathroom, the alley, the living room, the forest, and so on and so on. 200 of those. That's the task. What the network learns, and what we observe, is that the objects carrying discriminant and diagnostic information between those categories form the emerging representations that are automatically, naturally learned by the network. So the network has never learned a wheel, but the representation emerges, as you can see here. This is one artificial neuron: its receptive field and its responses, the highest responses it got over a collection of images. And as you can see, those higher pool5 receptive fields are more independent of location; they are built that way. But the network never learned the parts. This is something that emerged naturally from learning different environments. The other thing that's very interesting in this model, when you open it up and look at the receptive fields using various methods -- here is one -- is, well, this model has never learned the notion of shape or object. So it's going to become sensitive to discriminant and diagnostic information. You have this unit that is discriminant for the bottom parts of either the legs of animate things, or even, you can see, the trees over there. So this is a unit. It seems that, in order to classify environments, it was needed to have units that we might not have a word for. But the human brain might as well have many of those units that do not necessarily correspond to a word.
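Returning briefly to the layer-to-brain comparison described a moment ago: one common way to quantify it is representational similarity analysis -- build an image-by-image dissimilarity matrix from each network layer and from each brain region's responses to the same images, then rank-correlate the two. A hedged sketch follows; the array shapes, region names, and synthetic data are illustrative assumptions, not the specific published analysis.

```python
# Sketch of a representational similarity analysis (RSA) relating
# network layers to brain areas: for the same images, build an
# image-by-image dissimilarity matrix (RDM) from a layer's activations
# and from a region's fMRI responses, then rank-correlate them.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images = 100
layer_acts = {                      # activations per layer (images x units)
    "pool1": rng.normal(size=(n_images, 3000)),
    "pool5": rng.normal(size=(n_images, 9216)),
}
brain_resp = {                      # fMRI responses (images x voxels)
    "V1": rng.normal(size=(n_images, 500)),
    "ventral": rng.normal(size=(n_images, 500)),
}

def rdm(responses):
    """Condensed vector of pairwise correlation distances between images."""
    return pdist(responses, metric="correlation")

for layer, acts in layer_acts.items():
    for region, resp in brain_resp.items():
        rho, _ = spearmanr(rdm(acts), rdm(resp))
        print(f"{layer} vs {region}: rho = {rho:+.2f}")
```

With real data, the pattern described in the lecture would appear as early layers correlating best with early visual areas and later layers with ventral and parietal regions.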
So, basically, with this model you can have a lot of new objects emerging that you might not have thought of, but they become parts of the code needed to identify an environment or an object. We also have another unit for the bottom parts of a collection of chairs. We have chairs themselves, of course, showing up. Faces. The model never learned faces; it only learned kitchen, bathroom, street. We have several units emerging for faces. Why? Because those are correlated with a collection of environments. Then, entire object shapes, like beds -- very diagnostic of a bedroom in this case. That will be a unit that is very, very specific; that one would really be only for beds. And then others, like lamps and so on. There are thousands and thousands here. Another thing the network never learned: screen monitors. And here is a specific unit for that which emerges. Also, collections of objects or space -- a collection of chairs over here. The network found that this is discriminant information to classify environments. Crowds. It's a very interesting unit, because it's really independent of location, as well as of the number of people and whether they are closer or further away, but it does capture this notion of a crowd. The model doesn't have the word crowd; it has never known crowds. It's one of the units that emerge and that now can be used as an object detector to enhance the recognition of what's going on in the scene. So this is an ice skating area, and there is a crowd. And also units that are more specific to space and useful for navigation, for instance -- we have many of those. In that case, just the fact that there are lamps up high, or perspective -- we have a unit specific to this. So it's not only objects as physical objects; there's also a collection of units related to spatial layout and geometry that are also discriminant for environments. And many, many others are showing up. So those object detectors naturally emerge inside this kind of network trained for scene understanding. Now I only have a couple of minutes to wrap up -- 20 minutes -- so I'm going to just give you a hint of the next part. With the Places challenge that is starting this week, certainly in less than a year the computational vision models of scene recognition are going to be very close to human performance. And then there's a long way to go and many more things to match, like the errors: whether the errors look alike, or when a category can have many types of objects, or what can happen next. So you can really expand. But let's say we can consider now that we have a base of visual recognition in a model that works pretty well at the level of humans, or close enough, or will be close enough. So, now that we have that, we can add the memory module. How to add the memorability module is really an open question; there are many ways to do it. We just did it one way, to have a very first baseline model of visual memorability at the level of humans. And this is going to be out -- the paper is in revision -- in a few weeks, and you're going to be able to download the images, the model, and so on. Again, this is model number one, and we hope that a better model can then be done. So we went for the Occam's razor approach, the simplest one given the model: we took AlexNet, and we fed AlexNet with both ImageNet and Places, because the images that are memorable or forgettable might have objects, and they might have places. OK, so let's put the two together so we have more power.
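Before moving on to the memorability model, a note on how visualizations like these emergent units are usually obtained. The first step is simple: for a chosen unit, rank a large image set by how strongly the unit responds and inspect the top images. A minimal sketch of that ranking, with a hypothetical activations array standing in for a real forward pass:

```python
# Minimal sketch of the first step behind these unit visualizations:
# rank a large image set by a unit's response and keep the top images.
# `activations` would come from a forward pass over the data set; the
# random array here is a hypothetical stand-in, images are indices.
import numpy as np

rng = np.random.default_rng(3)
n_images, n_units = 10000, 256
activations = rng.normal(size=(n_images, n_units))  # e.g. pooled pool5

def top_images_for_unit(activations, unit, k=10):
    """Indices of the k images that drive this unit most strongly."""
    scores = activations[:, unit]
    return np.argsort(scores)[::-1][:k]

unit = 42   # hypothetical unit index, e.g. a "crowd-like" detector
print(top_images_for_unit(activations, unit))
# In the real analyses, one then segments the image regions driving
# the response to estimate the unit's effective receptive field.
```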
We trained the model, so we have the visual recognition model. We know that those units make sense; they recognize that this is a kitchen. And we know why, because we have the parts. So that's a classical, standard AlexNet. The output is scene and object categorization, so we are still in categorization land. And then we can remove the last layer. This is a procedure that has been well published in computer vision and computer science, this notion of fine-tuning and backpropagation. You take the network weights that were learned to recognize places, and you fine-tune and adapt the features because the task has changed. The task was recognition, and now, at the end, there is a new task for that network that has learned all those objects and scenes: to learn that those elements are of high, medium, or low memorability, which is a continuous value. By doing this, we now have a model where we give it a new image, and it outputs a score between 0 and 1. The human-to-human consistency, as I showed you, is a 0.68 correlation. The human-to-computer is 0.65, which means that there is now a first model that can basically replicate the memorability of a human group nearly at the level of a human. And we used the data set of 60,000 images to fine-tune the recognition model, as well as to test with images that it had not seen. And because we can open up this new memorability network and also look at the receptive fields of the neurons that are related to high- or low-memorability images -- and we will publish every single receptive field on the web along with this paper -- we can now see the units, this higher-level information of objects or space or parts and so on, that are related to images that are highly memorable, in green, strongly positive, or highly forgettable. And we find, again, that if you have animate objects, certain kinds of roundish objects, and so on -- you can go [INAUDIBLE] -- those will make your images more memorable. So our last part is: now that we have this model that spits out responses at the level of humans and indicates the parts that are related to higher or lower memorability, at least as a guideline, we can go back to a given image and emphasize the parts that correspond to the high-memorability receptive fields and de-emphasize the parts corresponding to the forgettable receptive fields. And this gives you images, on the right, that have been weighted by the elements that are memorable and the elements that are forgettable. So maybe here it doesn't matter that the ground is forgettable; here, the parts I'd like to emphasize are the memorable ones. But, for instance, in this image we have two people, and, well, she just happened to be more forgettable in this case -- which would make her a perfect CIA agent, if we think about it. And we did test those. Then, for various scenes, we find that the elements of exit or entrance -- where there's basically a path and a 3D structure for navigation -- tend also to be more memorable, so those are highlighted in those images. Another example, where the kids will be more memorable than the features of the person. And I don't-- Well, I'm not an expert in games; you can explain that one to me. So I have to stop, because we need a break, and I might need to talk. However, here is a vision of where we are going, and maybe other people will be interested to come along on this adventure, because it's really, really just the beginning.
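A hedged sketch of the fine-tuning step just described: take a pretrained AlexNet-style network, swap the final classification layer for a single regression output, and train on (image, memorability score) pairs. The torchvision weights, loss choice, and learning rates below are illustrative assumptions, not the published training recipe.

```python
# Sketch of fine-tuning a pretrained classification network into a
# memorability regressor: replace the last layer with a single output
# unit and train on (image, score) pairs, with scores in [0, 1].
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights="DEFAULT")   # pretrained backbone (downloads)
model.classifier[-1] = nn.Linear(4096, 1)   # new regression head

# Smaller learning rate for pretrained layers, larger for the new head.
optimizer = torch.optim.SGD([
    {"params": model.features.parameters(), "lr": 1e-4},
    {"params": model.classifier.parameters(), "lr": 1e-3},
], momentum=0.9)
loss_fn = nn.MSELoss()

def train_step(images, scores):
    """One gradient step on a batch of images and human memorability scores."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = loss_fn(pred, scores)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative batch: 8 RGB images with hypothetical human scores.
images = torch.randn(8, 3, 227, 227)
scores = torch.rand(8)
print(train_step(images, scores))
```

Model quality can then be scored exactly as the human consistency was: rank-correlate predicted scores with held-out human scores. The 0.65 human-to-computer figure quoted above is such a correlation.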
But if you can now have a model that, at the level of a group of humans, predicts which images are memorable, and that also matches parts of the visual regions and even higher-level object recognition, then you can start building this bridge, in this line of work, between the human brain and all its parts, from perception to memory, and computational models, to characterize the computations underlying those particular regions. Because now you have a model zero. I'm not saying that the model I'm showing you is a model that imitates the brain. But it's a model zero, and now it can be tuned to better learn the calculations that are done at the different levels along the visual-to-memory hierarchy, in order to characterize visual memory. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_61_Nancy_Kanwisher_Introduction_to_Social_Intelligence.txt | NANCY KANWISHER: I'll just be brief today, but you can check out some of my stuff at the website up there. If you're confused by my appearance, if you've met me before, yes, I used to look like that. But at a deeper level, really, I look like this. This is me, and you look like that, too, inside. And these are parts of-- is that showing? Yeah. These are parts of my brain that we've mapped with functional MRI, regions that were either discovered in my lab, or that my colleagues discovered and that we then ran those kinds of scans on in my lab. These are all regions that do very specific things and that, to me, are a big part of the story of how we are so smart. So my interests, at a very general level, are to answer things like: what is the architecture of the human mind? What are its fundamental components? And there are lots and lots of ways to find those fundamental components. Functional MRI, which is how we made this picture, is just one of a huge number. I loved Patrick's comment that you should find questions, not hammers. I kind of like my hammer, I have to confess. But questions are more important. And there are lots of ways to approach this question of the basic architecture of the human mind. I also want to know how this structure, which is present in every normal person-- I could pop any of you in the scanner and make a picture like this of your brain; OK, it would take a little while, but it wouldn't take that long-- how does that structure arise over development? How do your genetic code and your experience work together to wire that up when you're an infant and a child? How did it evolve over human evolution? This is sort of what's sometimes called a mesoscale, this really macroscopic picture of the major components of the human mind and brain. But of course, we also want to know how each of those bits works. What are the representations that live in each of those regions? And how are they computed? And what are the neural circuits that implement those computations? And of course, cognition doesn't happen in just one little machine in there. It's a product of all of these bits working together. We want to understand how all of that works, too, and how all of that goes together to make us so smart. And that's related to a question that I'm deeply interested in, which is: what is so special about this machine, which looks a lot like a rodent brain? And it's smaller than a whale brain or a Neanderthal brain, so it's not just that we have more of it. What is so special about this thing that has put all of us here, interacting with each other and studying this thing -- something that no other brain is doing, no other species' brain? So are there special bits? Do those bits work differently? Are there special kinds of neurons? I don't think so; some people do. What is it about this that has brought us all right here? OK, so those, at a top level, are some of the questions I would most like to answer. Not that I know how to approach any of them, but I think it's important to keep an eye on those goals, even when you don't quite see how you're going to get there.
My particular focus in the CBMM Project is to look at social intelligence, which is one piece of that puzzle. And so, why social intelligence? Well, just briefly, I think social cognition is in many ways the crux of human intelligence. OK, and it's a crux in a whole bunch of different senses. One is it's just the source of how we're so smart. Like, if you think about all the stuff you know, OK, do a quick mental inventory. OK, what's all the stuff you know? Like, make a little taxonomy. There's this kind of stuff, it's all lots of different kinds of stuff you know. OK, now how much of that stuff that you know would you know if you had never interacted with another person? A lot of it, you wouldn't know, right? So a lot of the stuff we know and a lot of the ways that we're smart are things that we get from interacting with other people. That's social cognition. OK. Another sense in which social cognition is the crux of human intelligence is many people think that the primary driver of the evolution of the human brain has been the requirement to interact with other people who are, after all, very complex entities, and to be able to understand how to work with them, and what they're doing, and what they'll do next is very cognitively demanding. And so that may be one of the major forces that has driven the evolution of our brain. Another sense in which social intelligence is the crux of human intelligence is that it's just plain a large percent of human cognition. OK, so we do versions of social cognition much of every day. Right now, I'm having these thoughts in my head. God knows what that looks like neurally. I'm translating that into some noises that are coming out of my mouth, you're hearing those noises, and you're getting-- let's hope-- kind of similar thoughts in your head. That is a miracle. Nobody has the foggiest idea how that works at a neural level. Nobody can even make up a sketch of a hypothesis of a bunch of neural circuits that might be able to make that happen. Right? That's a fascinating puzzle, and it's also of the essence in human intelligence. And we do it all the time, not just speaking per se, but all the other ways that we share information with each other. So one, social cognition is just what we do all day long every day. It's also a big part of the surface area of the cortex. So this cartoon here shows-- with some major poetic license-- brain regions that are involved in different aspects of social cognition. And it's just a big part of the cortical area as well. OK. Another sense in which social cognition is of the essence in human intelligence is that many of the greatest things that humanity has accomplished are products of people working together. So all of that is the big picture on why social cognition is cool, and important, and fundamental. The part of it that we're focusing on in our thrust within this NSF grant is something I call social perception. OK, so by social perception, I mean this spectacularly impressive human ability to extract rich, multidimensional social information from a brief glimpse of a scene. From a brief glimpse at a person, you can tell not just who that person is, you can tell what they're trying to do. You can tell how they feel. You can tell what they're paying attention to. You can tell what they know and who they like. OK? And that's just the beginning. OK? So the work in our thrust tries to approach all of these different kinds of questions that we are calling as part of our PR of this NSF grant. 
It's kind of an organizing principle: the Turing questions, these demanding, difficult computational problems of social perception. Who is that person? What are they paying attention to? What are they feeling? What are they like? Are they interacting with somebody? What is the nature of that interaction? And so on. OK? So the general plan of action in how to approach this in our thrust is first to study these abilities in the computational system that's best at them, namely this one-- and those out there, yours, too-- the human brain. And so the roadmap here is to first do psychophysics, characterize simple behavioral measurements-- what can people do, what can't they do-- from simple stimuli, and quantify that in detail. Ask, how good are we at it? Maybe some of these things that we think we can do, like size up somebody's personality in three seconds when we first meet them-- feels like you can do that, or at least you get a read on them-- I mean, is that based on anything? Is that just garbage? Right? Are we actually tapping into real information there? What cues are we using when we make those high-level social inferences? What is the input that we get, that we use as a basis for analyzing this particular percept, or that throughout life we've used to train up our brains to be able to do this? So the second approach, once we have some kind of sense of what those abilities are-- that's sometimes called the Marr theory level, characterizing what we can do, right-- is that we can then try to computationally model this. And so there's lots of different ways to do this, and many of the other thrusts that you'll hear about are really tackling that problem. Another thing we can do is, of course, characterize the brain basis of these abilities, and we can do that with all kinds of methods. We're using, in our thrust, functional MRI, intracranial recordings, and something called NIRS. This is the ability to make measurements of blood flow changes in very young infants. And so we can characterize these brain systems in adults and infants. And that gives you a leg up in understanding these other broader questions about how the whole system works, in a number of different ways. Just seeing how the brain carves up the problem of social perception into pieces already gives you some clues about the kinds of computations that may go on in each of those pieces. OK? OK. So that's the overview. There's many, many ways you can do this, and of course, people all over the place are doing this. There's nothing all that unique about it. This is just our framework here. Some of the specific projects that are going on include some work on face recognition, which is, of course, a really classic question that many people have been approaching. My post-doc, Matt Peterson, here, has done some very lovely work where he's shown that, actually, where you look on a face is very systematic. You don't just look anywhere, right? When you first make a saccade into a face-- somebody appears in your visual periphery, right-- of course, all the high-resolution visual abilities are right near the center of gaze around the fovea, where you have a high density of photoreceptors and a shitload of cortex-- to be technical about it-- devoted to the center of gaze. Right back here, in primary visual cortex and the first few retinotopic regions, you have 20 square centimeters-- that's like that-- of cortex allocated to just the central two degrees of vision. Right?
So you have a lot of computational machinery doing just that bit right there. Well, when a face appears in your periphery, you move that bit of your cortex, boom, right on top of it. So you have all that computational machinery to dig in on the face, right? OK, so what Matt has shown is that the particular way that you allocate that computational machinery, namely by making an eye movement to put that stimulus right on your fovea, people do that slightly differently. Some people fixate on a face up here, some people fixate on a face down there, and most people fixate someplace in the middle. OK? Well, so why is it interesting? Here's why it's interesting. People do that in very systematic ways. And if you look up here, you pretty much always look up there. And if you look down there, you pretty much always look down there. And this has computational consequences. If we brought you guys into the lab and ran you on an eye tracker for 15 minutes, we'd find out which of you look up there and which of you look down there. And if we took those of you who look up here, and we presented a face by flashing it briefly while you're fixating so that the face landed in your not-preferred looking position, your accuracy at recognizing that face would be much lower, and vice versa. If you're one of the people who looks down there, and we flash up a face so that it lands right there on your retina, you're much worse at recognizing it. And what that means is that this fundamental problem that you'll hear about in the course, that Tommy has worked on, as have many people-- it's one of the central problems in vision research: how we deal with the many different kinds of images an object can make on our retina, by where it lands on the retina, how close it is to you, the orientation, the lighting, all these things that create this central problem in vision of the variable ways an object can look. A big part of how we solve that for face recognition is we just move our eyes to the same place. Position invariance problem solved, mostly. OK, it's kind of a low-tech solution. It's a good one. OK, anyway, so Matt has been working on that for a while, and so far, most of that is lab studies. Now what he's done is he's using mobile eye trackers, which look like this, and a GoPro attached to his head, because the mobile eye trackers don't have very good image resolution. And so he's sending people around in the world, and he's finding that, first of all, yes, in fact, when you're walking around in the world-- not just when you're on a bite bar in a lab, you know, with a tracker and a screen-- the people who look up here also look up there in the world, right? So that's just a reality check that shows that our technology is working. And now Matt is using this to ask all kinds of questions. For example, social interactions: where do people look in social interactions? Can you tell stuff about what they think about each other based on where they look on faces, right? We want to run-- this is fruity. We haven't set it up yet, but we want to run speed dating experiments in the lab with people wearing eye trackers. I bet in the first few fixation positions, you can tell who's going to want to recontact who. I don't know. We haven't done that yet. OK, that's a little trashy, but it's kind of interesting. Some interesting scientific questions are a little bit trashy, you know. Some trashy questions are not scientifically interesting. I think that's one of those rare ones that's actually both.
Anyway. We also want to characterize-- a whole other part of this is this question that people have been considering for a few decades now of natural image statistics, right? So people have done all this stuff, collecting images, and at first, they did it really low-tech, and then the web appeared. And it's like, oh, now there's a lot of images out there, and we can just collect them easily. And let's characterize them. What are natural images like? So it's a whole set of math where people have looked at those natural images, and characterized them, and tried to ask how the statistical properties of natural images have-- how we have adjusted our visual systems to deal with the images that we confront. And that's a cool and important area of research. But in all of that work, nobody's actually used real natural images, right? The images on the web, somebody stuck a camera and put it there, and then they threw away most of the pictures they took. The ones that land on the web are the ones that have good resolution, where people weren't moving in and out of frame, things weren't occluded. They're not at all like the actual images that land on your retina. So we're collecting the actual images that land on your retina. And we're doing it with mobile eye trackers, sending people around in the world using these nice GoPro systems to give us high resolution. And importantly, not only are we collecting real natural image statistics from these real natural images, we know, for each frame, where the person was looking. And that's important for the reason I mentioned a while ago, that most of your high-resolution information is right at the center of gaze. And the information out in the periphery is pretty lousy. OK, so that's one project that I described too long, so I'll whip through the others more briefly. We want to know how well people can read each other's direction of attention. OK, so when I'm lecturing now, if you guys get bored and look at the clock, I will see it, right? And that's just one of these things, you know? We're very attuned to where each other are looking, and that's very useful information. You meet somebody at a conference, and you see them make a saccade down to your name tag, and it's like, damn it, doesn't this person remember who I am? You know? I'm very aware of this because I'm mildly prosopagnosic. So if I've met you before, and I'm slow to register, don't take it personally. I'm just lousy. It takes me a long time to encode a face. Anyway, we're very attuned at where each other are looking. And so there's been a lot of work on how precisely we can tell whether somebody is looking right at you versus off to the side. Try this at lunch. When you're in the middle of a conversation with somebody, fixate on just the side of their face, not way off to the side, just like here, and just do that for a few seconds. It's deeply weird. The person you're talking to will detect it immediately, will feel uncomfortable, until they realize what you're doing, and then you guys will have a good laugh. And that will show you how exquisitely precise your ability to read another person's gaze is. It's really very precisely tuned. OK. So there's a lot of work on that, but there's less work on how well I can tell what exactly you're looking at if it's not me. That is, I can tell if you're looking at me or off to the side, or this side, or that side. But what we're looking at is how well can I tell what object you're looking at? 
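Before going on, a brief aside to make the gaze-centered natural image statistics project above concrete: given each video frame plus the tracked gaze position, one can crop a fovea-centered patch and accumulate statistics over those patches rather than over whole photographs. A minimal sketch under that assumption; the arrays, patch size, and the choice of an average power spectrum as the statistic are all illustrative, not the lab's actual pipeline.

```python
# Minimal sketch of gaze-centered image statistics: crop a patch around
# the tracked gaze point in each frame, then accumulate a statistic
# (here, an average power spectrum) over the crops instead of over
# whole photographs. Frames and gaze positions are hypothetical arrays.
import numpy as np

rng = np.random.default_rng(4)
n_frames, h, w, patch = 500, 720, 1280, 128
frames = rng.random((n_frames, h, w))        # grayscale video frames
gaze = np.column_stack([rng.integers(patch, h - patch, n_frames),
                        rng.integers(patch, w - patch, n_frames)])

def foveal_crop(frame, gy, gx, half=patch // 2):
    """Patch centered on the gaze point (gy, gx)."""
    return frame[gy - half:gy + half, gx - half:gx + half]

spectrum = np.zeros((patch, patch))
for f in range(n_frames):
    crop = foveal_crop(frames[f], *gaze[f])
    spectrum += np.abs(np.fft.fftshift(np.fft.fft2(crop))) ** 2
spectrum /= n_frames
print(spectrum.shape)   # average power spectrum of fixated image regions
```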
And that's an important question because many people have pointed out that a central little microcosm, kind of a unit of social interaction, is something called joint attention. And joint attention is when you're looking at this thing, and I'm looking at it, and I know you're looking at it, and you know I'm looking at it. That's a cosmic little thing. Like, we can have this little moment, right? Joint attention, OK? And people have argued that that's of the essence in children learning language. It's of the essence in all kinds of social interactions. And by most accounts, no other species has it, not even chimps. OK? I mean, there's still some debate about this, and people niggle and stuff, but basically, they don't have it in anything like the way we have it. So we want to know, what is the acuity of joint attention? OK, so I was supposed to do that briefly. I can't seem to be brief. OK. So that's a whole project that's going on with Danny Harari and Tao Gao. We're also asking how well people can predict the target of another person's action, right? So if I go out to reach this, at one point-- well, there's only one thing there-- but if we had a whole array of things, at one point when I'm reaching for an object, can you extrapolate my trajectory, look at my eye gaze, and use all of those cues to figure out what is the goal of my action? Here's a cool way to look at how well people can predict each other's actions. This is work by Maryam Vaziri-Pashkam, shown here, who's a post-doc at Harvard working with Ken Nakayama, who will give a lecture later in the course. And what they're trying to do is get an online read of how well people can predict each other's actions. And so obviously, this happens in all kinds of situations, especially in sports, right? If you're playing basketball or ultimate frisbee, it's all about predicting who's going to go where when and trying to take that into account with your actions. So they've set this up in the lab. And they have a piece of glass here, and there's two Post-its on this piece of glass. And one person's task is to reach out and touch one of those targets quickly. And the other person who's the goalie watches them through the glass and tries to touch that target as soon as possible after the first one does. OK? And so it's just a basic little game. And so they have little sensors on each person's finger so they can track the exact trajectories and get reaction times. They're just behavioral measurements, but they're very cool. So what they find first of all is that the goalie, the person who's trying to reach to respond to the other person, can do that extremely fast, right? They launch their hand to the correct target within 150 milliseconds. Well, you should immediately realize that something's fishy. You can't do that. It takes about 100 milliseconds just to get to V1. It takes, I forget how long, but a few tens of milliseconds to send the signal out from your brain out your arm to initiate the movement. So how could you possibly do all of that in that time? Well, you can't. And what that means is that people are actually launching the hand action, the goalie's launching the action before the other person has actually started moving their finger. They've started processing it before. And the way they've done that is before this person starts, before their hand moves at all, they've subtly changed their body configuration in ways that the other person can read. OK? Now, on the one hand, OK, duh. You're playing this game. 
You learn to exploit cues. We're really great at figuring out cues quickly, and using them, and learning to use them. But here's the-- one second-- here's the cool thing about this task: this immediate, ultrafast reaction time happens on the very first few trials. So the ability that this task is tapping into is not that the goalie can learn what cues are predictive given enough trials and feedback. No, they do it right off the bat. This task is tapping into an ability that we all have already, right now, to read each other's actions and predict each other's behavior. And so people with no instruction and no experience whatsoever in this novel task know that this subtle little cue-- the way the body moves a little bit before the person's finger even starts to move-- is informative, and they can tell what it's predictive of. So that's just another way to characterize people's abilities in social perception-- one of the many different things that we just see really well in other people's actions. OK, that's what I just said, all right. All right. I'm going to skip over some stuff. We're looking at perception of emotional expressions. Almost the entire literature is based on staged emotional expressions on faces -- a huge literature with neuroimaging and behavior, and it goes back forever. But my colleague Elinor McKone has pointed out that actually, it would be important to look at real emotional expressions on faces. Maybe that's different behaviorally. It turns out it's very different behaviorally. One, you can tell if somebody's faking an emotional expression or if it's a real one. Like, OK, which of these is real fear, and which of these is staged fear? Duh! OK, so one, we're really attuned to that. I think that's really interesting. Just as a social perceptual ability, we spend a lot of time trying to figure out who's sincere, who's genuine, who's faking something, what's for real, right? You know? There's all kinds of shades of that. And here's one little piece of it, right? So I think that's very interesting. And they've shown that behaviorally, these phenomena are very different. Just one example. A prior literature had shown that people with schizophrenia are particularly bad at reading facial expressions, using the standard measures, the standard stimuli, the Ekman six facial expressions. These guys replicated that finding and then showed that when you run the same experiment, but using not staged but real emotional expressions, schizophrenics are better than everyone else. OK, so it matters behaviorally, and it's interesting. OK. All right. Other things that we're doing-- right. Leyla, your TA here, who's done beautiful work on her thesis work with Tommy using MEG and other methods, is now working with me and Gabriel, using some of this magnificent data that Gabriel has collected over a bunch of years, where he's got intracranial recordings from human brains while people watch movies. This is so precious. These data are like a dream to me, as somebody who's been using functional MRI as my main hammer for the last 15 years. Functional MRI is magnificent, it's wonderful, it's fun, but it has fundamental limits. One, it has no time information worth a damn. And the computations that make up perception, including social perception, and language processing, and most of the interesting aspects of cognition, happen on the order of tens of milliseconds. We can't see any of that. It's all just squashed together like a pancake, right, with functional MRI.
With intracranial recordings, you have exquisite time information, and you can see computations unfold over time. That's very precious. Second of all, in principle, with intracranial electrodes, you can test causality, something you can't do with functional MRI. You can stimulate and ask what tasks are disrupted. All right? So there's a huge number of cool things you can do with intracranial recordings. Leyla is looking at some of the data that Gabriel has been collecting, with intracranial recordings of people watching movies. And because these are rich, complex social stimuli, she's going to look at all kinds of things that we can try to extract from those data. Like, can you tell the identity of the person who's on the screen right now? Can you tell from their face, their voice, their body? Can you tell what action they're carrying out? Can you tell if the person on the screen right now is a good guy or a bad guy? Right? Can you tell what kind of social interactions are going on? So we know all of this stuff, all this information, is extracted in the brain, because people are good at it. But to get a handle on the actual neural basis of how we carry out those perceptual processes, this will be a really cool tool. So that project is just starting now. And in other projects going on, Lindsey Powell, shown here, who's working with Rebecca Saxe, and Liz Spelke, and others, is using this NIRS method to look at blood flow changes in response to neural activity in infant brains. She's looking at some of those specializations that I showed you in my brain at the beginning and asking which of those are present in infancy, a totally cool question. And Ben Deen, and Rebecca Saxe, and me, and a bunch of others are looking at a big chunk of the human brain that was one of my colored patches before. This whole dark gray region here is called the superior temporal sulcus. This is an inflated picture of the brain. That means-- usually, the cortex is all folded up inside the head; you have to do that to fit it in there. But if you want to see the whole thing, you can mathematically inflate it. So that's what's happened here. And the dark bits are the bits that were inside of folds before it was inflated. So they're inside a sulcus, but now shown blown out to the surface. So this superior temporal sulcus running down here is one of the longest sulci in the human brain and one of the coolest. And an awful lot of social perception goes on right there. Ben Deen has a paper in press and some ongoing work where he shows that lots of different kinds of social, cognitive, and perceptual abilities actually inhabit somewhat distinct regions along the superior temporal sulcus. They're not perfectly discrete. Nothing is a neat little oval in the brain. Actually, they somewhat overlap, but there's a lot of organization in there. And that's cool because it gives us a lever to try to understand this whole big space of cognition. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_31_Liz_Spelke_Cognition_in_Infancy_Part_1.txt | ELIZABETH SPELKE: I want to start with an observation about this summer school. There's a lot of development in this summer school. You've got two full mornings devoted to it-- today and on Thursday. It also came up pretty majorly in Josh Tenenbaum's class last Friday and, I learned early this morning, also in Shimon Ullman's class yesterday afternoon, which I couldn't be here for. And the issues have come up in many other classes as well, including Nancy's, Winrich Freiwald's, and so forth. Now, what's come up is not only general questions about development, but specific questions about human cognitive development -- questions that have been addressed primarily through behavioral experiments, not experiments using neural methods or computational models. And the topic that I'm going to be trying to-- that Allie and I will try to get you to think about this morning is even narrower than that. It's about the cognitive capacities of human infants. And I think a fair initial question would be, why so much focus on early human development? And that question will get sharper if you look at where major organizations are putting their research money. They are not putting it into the kind of work that I'm going to be talking about today. In the Obama BRAIN Initiative, where they're looking for new technologies, there's no call for new technologies to figure out what human knowledge is like at or near the initial state and how it grows over the course of infancy. And the European Human Brain Project doesn't have development as a major area in it, either. So I think it's fair to ask, why is CBMM taking such a different approach and putting so much emphasis on trying to get you guys to think about and learn about human development? There are two general reasons, I think. One is, it's intrinsically fascinating. Come on. We are the most cognitively interesting creatures on the planet. And we're extremely flexible. At the very least, we know that a human infant can grow up to be a competent adult in any human culture of the world today and in any human culture that existed in prehistory. And that means extremely varied circumstances: they've had to learn extremely different things under different conditions and have succeeded at doing that. We also know that by the time they start school, if they go to school at all, the really hard work of developing a common-sense understanding of the world is done. That is, it's not explicitly taught to children. Most of it isn't even very strongly implicitly taught to them, in the form of other people trying to get them to learn things. What you're trying to do when you have a young kid, as those of you who have them, or have had them, know, is you're trying to get them not to climb off cliffs or explore the hot pots on the stove and so forth. You're really not spending very much of your time trying to get them to learn new stuff. They're doing that on their own. So it's, I think, a really interesting question: how do we do that? Intrinsically interesting in its own right, even if it were of no other use to us.
But historically it's also been recognized as being really important for efforts to understand the human mind, understand the human brain, and build intelligent machines. So Helmholtz, who came up in Eero's talk last night, was not only a brilliant neurophysiologist and a physicist, he was extremely interested in perception and cognition. And he wrote about fundamental questions about human perceptual knowledge and experience. How is it that we experience the world as three-dimensional? He concluded that we didn't know the answer and never could know the answer, unless we could find ways to do systematic experiments on infants of the sort that could already be done to reveal mechanisms of color vision, for example, as were described last night on adults-- systematic psychophysical experiments on infants. But he looked at infants and said, I don't see any way to do that. We can't train them to make psychophysical judgments and so forth. But he was aware of their centrality. So was Turing, who in thinking ahead to how one might build intelligent machines, suggested that one aim to build a machine that could learn about the world the way children do. And within the work that's come up so many times here, the whole Hubel-Wiesel tradition that started in the late '50s, I think one of the most exciting and important developments was the move from focusing just on the response properties of neurons in mature visual systems to focusing on the development of those neurons and the effects of experience on them. Once it was discovered that you get these gorgeous stripes of monocularly-driven cells in V1, it immediately became really interesting to ask, suppose an animal were only looking at the world through one eye? Or suppose they could look at the world through the two eyes, but not at the same time, or not at the same things at the same time? What would happen to those cells? And there was gorgeous work addressing those questions from the beginning. Now, that work has somewhat receded from attention. I think that's a mistake. I think that there's a great deal to be learned from those kinds of studies now. And if I get nothing else across over this time, I hope you'll at least get the idea that this is a field worth following, looking at development in humans, looking at development of perceptual and cognitive capacities in animal models of human intelligence as well. So more specifically, I think there are three questions about human cognition on which studies of early development in general, and of human infants in particular, can shed light. Two of them I'm not going to really be talking about today, except indirectly. One is the question, what distinguishes us from other animals? We come into the world with very similar equipment. But look what we do with it. We create these utterly different systems of knowledge that no other animal seems to share. What is it about us that sets us on a different path from other animals? That's question one. And the other question I won't talk about-- well, I'll talk about it a tiny bit, but not directly-- is, where do abstract ideas come from? It seems like we not only develop systems of knowledge, but those systems center on concepts that refer to things that could never in principle be seen or acted on. Like the concept "seven," or the concept "triangle," or the concept "belief," or ethical concepts and so forth. Abstract concepts organize our knowledge. 
But since they can't be seen or touched or produced through our actions, how do we come to know them? I think studies of early development can shed light on that as well. But the question I want to focus on today is the third question, and it's the one that Josh raised on Friday. How do we get so much from so little as adults? As adults, you look at one of the photographs he showed of just an ordinary scene and you can immediately make predictions about, if you were to bang it, what would happen? What would fall? What would roll? We seem to get this very, very rich knowledge from this very, very limited body of information at any given time. And what that suggests is that we are able to bring to bear in interpreting that scene a whole body of knowledge that we already have about the world and how it behaves. But that raises the question, what is it that we know and how is our knowledge organized? What aspects of the world do we represent most fundamentally? Which of our concepts are most important to us and generate the other concepts and so forth? How can we carve human knowledge at its joints? And now this can be studied in adults and you've seen a number of examples of this. You saw it in Nancy's talk last Tuesday, right? Anyway, last week sometime. Studies using functional brain imaging to get at our representations of human faces. You saw it in Josh's talk. He was mostly using data from adults to be probing the knowledge of intuitive physics that he was focused on and that his computational models are trying to capture. You're going to see it on Thursday in-- no, tomorrow in Rebecca Saxe's talk, where she'll talk about human adults' attributions of beliefs and desires and other mental states to people. It's certainly studyable in adults, but it's difficult to answer these questions. It's difficult to answer these questions in any creature, but I think it's especially difficult to answer these questions in adults for a couple of reasons. One is that our knowledge is simply too rich. By the time we get to be adults, we know so much and we have so many alternative ways of solving any particular problem, that it's a real challenge to try to sift through all our abilities and figure out what the really fundamental, most fundamental concepts that we have are. And the second problem with adults is we not only know too much, we're too flexible. We can essentially relate anything to anything. We can use information from the face to answer all sorts of questions about the world. And here, I think, infants are useful for a maybe seemingly paradoxical reason. They're much less cognitively capable. They know much less about the world and they're far less flexible-- I'll show you examples of this-- far less flexible in the kinds of things that they can do with the knowledge that they do have. Nevertheless, they seem to come into the world equipped with knowledge that supports later learning. And because it's supporting later learning, it's being preserved over that learning. It's being incorporated in all of the later things that we learn. So it remains fundamental to us as adults. And I think this can help us, to think about how our own knowledge of the world is organized. OK. So that's a general overview. How do we study infants? Now here's where the tables turn radically. We have way better methods for studying cognition in adults than we do in infants, just as Helmholtz thought. They can't talk to us. They don't understand us when we talk to them, so we can't give them structure. 
Oh, and unlike willing trained animals, you can't train them to do things, at least not in any extended sense. They can't do much. I'm most interested in infants in the first four months of life before they even start reaching for things, much less sitting up by themselves or moving around. The interesting thing is, from day one, from the moment that they're born, they're observing the world. They're looking at things and they're getting information from what they see. Now, their observations-- we've learned over the last half century or so that their observations are systematic and they're reflected in very simple exploratory behaviors, like when a sound happens somewhere in the visual field, turning the head and orienting it toward the sound. Even newborn infants will do that. Or if something new or interesting is presented, infants will tend to look at it. And these behaviors I think can tell us something about what infants perceive and know. And before getting to the real substance of what I want to focus on today, let me give you a few examples of this. What kinds of things do infants look at? Well, if you present even a newborn infant-- infants at any age, really-- with two displays side by side, and vary properties of those displays and the relation between them, you'll see that they look at some things more than others. So they'll look at black-and-white stripes more than they'll look at a homogeneous gray field. That's useful. It allowed people to get initial measures of the development of visual acuity which infants-- it actually overturned a somewhat popular view that at birth, infants couldn't see at all. We know from these simple studies that they can. And we also know that their acuity starts out very low but gets pretty good by the time they're four to six months of age. It doesn't reach full adult levels until about two years. We also know that they look at moving arrays more than stationary arrays, and they look at three-dimensional objects more than two-dimensional objects. In addition to having intrinsic preferences between different things, they also have a preference for looking at displays that change or displays that present something new. So jumping from the '50s when those first studies were done up to the '80s, there was a whole flurry of studies showing babies pairs of cats on a series of trials and then switching to a cat and a dog. And the babies would look longer-- these are three-month-olds-- would look longer at the dog than at a new example of a cat. So they're able to orient to novelty. And they also look longer at a visual array that connects in some way to something they can hear. Now, one of the things I spend a lot of my time studying is foundations of mathematics-- numerical and spatial cognition in infants. I'm not going to talk about it at all today. But I kind of couldn't resist giving just one example of looking at what you hear that connects to infant sensitivity to a number. This is a study that was conducted in France by Veronique Izard and her colleagues with newborn infants in a maternity hospital. She played infants sequences of sounds, and each sequence involved repetitions of a syllable. For half the infants, each syllable appeared four times. For the others, it appeared 12 times. And for the ones for which it appeared four times, each syllable was three times as long. So the total duration of a sequence was the same for the two groups, but one involved four syllables and one involved 12. 
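(A quick editorial aside: the duration-matching logic in that familiarization design is easy to check mechanically. Here is a minimal sketch; the 300 ms and 100 ms syllable durations are illustrative assumptions, not figures reported from the study. Only the three-to-one duration ratio and the 4-versus-12 counts come from the description above.)

```python
# Hypothetical timing check for the familiarization design described above:
# one group hears 4 repetitions of a syllable, the other 12, and the
# 4-repetition syllables are three times as long, so total sound per
# sequence is equated. Absolute durations below are illustrative.

def total_duration_ms(n_syllables: int, syllable_ms: float) -> float:
    # Total sound per sequence, ignoring any inter-syllable gaps.
    return n_syllables * syllable_ms

four_group = total_duration_ms(4, 300.0)     # fewer, longer syllables
twelve_group = total_duration_ms(12, 100.0)  # more, shorter syllables

# Equal totals mean the raw amount of sound cannot explain a preference
# that tracks number (4 vs. 12).
assert four_group == twelve_group
print(four_group, twelve_group)  # 1200.0 1200.0
```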
And after they heard that for a minute, the sound continued to play and now she showed, side by side, an array of four objects versus an array of 12 objects. And the babies tended to look at the array that corresponded in number to what they were hearing. Now, all of this gives us something to work with, but it raises a nasty problem. And the problem is, what are babies perceiving or understanding? Today we're not going to be asking, how can babies classify things? What do they respond to similarly? What do they respond to differently? We're going to be asking, what sense do they make of them? What are they representing? What is the content of the representations that they're forming in each of these cases? And these studies as I've just described them don't tell us. Let's take the case of the sphere versus the disk. When this study was first conducted, the author concluded that babies have depth perception, that they perceive three-dimensional solid objects. Is that a justifiable conclusion? AUDIENCE: Not necessarily. ELIZABETH SPELKE: Why not? AUDIENCE: Because they are not [INAUDIBLE]. ELIZABETH SPELKE: Yeah. OK. So when you present things that differ in depth, you're presenting a host of different visual features that for us as adults are cues to depth. The question is, are they cues to depth for the infant? And the fact that the infant is looking longer at something we would call a sphere than at something we would call a disk, doesn't tell us whether they're looking longer because they're thinking, "sphere," or "3D," or "solid," or something like that, or whether they're looking longer because they're seeing a more interesting pattern of motion as they move their head around, or because as they converge on one part of the array they're getting interesting differences in how in-focus different parts of it are, and so forth. All of the different cues to depth could-- what we want to know is, what's the basis of this preference? And the existence of the preference doesn't tell us that. Similarly for the cats, and similarly for this single isolated experiment that I gave you on number, right? Does this say anything whatsoever about number, or could there be some sensory variable where there's just more going on in a stream of 12 sounds and there's more going on in an array of 12 objects, and babies are matching more with more, independently of number? These studies in themselves don't tell us. In order to find out, what we need to do is take these methods and do systematic experiments. And these experiments work best under the following conditions. When you're studying a function that exists in adults and whose properties have been explored in adults in detail systematically, when you have a body of psychophysical data that you can rest on in your understanding of what's happening in adults, and you can then apply that to infants. So one example of that took as its point of-- this is work by Richard Held, a wonderful perception psychologist who worked at MIT. Still is active, actually. He's retired but still active. And he did these beautiful experiments that started with the sphere-versus-disk phenomenon. And first of all, he tried to take it apart and say, let's just focus on one cue today, OK? Binocular disparity at the basis of stereo vision. So he put stereo goggles on babies. These were babies ranging in age from birth to about four months, I think. He put stereo goggles on them and showed them, side by side, two arrays of stripes. 
In one of the arrays, the same image went to both eyes. In the other array, the edges of the stripes were offset in a way that leads an adult to see them as organized in depth-- some stripes in front of others. And he showed that infants looked longer at the array with the disparity-specified differences in depth than at the array without them. He did not conclude from that that they have depth perception, but it gave him a basis for doing a whole series of experiments that asked, in effect, do you see this effect under all and only the conditions in which adults have functional stereopsis? So he showed, for example, that if you rotate the array sideways 45 degrees so that you still have double images on the stereo side, but we wouldn't see depth because our eyes are side by side, not one above the other, the effect goes away. He varied the degree of disparity and showed that you only get this preference within this narrow range where we have functional stereopsis. And he was able to show the striking continuity between all of the properties of stereo vision in adults and in these infants. So that study and a bunch of others using other methods, I think, have resolved this question of when depth perception begins. It's beginning very early. Stereopsis comes in around two to three months of age. Other depth cues come in at birth. It's beginning very, very early. But it didn't come from single experiments. It came from systematic patterns of experiments. In the case of cats versus dogs, we don't really have a psychophysics of cat perception, but steps have been taken to try to get to what the basis is of infants' distinction between dogs and cats in those experiments. And interestingly, what's popped out are faces. Turns out, you can occlude the cat and the dog's whole bodies, and if you leave their faces, you get these effects. If you occlude their faces and leave their bodies, you mostly do not, unless you cheat and give other obvious features, like all the dogs are standing and all the cats are sitting, or something like that. But in the normal case, faces are coming out as an important ingredient of that distinction. In the case of abstract number, there's also a lot of work in adults on our ability to apprehend at a glance the approximate numerical value of sounds in a sequence or visual arrays. We've learned a lot about the conditions under which we can do that and the conditions under which we can't. That's not my topic for today, but Izard and her collaborators have been testing for all of those conditions in newborn infants. And so far, so good. It looks like there is a similar alignment between the patterns of-- the factors that influence infants' responses in those studies where they hear sounds and see arrays of objects and the factors that influence our abilities to apprehend approximate number. OK. So this gives us some good news and some bad news. The good news is that I think questions about the content of infants' perception and understanding of the world can be addressed. The bad news is that we can't do it very fast. You can't do it with a single silver-bullet experiment. You have to do it with a long and extensive pattern of research. In the past, research on infants has gone extremely slowly. Basically, the methods that we have allow you to ask each baby who comes into the lab maybe one, or if you're lucky, a couple of questions, but not more than that. So it takes a long time to do a single experiment. 
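(An editorial aside on what a single looking-time experiment involves: familiarization trials typically continue until an infant-controlled habituation criterion is met. A common convention in the literature, used here as an illustrative assumption rather than a description of any particular study above, is that looking on the three most recent trials falls below half of looking on the first three. A minimal sketch:)

```python
def habituated(looking_times_s: list[float]) -> bool:
    """One common infant-controlled criterion (illustrative): mean looking
    on the three most recent trials drops below 50% of the mean on the
    first three. Window sizes and thresholds vary across labs."""
    if len(looking_times_s) < 6:
        return False  # need two non-overlapping windows of three trials
    first_block = sum(looking_times_s[:3]) / 3.0
    last_block = sum(looking_times_s[-3:]) / 3.0
    return last_block < 0.5 * first_block

# Looking times (seconds) declining across familiarization trials:
trials = [28.0, 24.0, 21.0, 15.0, 9.0, 7.0]
print(habituated(trials))  # True -> move on to the test displays
```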
I do think, though, that this work is poised to accelerate dramatically and that we're poised to-- this is a good time to be thinking about infant cognition because I think we're soon going to be in a different world, where we can start asking these questions at a much more rapid pace. That's for at least two reasons, both of which, by the way, are being fostered by the Center for Brains, Minds and Machines and undertaken by people who are part of that center. One is, there are now efforts underway to be able to test infants on the web. These basic simple behavioral studies, you can assess looking time using the webcam in an iPad or a laptop, and you can test babies that way. And there's attempts to do that, which would make it possible to collect data doing the same kinds of experiments that have been done in the past, but much more quickly. Two, as Nancy already mentioned and Rebecca may talk about tomorrow, there are efforts underway to use functional brain imaging to get at not only what infants look at, but what regions of the brain are activated when they look at those things, which will give us a more specific signal of what infants are attending to and processing, someday, hopefully, in the near future. And we just had a retreat of CBMM, where there was a lot of brainstorming about new technologies to try to get more than just simple looking time out of young babies. So maybe some of that will work as well. But what I want to focus on today is that even this slow, plodding research has gone on for long enough at this point that I think we've learned something about what infants perceive and what they know. And I tried to put what I think we learned into two slides. Here's the first one. I think that very early in development, maybe in the newborn period, but anyway, before babies are starting to reach for things and move around on their own, they already have a set of functioning cognitive systems, each specific to a different domain. One is a system for representing objects and their motions, collisions, and other interactions. Another is a system for representing people as agents who act on objects, and in doing so, pursue goals and cause changes in the world. A third is a system for perceiving people as social beings who can communicate with, engage with other social beings and share mental states. And then three other systems that I won't talk about today. One is a system of number, which I think is being tapped in that first Izard experiment. And two systems capturing aspects of geometry, one supporting navigation of the sort that Matt Wilson studies, the other supporting visual form perception of the sort that IT and occipital cortex represent. I think each of these systems operates as a whole. In Josh's terms from last Friday, it's internally compositional. Infants don't just come equipped with a set of local facts about how objects behave, they come equipped with a set of more general rules or principles that allow them to deal with objects in novel situations and make productive inferences about their interactions and behavior. Each of these systems is partially distinct from the other systems. It's distinct in three ways. First, each of them operates on different information. It's elicited under different conditions. Second, it gives rise to different representations with different content. And third, most deeply, it answers different questions. So for example, we have two-- infants have two systems for reasoning about people, but each system is answering a different question. 
The agent system is answering the question, what is this guy's goal? What is he trying to accomplish? What changes is he effecting in the world? The social system is asking, who is this guy related to? Who is he connected to? Who is he communicating with? Each of these systems is limited, extremely limited relative to what we find in adults. Each captures only a tiny part of what we as adults know about objects or agents or social interactions. Each of them, I think, interestingly, is shared by other animals. I didn't expect that to be true when we started doing this research. But as far as we can see so far, it's hard to find anything that a young human infant can do that a non-human animal can't. And I'll give you examples of that, too. And finally-- and I won't talk about this much, unfortunately. I think each of these systems continues to function throughout life and supports the development of new systems of knowledge. So when we think thoughts that only humans think, we engage these fundamental systems that we've had since infancy and other animals share. I also think this research tells us something about how we do that. I think that in addition to having these basic early developing systems, we have a uniquely human capacity to productively combine information across these systems, and through those combinations, to construct new concepts. I think these new concepts tend to be abstract, and they underlie a set of very important later-developing systems of knowledge, including knowledge that allows us to form taxonomies of objects, of tools, of natural kinds like animals and plants, and to reason about their behavior, such that when we encounter some new thing, we already know a lot about the kind of thing that it is and can use that to infer many of its specific properties, and also to direct our learning very explicitly to fill in the gaps in our knowledge. Another is the systems of natural number and Euclidean geometry. Natural number, children seem to construct over the first three to five years of life. Euclidean geometry seems to take much longer, coming much, much later. Molly Dillon, who's also here, has been trying to work on understanding-- and so has Veronique Izard-- how children go from six years of age, where they seem absolutely clueless about the simplest properties of Euclidean geometry, to 12-year-olds who, whether they're in the Amazon and have never been to school, or studying geometry in school, seem to have a basic rudimentary understanding of points and lines and figures on the Euclidean plane. A third is a system of persons and mental states. And I won't talk about it much, but I'm only talking for the first half or so of this time, then Alia Martin's going to take over. And you'll touch on-- you'll get to some of those issues. Now, as Nancy said last week, I have this out-there hypothesis that I don't think anybody else in the world believes, but I still believe it. That this productive combinatorial capacity either is or is intimately tied to what's the most obvious cognitive difference between us and other animals, namely our faculty of natural language. In particular, I think that there are two general properties of natural language that make it an ideal medium for forming combinations of new concepts. One is that the words and the rules of-- well, three properties, actually. One is that the syntactic and semantic rules of natural languages are combinatorial and compositional. 
That is, if you learn the meanings of words and you learn how to combine them, you get the meanings of the expressions for free. You don't need to go out and learn what a brown cow is if you know what brown is and you know what a cow is. Second, the words and the rules of natural language apply across all domains. They're not restricted to one domain or another the way infants' other cognitive capacities seem to be. So if you learn how "cow" behaves in the expression "brown cow," and then you hear "brown ball," or something that a different domain of core knowledge would be capturing, you can immediately interpret that combination as well. And then the last thing about natural language that I think makes it so useful for cognitive development is that it's learned from other people. And other people talk about the things that they find useful to think about, right? Word frequency is a really good proxy for what the useful concepts out there are. So a child who has a very powerful combinatorial system that can create a huge set of concepts is going to have a search problem when they try to apply those concepts to the world. Something will happen in the world. And if they now have a million concepts that they could bring to bear, which one are they going to use? Are they to test them all out? Having too many concepts, too many innate concepts, would not necessarily be a blessing. But if you use language to guide you to the useful concepts, I think you'll do better. The ones people are going to talk about around you most frequently are going to be the ones that it's going to be most useful for you to be learning at that point. So let's go back to that first set of questions, which is what I want to be focusing on today. And as I said, I'll talk particularly about three domains where infants seem to develop knowledge quite rapidly over the course of infancy. And I'll spend most of my time on the first one, objects. So object cognition is really interesting and it seems to span this really big range. It seems to involve many different kinds of processes. If you're going to figure out what the objects are, what the bodies are in a scene, then you need segmentation abilities. You need to be able to take an array like this and break it down into units, figuring out what different parts of that array lie on the same object and what parts lie on different ones. So early mechanisms for doing that can participate in object representation. But also to perceive objects, arrays are cluttered and objects tend to be opaque. And when they are, it's never the case that all of the surfaces of one object are in view at the same time. And it's often the case that you're only seeing a little bit of any given object at a time. Yet somehow we're able to see this as a continuous table that's extending behind everything that's sitting on it, and even sort of as a continuous plate, a single plate that's partly-- that's on the table behind the vase, and so forth. So to represent objects, we've got to be able to take these visual fragments and put them together in the right sorts of ways. Something that's harder to show in a static image, but that of course is radically true about the world is that our perceptual encounters with objects are intermittent. We can look away and then look back, or an object can move out of view and then come back into view, yet what we experience is a world of persisting objects that are existing and moving on connected paths, whether we're looking at them or not. 
And finally, objects interact with other objects and we need to work out those interactions. And the working out that I'm interested in is not what this little boy is doing, but what his younger sister is doing as she's sitting in her infant seat and observing him acting on these towers and wondering what's going to happen next. OK? At least that's the problem on the table for today. OK, so a standard view for a very long time has been that different mechanisms solve these different aspects of the problem of representing objects. That segmentation depends on relatively low-level mechanisms. Completion and identity through time, it's going to depend on how much time we're talking about and how complicated the transformations are. They're sort of in the middle. And this is all about reasoning, about concepts that go beyond perception altogether, like the mass of an object, which we can't see directly, and so forth. And I kind of believed that that was true when we started doing this work. And because I did and wanted to know where the boundaries were of what infants could do, I started by working on these problems here. And that's what I'm going to talk about today. But let me flag at the outset that I no longer believe that the real representations of objects that organize infants' learning about the physical world are embodied in a set of diverse systems. I think there's a single system that's ultimately at work here. Of course it has multiple levels to it, including low-level edge detection, and so forth. But there's a single system at work that both tells us what's connected to what and where the boundaries of things are in arrays like this, how things continue where and when they're hidden, and how they interact with other things. That's one unitary system, and I'll try to show you what evidence supports that view, though, of course, jump in with questions or criticisms or alternative accounts. OK, so here's an intermediate case to start with. You present a-- this was studied a lot by the Belgian psychologist Albert Michotte back in the 1950s, I think-- '50s or early '60s. Take a triangle, present it behind an occluder, and ask babies, in effect, what do you see in that triangle? Do you see a connected object or do you see two separate visible fragments? We did these studies with four-month-olds because they're not yet reaching for things and manipulating objects. We used the fact that they tend to like to look at things that are new. So we presented this display repeatedly-- we, by the way, is Phil Kellman, now at UCLA and studying all this stuff in adults primarily, also studying mathematics now. Anyhow, so we presented displays like this repeatedly to babies until they got bored with them. And then we took the occluder away and in alternation, presented them with a complete triangle and with a triangle that had a gap in the center. And we reasoned that there were two possible outcomes of the study. Possibility one is that as empiricists and the then-very influential child psychologist-- developmental psychologist Jean Piaget argued, for a four-month-old infant who isn't yet reaching for things, the world is an array of visible fragments. So they will see this thing as ending at this edge where the occluder begins, and this display will look more similar to them than this display, so they'll be more interested in that one. 
There was also the theory from Gestalt psychologists and others that predicted the opposite, that there would be automatic completion processes that would lead any creature, whether they were experienced or not, to perceive the simpler arrangement, which is this one. Those, it seemed to us, were the only two options. Baby research is really fun because it can surprise you even when you think you've covered all the bases. Neither of those turned out to be true. What happened instead was that when we took the occluder away, you still saw an increase in looking both to the connected object and to the separate object, and those two increases were equal. Now, this could have been for an extremely boring reason. Maybe babies were only paying attention to the thing that was closest to them in the array. So we very quickly tested for that in the following way. Instead of contrasting an array with a small gap to an array that had it filled in, we contrasted an array with a small gap to an array with a larger gap, too large to have fit behind the occluder. And there, babies looked longer at the array with the larger gap. So we know it's not that they're not seeing this back form and its visible surfaces, but they seem to be uncommitted as to whether those surfaces are connected behind the occluder or not. They don't see them as ending where the occluder begins, but they don't clearly see them as connected, either. And we showed that this was quite generally true, both for simpler arrays and for more complicated-- well, for richer ones, like a sphere. We did this with a bunch of different arrays. And under these conditions, where the arrays are stationary, that's what we found. But there was one condition where we got a different finding, and that's when we took one of these arrays and moved it behind the occluder, never moving it enough to bring the center into view, but moving it enough such that the top and bottom were moving together. And when we did that, now babies looked longer at the display that had the gap. That raised the question, why is motion having this effect? And the immediate possibility, we thought, is motion is calling their attention to the rod, so they're attending to it more than they otherwise would, and it's leading them to see its other properties, like the alignment of its edges. So to test that, we gave them misaligned objects differing in color, differing in texture. All of the edges-- none of the edges were aligned with each other. If motion was just calling attention to alignment, it shouldn't do that in this case. But in fact, we found that after getting bored with that, infants expected something like this, not something like that. They looked longer at the display with the gap. So it looks like the motion is actually providing the information for the connectedness, and the alignment is not playing much of a role at all. Now, what could be going on here? This is the kind of thing I think that Josh likes to call a suspicious coincidence, right? That an infant is looking at this array, and isn't it odd that we're seeing this-- I'm seeing the same pattern of motion below the occluder as I'm seeing above it? Now that could be two separate objects that just happen to be moving together, but that would be rather unlikely. You're much more likely to see a pattern like that if in fact there's a connection between them and it's just one object that's in motion. I think that's probably the right way to think about what's going on in these experiments. 
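(To make the suspicious-coincidence intuition concrete, here is one way to cash it out as a likelihood ratio. This is an editorial gloss in the spirit of the argument above, not a model from the talk, and all of the probabilities are illustrative assumptions:)

```python
# Common motion of the two visible ends is nearly guaranteed if they are
# parts of one rigid object, but would be a lucky accident for two
# independent objects. All numbers below are illustrative assumptions.

p_common_given_one = 0.95   # one connected object: ends must move together
p_common_given_two = 0.05   # two objects: identical motion is a coincidence
prior_one = 0.5             # indifferent prior over the two hypotheses

likelihood_ratio = p_common_given_one / p_common_given_two  # 19.0

posterior_one = (p_common_given_one * prior_one) / (
    p_common_given_one * prior_one + p_common_given_two * (1 - prior_one)
)
print(round(likelihood_ratio, 1), round(posterior_one, 2))  # 19.0 0.95
```

On this gloss, the finding that edge alignment does not drive completion amounts to saying that, for the infant, alignment carries a likelihood ratio near 1, while common motion carries a large one.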
But if it is, notice that not all coincidences that are suspicious for us are suspicious for infants. For us, it's a suspicious coincidence that this edge is aligned with that edge. For infants, it's not. I think this is a case where we can see infants can be useful for thinking about our own cognitive abilities because they seem to share some of our picture of the world, but not all of our picture of the world. And that can be a hint as to how that picture gets put together and how it's organized. So what kind of motion? We've tried a bunch of different ones. One of them is vertical motion. That's interesting because it's also a rigid displacement, as is motion in depth. They're both rigid displacements in three-dimensional space. Actually, all three of these are. But in this case, you don't get any side-to-side changes in the visual field. I think I animated this. Yeah. So this is kind of what the baby is seeing. By the way, all of these studies were done with real 3D objects and they had textures on them, and so forth. They've also all since been replicated in other labs using computer animated displays, which we didn't have-- which weren't available back in the day. And you get the same result. So I'm just doing cartoon versions of them here, but actually babies showed these effects across a range of different displays. So there's vertical motion. Here is motion in depth. Oh, and by the way, we're not restraining babies' heads, so what's at their eye is not going to be anything near as simple and uniform as what I'm showing here. And then rotational motion, like that, around the midpoint. And what we found is that babies used both vertical motion and motion in depth about as well as they used horizontal motion to perceive the connectedness of the object. They did not use rotary motion. So I know there's a lot of interest and projects focused on perceptual invariance. And I think there's an interesting puzzle here, and it's one that Molly is very interested in, in the work that she's doing on geometry. These are all rigid motions. But somehow, rotation seems to be a whole lot harder for young intelligent beings to wrap their heads around than translation is-- including translation in depth or vertical translation. There's something hard about orientation changes. And in fact, I think they remain hard for us as adults. Think of how the shape of a square seems to change if you rotate it 45 degrees so it's a diamond. It's no longer obvious that it's got four right angles. There's something about orientation that's harder than these other things. And I think we were seeing that here. When a baby is sitting still and a rod is moving behind an occluder, it's moving both relative to the baby and relative to the surroundings. Which of those things matters to the baby? So Phil Kellman did the ambitious experiment of putting a baby in a movable chair and moving the baby back and forth. In one condition, the baby is looking at a stationary rod, but his own motion is such that if you put a camera where the baby's head is, you'll see the image of that rod moving back and forth behind the block. In the other condition, the motion of the rod is tied to the motion of the baby, so it's always staying in the middle of the baby's visual field, but it's actually moving through the array. And it turned out that it's-- OK, so whether the baby was still or moving didn't matter at all. So if the object is-- sorry, I did these wrong. 
This should be still, that should be moving. If the object is still, and whether the baby is still or moving, it doesn't work. If the object is moving-- the diagram is right. It was just my label that's wrong. If the object is moving, it doesn't matter whether it's being displaced in the infant's visual field or not. It's seen as moving. Now, this isn't magic. The studies are not being done in a dark room with a single luminous object where the baby wouldn't be able to tell. There's lots of surround-- it's in a puppet stage and that is stationary. So there's lots of information for the object moving relative to its surroundings in all of the conditions of this study, and I'm sure that's critical. But for the point of view of the infant's connecting of the visible ends of the object, the question he's trying to answer is, is that thing moving? Not, am I experiencing movement in this changing scene? Retinal movement. So if it's the case that-- what those last findings suggest is that the input representations to the system that's forming objects out of arrays of visual surfaces already capture a lot of the 3D spatial structure of the world. This is a relatively late process. And it allows us to ask, is it even specific to vision? Would we see the same process at work if we presented babies with the task of asking, am I feeling-- are two things that are moving in the world connected? Or are they not, in areas that I'm not perceiving? We can ask that in other modalities. So we did a series of studies-- this is with Arlette Streri. We did a series of studies looking at perception of objects by active touch. By taking four-month-old babies and putting a bib over them. Now I said they can't reach for objects, but if you put a ring in a baby's hand, even a newborn's hand, they'll grasp it. So we put rings in their two hands. And in one condition, the rings were rigidly attached, although the array was set up so that they couldn't actually feel that attachment and they couldn't see anything, about the object, anyway, because they had the screen blocking them. But as they moved one, the other would move rigidly with it. In the other condition, the two were unconnected, so they would move independently. And after babies explored that for-- over a series of trials, and as in the other studies, we then presented visual arrays in alternation where the two rings were connected or not. And found that in the condition where they had moved rigidly together, infants extrapolated a connection and looked longer at the arrays that were not connected. In the case where they moved independently, they did the opposite. Now, that doesn't tell us that there is a single system at work here. It could be that there are, as Shimon, I believe, was saying yesterday afternoon, there are redundancies in the system. You have different systems that are capturing the same property. That's still true. But here's a reason to-- we went on to ask not only what infants can do, but what they can't do. And I think it gives us reason to take seriously the possibility that there's actually a single system at work here. What we did-- I haven't pictured it here-- is, instead of varying the motion of the things, we did vary their motion, but we also varied their other properties. So their rigidity. We contrasted a ring that was made out of wood with a ring that was made out of some kind of spongy, foam-rubbery material-- their shape, their surface texture. Asking, do infants take account of those properties in extrapolating a connection? 
Are they more likely to think two things are connected to each other if they're both made of foam rubber than if one of them is made of foam rubber and the other is made of wood? We never found any effect of those properties, just as we didn't in the visual case. So we see not only the same abilities, but the same limits. And while that's not conclusive, I think it adds weight to the idea that what we could be studying here-- we started in the visual modality. But what we could be studying here is something that's more general and more abstract. Basic notions about how objects behave that apply not only when you're looking at things, but when you're actively-- when you're feeling them, actively manipulating them, exploring them in other modalities. So I put a question mark because it's not absolutely conclusive, but I think we should take seriously that possibility. OK. Only motion. Is motion the only thing that works? Or will other changes work, so if an object changes in color? We created a particularly exciting color change by embedding colored lights within a glass rod so it's flashing on and off. Succeeded in eliciting very high interest in that array. Babies looked at it for a long time, but only the motion array was seen as connected behind the occluder. So it looks like not all changes elicit this perception. It's an open question what the class of effective changes is. Maybe it's broader than just motions, but it doesn't seem like all changes work. Finally, is motion the only variable that influences infants' perception of-- the only property of surfaces that influences infants' perception of objects? The answer to that seems to be no. So we studied that in a different situation for which this is just a very impoverished cartoon. We took two block-like objects-- of different colors and textures in some studies, same color and texture in others. It didn't matter. And put one on top of the other and either presented them moving together or moving separately. And then tested whether babies represented them as connected in either of two ways. Some of the studies were done with babies who were old enough to reach. And then we could ask, are they reaching for it as if it were a single body or as if there were two distinct bodies there? I could give you more information about that if you're interested. The other was with looking time, where we had a hand come out and grasp the top of the top object and lift it. And the question is, what should come with it? Will the bottom object come with it as well or will the top object on its own? When the things had previously moved together, they expected it all to move together. When they'd moved separately, they expected only the top object would move by itself. And when there was no motion at all, findings vary somewhat from one lab to another, but mostly they tend to be ambiguous in the case where there's no motion. So there it looks like motion is doing all the work. But if you make one simple change to this array that you can't do in the occlusion studies, you simply change the size of this object and present it such that there's a gap between the two objects. And you can either do it with this guy floating magically in midair, or you can do it with two objects side by side, both stably supported by a surface. If there's a visible gap between them, the motion no longer matters. They will be treated as two distinct objects, no matter what. 
So what I think is going on here is that babies have a system that's seeking to find the connected, the solid connected bodies. The bodies that are internally connected and will remain so over motion. And that's what's leading them to see these patterns of relative motion or these visible gaps as indicating a place where one object ends and the next object begins. I did want to get on to the problem of tracking objects over time, perceiving not what's connected to what over space, but what's connected to what over time. Under what conditions are the thing that I'm seeing now the same thing that I was seeing at some place or time in the past? So conceptually, it feels like continuity of motion over time is related to connectedness of motion over space. And it's been tested for in a variety of ways. Here's one set of studies that we did, where we have an object that moves behind a single screen, and then either is-- and it starts here, ends up here. And either is seen to move between the two screens or is not. And we ask babies in effect, how many objects do they think are in this display, by boring half the babies with this, half the babies with that, and then presenting them in alternation with arrays of one versus two objects, neither of which ever passes through the center, but the arrays differ in number. In the one case, it's either moving over here or it's moving over there on different trials. And what we find is that in this case, they expect to see one object and look longer at two. In this case, they expect to see two objects and look somewhat longer at one. There's actually an overall preference for looking at two, but you get that interaction and there's a slight preference for looking at one in that condition. Providing evidence, I think, that babies are tracking objects over time by analyzing information for the continuity of-- or discontinuity of their object motion. Now, Lisa Feigenson has conducted stronger tests of this, I think, with somewhat older babies. When babies get older and they do more, you can do stronger tests. So these are babies who are old enough to crawl, old enough to eat, and old enough to like graham crackers. So she puts the baby back here, and in one set of studies, she takes a single graham cracker, puts it in one box, and then takes two graham crackers, one at a time, and puts them in the other box. And then the baby, who's being restrained by a parent, is let loose. And the question is, which box will they go to? And they go to the box with the two graham crackers. My favorite study, though, in this whole series was one that she and Susan Carey ran as a boring control condition. I think it's the most interesting of the findings, though. In the boring control condition, they were worried about the fact that maybe babies are going to the box with two because they see a hand around that box for a longer period of time, doing more interesting stuff. So they did the following boring control. The two condition was the same as before. So a hand comes out with a single graham cracker, puts it in the box, comes out empty, takes a second graham cracker, returns with a second graham cracker, puts it in the box. In the other condition, the hand comes out with one graham cracker, puts it in the box, comes out again with the graham cracker, and then goes back into the box with that graham cracker. So you've got more graham cracker sightings on the left. You've got a same amount of hand activity on the two sides, but the babies go to the box with two. 
They're tracking the graham crackers, not the graham cracker visual encounters. They're tracking a continuous object over time. Finally, objects. Scenes don't usually just contain a single object that's either continuously visible or not, or connected or not. They contain multiple objects and those objects interact with each other. Shimon talked yesterday afternoon about the evidence that babies are sensitive to these interactions, at least down to about six months of age in the conditions he was talking about. In slightly different conditions, the sensitivity has been shown as young as three months of age. Basically, here's a paradigm that will show that. You have a single object that's moving toward a screen. Another object is stationary behind the screen. But at the right time, the time at which the first object, if it continued moving at the same rate, would contact that second object, the second object starts to move in the same direction. And now, after seeing that repeatedly, the screen is taken away and babies either see the first object contacting the second and the second one immediately starting to move, or they see the first object stopping short of the second and then, after an appropriate gap in time, the second object starting to move. And they look longer at this display, providing evidence that they inferred that the first object contacted the second at the point at which it started to move. Interestingly, as in the case of the occluded object studies, if instead of having the second object move, you have it change color and make a sound, so it undergoes a change in state, but no motion, the babies no longer infer contact in this condition. They are attentive to those events. They watch them a lot, but they're uncommitted as to whether that first object-- this is work of Paul Muentener and Susan Carey relatively recently. It wasn't done with cylinders, it was done with a toy car that hits a block, I think, or doesn't hit the block. They're uncommitted as to whether the car contacted the second object or not, if the second object changes state but doesn't move. Returning to the case where they succeed-- namely, this thing went behind a screen, the other thing started to move, infants inferred that they came into contact-- that begins to suggest that maybe babies have some notion that objects are solid, that two things can't be in the same place at the same time, that when one moving thing hits another thing, one or the other of them or both, their motion has to change, because they're not going to simply interpenetrate each other. And Josh already very briefly pointed to some very old studies suggesting that babies make some assumption that objects are solid as early as-- I think in the earliest studies done with babies it's about two and a half months of age. These are the studies that Renee Baillargeon did that start with simply a screen, a flat screen, rotating on a table, rotating 180 degrees back and forth on a table. Then she places an object behind this wall. The screen is lying on the table with its back edge right here at the middle. She places an object behind it, and then the screen starts to rotate up around the back edge, and the question to the infants in effect is, what should happen to that screen? 
And the two options she presents to them are: it either gets to the point where it would contact this object, which is now fully out of view, and stops, and then returns to its first position, which is a novel motion, but consistent with the existence, location, and solidity of that hidden object. Or it continues merrily on its way in the same pattern of rotation as before. When it does that, of course, the screen is going to come back down flat and there's not going to be any object there. If there had been an object, it would have had to be compressed. Or what I think actually went on in those studies, it was quickly and surreptitiously knocked out of the way. And infants looked less at this event than at this one-- this one, sorry-- providing some evidence that they were representing these objects, both as existing when they were out of sight, and as solid. So this is just a summary, not a claim about knowledge development. I'm attempting to characterize here, with motion over just one dimension of space and time, what infants seem to represent about the behavior of objects. Namely that each object moves on a continuous path through space and over time. That it moves cohesively. It doesn't split into pieces as it's moving. So if you've seen something move like this, then you find it unlikely that if this were lifted, it would go on its own, and you look longer at that. There is no merging, where two things that previously moved independently now move together. So after looking at this, it would also be unlikely, if you lifted this, for the whole thing to jump up at once. They move without gaps. They move without intersecting other objects on their paths of motion, such that two things would be in the same place at the same time. And they move on contact with other objects and not at a distance from them. So that's just a summary of what I think these studies show about four-month-old infants, not newborns. They also show that infants' perception of objects is really limited. There are all these situations under which we see unitary, connected, bounded objects when they don't. And interestingly, research by Fei Xu and Susan Carey shows that even when you present really quite surprisingly old infants, 10-month-olds, with objects that should be really familiar to them, like toy ducks and trucks, they don't assume that these two objects will be distinct if they undergo no common motion. If they're simply presented stationary, the babies seem uncommitted as to whether there's a boundary between them or not. So they're using very limited information to be making these basic-- building these basic representations of what's connected to what, where one thing ends and the next begins. Now, this changes very abruptly between about 10 and 12 months of age. They start treating those as two separate objects, whether they're moving together or stationary or not. Now, infants' tracking of objects shows very similar limits. So I told you they succeed in perceiving-- representing two distinct objects in a situation like this. But up until and including 10 months of age, they fail in this situation. If a truck comes out on one side of a single large screen, so you're not getting information for the motion behind that screen, and a duck comes out on the other side, and you ask babies, in effect, how many things are there? One or two? By removing the screen and alternately presenting those two possibilities, they are uncommitted between those two alternatives. 
In this situation as in the previous one, there's this very abrupt change between about 10 and 12 months of age. And I can't resist saying, even though I'm way over time, that Fei Xu has shown that that change is interestingly related to the child's developing mastery of expressions that name kinds of objects. So she's been able to show, for example, that if you simply ask for individual infants, when did they start succeeding here, their success is predicted by their vocabulary as reported by parents. She's also shown that if you take a younger infant who would be slated-- destined to fail this study, but as you bring objects out on the two sides, either familiar ones or novel ones, starting at about nine months of age, if you name them and you give them distinct object names, they now infer two objects. And in fact, they'll even do it if the two things you bring out from behind a single wide screen look the same. If you bring one thing out and say, look, a blicket, and put it back in, and then bring something out and say, look, a toma, even if it looks the same, they'll infer two objects. So there seems to be this change that's occurring at the end of the first year quite dramatically that's overcoming this basic limit that we're seeing earlier on. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_1_Leyla_Isik_Introduction_to_Visual_Neuroscience.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LEYLA ISIK: So I'm going to go over some very basic neuroscience, mostly terminology, just for people who have very little to no neuroscience background, so that when you hear the rest of the talks you won't wonder what it means that they're talking about spiking activity, or what fMRI is measuring. That's the level at which this is pitched. So my disclaimers are, one, like I said, that it's very basic, and two, that it will be CBMM- and vision-centric, because the goal is to get you ready for the rest of this course. So please don't think that this is, or that I think this is, an exhaustive summary of basic neuroscience. Just to give you a brief outline: first we'll talk about the basics of neurons and their firing, then basic brain anatomy, then how people measure neural activity in the brain, both invasively and non-invasively, and then a brief rundown of the visual system. This is a neuron. It has dendrites and axons, and the signal is propagated along the axon, and the axon terminates on another cell. When one neuron terminates on another neuron, they form what's called a synapse. So here are some pictures-- sorry, it's hard to see on the projectors-- of neurons synapsing on other neurons. And that is how neurons communicate: they send electrical activity down their axon, and it reaches the next cell. The synapse is both an electrical and a chemical phenomenon. We're not going to get into the details of that, but if you're interested, I encourage you to look it up on Wikipedia. Neurons have an ion gradient across them, so there is a different concentration of certain types of ions inside and outside of the cell, and there are ion channels along the cell. These ion channels are voltage-gated, so what happens is, when these ion channels open, the voltage inside the cell changes, and that eventually leads a neuron to fire what is known as an action potential. It's possible for a neuron's voltage to change a little bit, and that is known as potentiation. They can get either excitatory or inhibitory potentiation, meaning either higher or lower activity, as shown here. And then, once it reaches a certain threshold, they fire what's known as an action potential. Action potentials are all-or-none firing, and that's what is referred to as neural firing or neural spiking-- this actual spike in the voltage. That is all you need to know: when people are talking about neural spiking, they're talking about that actual action potential. But oftentimes we're not measuring things at the level of single spikes, so I'll get into, in a little bit, what people are actually measuring and what they're talking about when they're talking about different recording techniques. All right, this is some basic brain anatomy. This is a slice of the cortex, and just to orient you-- I'm going to put these all online just so you know the terminology-- there are different lobes. The occipital lobe is in the back; that's where early visual cortex is. Then there are the temporal lobe, parietal lobe, and frontal lobe. And if people are talking about the inferior part of the brain, they mean the bottom; superior, the top; et cetera. And
this is a rough layout of where the different sensory and motor cortices land on the cortex-- can people see that kind of OK? Nancy's going to give a really nice introduction to the functional specialization of the brain; these are just some basic anatomical terms to familiarize you all. All right, so, neural recording. When we're talking about invasive neural recordings, the first type that we'll talk about is electrophysiology-- single- and multi-unit recordings. What that means is that somebody actually sticks an electrode into the brain of an animal and records its neural activity. This can either be a single-unit recording, which means you are recording from a single neuron, by sticking the electrode inside, on top of, or very close to the neuron-- close enough that you're only picking up the changes in electrical activity from that one neuron. But what's more commonly measured now is multi-unit activity. That means that you stick an electrode in the brain and it's picking up activity from a bunch of neurons around it. You can either take that data and get what's known as the local field potential-- that is, the changes in potential in general in that whole group of neurons, and people often analyze that data-- or you can do some sort of pre-processing to figure out how many neural spikes you're getting, which is typically trying to look at the neural firing. So from that activity you can either get the spiking pattern or what people refer to as the local field potential. And then, you've probably heard-- you will hear-- a lot about ECoG data from Gabriel and others this time. This is really exciting: it's the opportunity to record from inside the human brain, from patients who have pharmacologically intractable epilepsy. Sorry, this is kind of gross, but when people are having seizures, if surgeons want to resect that area, they first have to map very carefully where the seizures are coming from and what else is around there, to make sure that they are helping the patient. To do that, they place a grid of electrodes on the surface of the subject's cortex and then leave it there for several days, up to a week, while they do different types of mapping in that area. So this provides the opportunity for scientists like Gabriel to then go and test the neural activity in those humans, which is a very rare opportunity-- to be able to record invasively from humans. And again, since we're on the surface of the brain, this is not single-unit activity; you get a signal more similar to the LFP-type signal. And then, what I and many other people in the center do is neural imaging. This is non-invasive, often in humans, although people also do it in animals as well. The main types you'll probably hear about at this course are MEG and EEG, which are very similar, and functional MRI. So, when many neurons fire synchronously-- the neurons in your cortex have the nice property that they're all aligned in the same orientation, so when they fire at the same time you actually get a weak electrical current, and that electrical current causes a change in both the electric and magnetic fields around it. EEG and MEG measure the changes in the electric (E) and magnetic (M) fields from those neural firings. But it's usually on the order of tens of millions of neurons that need to be firing, so we're now at a much larger scale than we were with the invasive recordings. And because the neurons all need
to be firing at the same time-- usually they're not all firing an action potential, because, if you remember, that was just this very brief spike-- you're just measuring the changes in the potentiation of that whole group of cortical neurons. So this is a very coarse measure, but it is a direct measure of neural firing, so it has very good temporal resolution. So the question was about-- I don't know if everyone heard-- the temporal scale of MEG. It's millisecond temporal resolution; I think you can maybe even get higher. fMRI, on the other hand, usually has a temporal resolution of seconds, a couple of seconds, but the spatial resolution of fMRI is on the order of millimeters, whereas it's more like centimeters in MEG. And actually, the problem in MEG and EEG is in what you're recording from. Here's a picture of the MEG scanner: the patient-- or, sorry, subject-- sits in it, and there's this helmet that goes around their head, and that helmet has 306 sensors. If it was an EEG, they would be wearing a cap-- you've probably seen an EEG cap before-- and the electrodes would be directly contacting their scalp. So you're measuring activity from 100 to 306 sensors, and often you're trying to estimate the activity in the cortex underneath, which is on the order of 10,000 sources. So it's a very ill-posed problem, meaning that there's not a unique solution to go from sensors to cortex. And that's why they say that the spatial scale is so poor: it's not even a well-defined problem, so it's hard to even know where the activity is originating from. That's a very active area of research, but for now you can think of it as being on the order of centimeters. The other main type of non-invasive neural imaging we'll talk about is functional MRI. Here's a picture of an fMRI scanner: subjects lie in it, and often, if we're doing a visual task, they look at stimuli on a mirror that reflects from a screen where we're presenting the stimuli. fMRI measures the changes in blood flow that happen when neurons fire, and as a result this is not a direct measure of the actual neural firing. It has a longer latency, because it takes time for the blood flow effects to occur, and that's why its temporal scale is more like a couple of seconds, but it has quite good spatial resolution. There's also structural MRI-- if any of you have ever been injured, you may have had an MRI-- and that doesn't measure the blood flow; it measures the actual structures underneath. Often people will do an MRI and a functional MRI and co-register the two, so you have a very precise anatomical image that you can then put the brain activity on. OK, so I've gone through this a bit already: invasive electrophysiology is the highest-resolution data, both spatially and temporally, that most neuroscientists collect, but it has some disadvantages-- one, that it's invasive, so it's hard to test questions in humans and just more difficult in general, and two, you're limited by brain coverage, because you can only stick a grid or an electrode in a couple of brain regions at once, so you really can't get information from across the whole brain at this resolution with the technologies we currently have. fMRI, on the other hand, has broad coverage and good spatial resolution but lower temporal resolution, and EEG and MEG have high temporal resolution and broad brain coverage but low spatial information.
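To make that sensors-to-sources problem concrete, here is a minimal sketch-- not from the tutorial, and using a random stand-in for the real lead-field matrix that head models provide-- of why the mapping is ill-posed, along with one standard workaround, a regularized minimum-norm estimate:

```python
import numpy as np

# A toy demonstration that MEG/EEG source localization is ill-posed:
# ~306 sensors, ~10,000 cortical sources, so y = L x has no unique x.
rng = np.random.default_rng(0)
n_sensors, n_sources = 306, 10_000

# Hypothetical forward ("lead-field") matrix; real ones come from head models.
L = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[rng.choice(n_sources, 5, replace=False)] = 1.0   # a few active sources
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)  # noisy sensor readings

# Tikhonov-regularized minimum-norm estimate: among the infinitely many
# source patterns consistent with the sensors, pick the smallest-energy one.
lam = 1.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(x_hat.shape)   # (10000,) -- one of many possible source estimates
```

With far more sources than sensors, infinitely many source patterns reproduce the measurements; the estimate above just picks the smallest-energy one, which is one reason the effective spatial resolution stays coarse.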
All right, so, a bit about visual processing in the brain. This is a diagram-- can you see the colors OK? A little, sorry-- of roughly what people think of as visual cortex. The blue in the back is primary visual cortex, or V1; that's the earliest cortical stage, where visual signals originate. And then there is what's known as the ventral stream, which is often called the "what" pathway, where people roughly believe object recognition occurs, and the dorsal stream, which is often known as the "where" pathway and is thought to be more implicated in spatial information. However, this is an extreme oversimplification. I think Tommy put up this wiring diagram the other day. This is still a simplification, but a more realistic box diagram of all the different visual regions-- each box represents a different visual region. You can see that there are connections between all of them, including between the ventral and dorsal streams. And while we roughly think of it as feed-forward, which means that the output from one layer serves as input to the next, often there are feedback connections, meaning that information can flow back between areas. That's why it's been so challenging to probe with physiology. OK, so like I said, there are many layers, and they're thought to be roughly organized hierarchically. At the first level, primary visual cortex, you have cells that respond to oriented lines and edges. So a cell will-- I'll show an example of this-- fire for stimuli that it sees that are in a certain orientation and a certain place, and that is known as the cell's receptive field. So it's often thought of as an edge detector; it's very analogous to a lot of edge detection algorithms in computer vision, for example. But then, at what's thought of as the top layer of the ventral stream, inferior temporal cortex, cells fire in response to whole objects. And it's not just a specific orientation that they like: they will fire whether they see this object at different positions, and they also have some tolerance to viewpoint and scale as well. OK, so a lot of what we know about the visual system stems from Hubel and Wiesel's seminal work in the 1960s, recording from cells in cat V1. This is the stimulus that they're showing to the cat-- it's an anesthetized cat-- and they're recording, and you'll hear a popping; those pops are the neural activity that they're recording. They're recording from a single cell right now, so you can hear, any time they present that light bar in that specific position, the cell fires, and then, as soon as they move it out of that position, the cell stops firing. So that specific cell really likes this bar in this orientation, and they call this a simple cell. Maybe I can fast-forward a little bit. They also showed-- OK-- they showed that if you rotate it, it doesn't fire at all. And then they found that there were these other types of cells-- this is maybe not the clip you want-- there are other cells that fire not only to that specific position but to slight shifts in that position as well. So it seemed like those cells formed an aggregate over the simple cells, and they called those cells complex cells. And then people did similar things in, mostly, macaque IT, and they found that, in contrast to simple lines and edges, cells here fired in response to hands. This is showing the cell's response here-- this is the number of spikes over time-- so it fires a lot to hands, and it fires to that hand
no matter what position you show it in, but it doesn't like these kinds of other, more simple objects, and this one is not selective for faces. So in IT there are cells that are selective for what you would think of as very high-level objects, and they're tolerant to changes in those objects. People have done many more sophisticated studies of this. This is an example from Gabriel, Jim DiCarlo, and Chou Hung, where they showed, using neural decoding-- so, applying a machine learning algorithm to the output of many cells-- that these cells were, again, very specific for certain objects but invariant to different transformations. In particular, here they showed this monkey face at different sizes, and they showed that there was information present in the population of neurons for this specific monkey face regardless of what size you showed it at. So it's often thought that as you move along the visual hierarchy, cells become more selective, meaning they like more specific objects, and more invariant, meaning more tolerant to changes in position and other transformations. And so the other thing I wanted to talk about was hierarchical, feed-forward computational models of the visual system, because Tommy mentioned this briefly, and I think it'll tie into a lot of the computer vision work you'll hear about. These are inspired by Hubel and Wiesel's findings in visual cortex. I'm going to talk about the HMAX model, which is the model developed by Tommy and others in his lab-- a simpler, more biologically faithful model-- but this sort of architecture is also true of the deep learning systems that you've heard a lot about recently and that have had a lot of success in computer vision challenges. You have an input image, and you can then have a set of simple cells-- again, these are inspired by Hubel and Wiesel's findings, so they are oriented lines and edges. This cell will fire if you have an edge that's oriented like this at that part of the image, so again, it's just a basic edge detector. These perform template matching between their template, which is in this case an oriented bar, and the input image, to build up selectivity. And then there are complex cells, and these complex cells pool, or take a local aggregate measure, to build up invariance. What that means is, if you have, say, this red cell here, this complex cell would look at these four simple cells, so you are now selective to that oriented line not just at this position but at all of these positions, and that gives you some tolerance to changes in position. So you'd be able to recognize the same object whether this feature was presented at this corner or anywhere in a local area. And the way you do that is you take a max over the responses of all those input cells. Then you can repeat this for many layers-- it's essentially the same thing as a multi-layer convolutional neural network-- and at the end, in this HMAX model, you take a global max over all scales and positions of all these more complex features, so that you can respond to them regardless of where in the image and how large they're presented. |
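As a rough illustration of the alternating S- and C-layer idea just described-- template matching for selectivity, local max pooling for invariance-- here is a minimal sketch. The two tiny edge templates and all the sizes are made up for illustration; this is not the actual HMAX code:

```python
import numpy as np

def simple_cells(image, templates):
    """S-layer: template matching. Slide each small oriented-edge template
    over the image; a high response means the patch matches the template."""
    th, tw = templates[0].shape
    h, w = image.shape
    out = np.zeros((len(templates), h - th + 1, w - tw + 1))
    for k, t in enumerate(templates):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + th, j:j + tw] * t)
    return out

def complex_cells(s_maps, pool=4):
    """C-layer: local max pooling over position, so the response tolerates
    small shifts of the preferred feature."""
    k, h, w = s_maps.shape
    out = np.zeros((k, h // pool, w // pool))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = s_maps[:, i * pool:(i + 1) * pool, j * pool:(j + 1) * pool]
            out[:, i, j] = patch.max(axis=(1, 2))
    return out

# Two made-up oriented-bar templates: a vertical and a horizontal edge.
vert = np.array([[-1.0, 1.0], [-1.0, 1.0]])
templates = [vert, vert.T]
image = np.random.default_rng(1).random((32, 32))
c1 = complex_cells(simple_cells(image, templates))
print(c1.shape)   # (2, 7, 7): selective responses, tolerant to position
```

Stacking these two operations is exactly the convolution-then-pool pattern of convolutional networks, which is why the lecture treats the two families of models together.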
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_22_Josh_Tenenbaum_Computational_Cognitive_Science_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOSH TENENBAUM: We're going to-- I'm just going to give a bunch of examples of things that we in our field have done. Most of them are things that I've played some role in. Maybe it was a thesis project of a student. But they're meant to be representative of a broader set of things that many people have been working on, developing this toolkit. And we're going to start from the beginning in a sense-- just some very simple things that we did to try to look at ways in which probabilistic, generative models can inform people's basic cognitive processes. And then build up to more interesting kinds of symbolically structured models, hierarchical models, and ultimately to these probabilistic programs for common sense. So when I say a lot of people have been doing this, I mean here's just a small number of these people. Every year or two, I try to update this slide. But it's very much historically dated with people that I knew when I was in grad school basically. There's a lot of really great work by younger people whose names maybe haven't appeared on this slide. So those dot dot dots are extremely serious. And a lot of the best stuff is not included on here. But in the last couple of decades, across basically all the different areas of cognitive science that cover basically all the different things that cognition does, there's been great progress building serious mathematical-- what we could call reverse engineering-- models, in the sense that they are quantitative models of human cognition, but they are phrased in the terms of engineering, the same things you would use to build a robot to do these things, at least in principle. And it's been developing this toolkit of probabilistic generative models. I want to start off by telling you a little bit about some work that I did together with Tom Griffiths. So Tom is now a senior faculty member at Berkeley, one of the leaders in this field-- as well as a leading person in machine learning, actually. One of the great things that he's done is to take inspiration from human learning and develop fundamentally new kinds of probabilistic models, in non-parametric Bayes in particular, inspired by human learning. But when Tom was a grad student, we worked together. He was my first student. We're almost the same age. So at this point, we're more like senior colleagues than student and advisor. But I'll tell you about some work we did back when he was a student, and I was just starting off. And we were both together trying to tackle this problem and trying to see, OK, what are the prospects for understanding even very basic cognitive intuitions, like senses of similarity or the most basic kinds of causal discovery intuitions like we were talking about before, using some kind of idea of probabilistic inference in a generative model? And at the time-- remember in the introduction I was talking about how there's been this back and forth discourse over the decades of people saying, yeah, rah rah, statistics, and, statistics, those are trivial and uninteresting?
And at the time we started to do this, at least in cognitive psychology, the idea that cognition could be seen as some kind of sophisticated statistical inference was very much not a popular idea. But we thought that it was fundamentally right in some ways. And it was at the time-- again, this was work we were doing in the early 2000s when it was very clear in machine learning and AI already how transformative these ideas were in building intelligent machines or starting to build intelligent machines. So it seemed clear to us that at least it was a good hypothesis worth exploring and taking much more seriously than psychologists had much before that. That this also could describe basic aspects of human thinking. So I'll give you a couple examples of what we did here. Here's a simple kind of causal inference from coincidences, much like what you saw going on in the video game. There's no time in this. It's really mostly just space, or maybe a little bit of time. The motivation was not a video game, but imagine-- to put a real world context on it-- what's sometimes called cancer clusters or rare disease clusters. You can read about these often in the newspaper, where somebody has seen some evidence suggestive of some maybe hidden environmental cause-- maybe it's a toxic chemical leak or something-- that seems to be responsible for-- or maybe they don't have a cause. They just see a suspicious coincidence of some very rare disease, a few cases that seem surprisingly clustered in space and time. So for example, let's say this is one square mile of a city. And each dot represents one case of some very rare disease that occurred in the span of a year. And you look at this. And you might think that, well, it doesn't look like those dots are completely, uniformly, randomly distributed over there. Maybe there's some weird thing going on in the upper left or northwest corner-- some who knows what-- making people sick. So let me just ask you. On a scale of 0 to 10, where 10 means you're sure there's some kind of thing going on and some special cause in some part of this map. And 0 means no, you're quite sure there's nothing going on. It's just random. What do you say? To what extent does this give evidence for some hidden cause? So give me a number between 0 and 10. AUDIENCE: 5. JOSH TENENBAUM: OK, great. 5, 2, 7. I heard a few examples of each of those. Perfect. That's exactly what people do. You could do the same thing on Mechanical Turk, and get 10 times as much data, and pay a lot more. It would be the same. I'll show you the data in a second. But here's the model that we built. So again, this model is a very simple kind of generative model of a hidden cause that various people in statistics have worked with for a while. We're basically modeling a hidden cause as a mixture. Or I mean it's a generative model, so we have to model the whole data. When we say there's a hidden cause, we don't necessarily mean that everything is caused by this. It's just that the data we see in this picture is a mixture of whatever the normal random thing going on is plus possibly some spatially localized cause that has some unknown position, unknown extent. Maybe it's a very big region. And some unknown intensity-- maybe it's causing a lot of cases or not that many. The hypothesis space maybe is best visualized like this. 
Each of these squares is a different hypothesis of a mixture density or a mixture model-- which is a mixture of just whatever the normal uniform process is that causes a disease unrelated to space and then some kind of just Gaussian bump, which can vary in location, size, and intensity, that is the possible hidden cause of some of these cases. And what the model that we propose says is that your sense of this spatial coincidence-- like when you look at a pattern of dots really and you see, oh, that looks like there's a hidden cluster there somewhere. It's basically you're trying to see whether something like one of those things on the right is going on as opposed to the null hypothesis of just pure randomness. So we take this log likelihood ratio, or log probability, where we're comparing the probability of the data under the hypothesis that there's some interesting hidden cause, one of the things on the right, versus the alternative hypothesis that it's just random, which is just the simple, completely uniform density. And what makes this a little bit interesting computationally is that there's an infinite number of these possibilities on the right. There's an infinite number of different locations and sizes and intensities of the Gaussian. And you have to integrate over all of them. So again, there's not going to be a whole lot of mathematical details here. But you can read about this stuff if you want to read these papers that we had here. But for those of you who are familiar with this, with working with latent variable models, effectively what you're doing is just integrating, either analytically or in a simulation, over all the possible models, and sort of trying to compute on average how much the evidence supports something like what you see on the right, one of those cluster possibilities, versus just uniform density. And now what I'm showing you is that model compared to people's judgments on an experiment. So in this experiment, we showed people patterns like the one you just saw. The one you saw is this one here. But in the different stimuli, we varied parameters that we thought would be relevant. So we varied how many points there were total, how strong the cluster was in various ways, whether it was very tightly clustered or very big, the relative number of points in the cluster versus not. So what you can see here, for example, is it's a very similar kind of geometry, except here this is a sort of biggish cluster. And then basically there are four points that look clustered and two that aren't. And in these cases, we just make the four points more tightly clustered. Here, what we're doing is we're going from having no points that look clustered to having almost all of the points looking clustered, and just varying the ratio of clustered points to non-clustered points. Here, we're just changing the overall number. So notice that this one is basically the same as this one. So again, at both of these, we've got four clustered points and two seemingly non-clustered ones. And here we just scale up in set-- or scale up from four and two, to eight and four. And here we scale it down to two and one, and various other manipulations. And what you can see is that they have various systematic effects on people's judgments. So what I'm calling the data there is the average of about 150 people who did the same judgment you did-- 0 to 10. What you can see is the one I gave you was this one here. And the average judgment was almost exactly five.
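For concreteness, here is a minimal sketch of that computation-- Monte Carlo integration over the cluster hypotheses, compared against the uniform null. The priors over center, spread, and mixing weight are invented for illustration, and the Gaussian bump is not truncated to the unit square, so this is only a cartoon of the model in the papers:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
pts = rng.random((6, 2))
pts[:4] *= 0.3   # six disease cases in the unit square; four in one corner

def log_lik_cluster(points, n_samples=5_000):
    """Average the data likelihood over sampled hidden-cause hypotheses:
    a Gaussian bump of unknown center, spread, and mixing weight, mixed
    with the uniform background (density 1 on the unit square)."""
    liks = np.empty(n_samples)
    for s in range(n_samples):
        center = rng.random(2)
        spread = rng.uniform(0.02, 0.3)
        weight = rng.uniform(0.1, 0.9)   # fraction of cases due to the cause
        bump = multivariate_normal(center, spread**2 * np.eye(2))
        mix = weight * bump.pdf(points) + (1 - weight) * 1.0
        liks[s] = np.prod(mix)
    return np.log(liks.mean())

# The uniform null has density 1 everywhere, so its log-likelihood is 0.
print(log_lik_cluster(pts) - 0.0)   # > 0 favors a hidden cause
```

A clearly positive log ratio for tightly clustered points, and a ratio near zero for scattered ones, is the graded quantity that the 0-to-10 judgments above are being compared against.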
And if you look at the variance, it looks just like what you saw here. Some people say two or three. Some people say seven. I chose one that was right in the middle. The interesting thing is that, while you maybe felt like you were guessing-- and if you just listened to what everyone else was saying, maybe it sounds like we're just shouting out random numbers-- that's not what you're doing. On that one, it looks like it, because it's right on the threshold. But if you look over all these different patterns, what you see is that sometimes people give much higher numbers than others. Sometimes people give much lower numbers than others. And the details, that variation, both within these different manipulations we did and across them, are almost perfectly captured by this very simple probabilistic generative model for a latent cause. So the model here is-- these are the predictions that the model I showed you is making, where again, basically, a high bar means there's strong evidence in favor of the hidden latent cause hypothesis-- some, one, or more-- some cluster. A low bar means strong evidence for the alternative hypothesis. The scale is a bit arbitrary. And it's a log probability ratio scale. So I'm not going to comment on the scale. But importantly, it's the same scale across all of these. So a big difference is the same big difference in both cases. And I think this is fairly good evidence that this model is capturing your sense of spatial coincidence, and showing that it's not just random or arbitrary, but it's actually a very rational measure of how much evidence there is in the data for a hidden cause. Here's the same model now applied to a different data set that we actually collected a few years before, which just varies the same kinds of parameters, but has a lot more points. And the same model works in those cases, too. The differences are a little more subtle with these more points. So I'll give you one other example of this sort of thing. Like the one I just showed you, we're taking a fairly simple statistical model. This one, as you'll see, isn't even really causal. This one at least, that I showed you, is causal. The advantage of this other one is that it's both a kind of textbook statistics example, and it's one where people do something more interesting than what's in the textbook. Although you can extend the textbook analysis to make it look like what people do. And unlike in this case here, you can actually measure the empirical statistics. You can go out, and instead of just positing, here's a simple model of what a latent environmental cause would be like, you can actually go and measure all the relevant probability distributions and compare people not just with a notional model, but with what, in some stronger sense, is the rational thing to do, if you were doing some kind of intuitive Bayesian inference. So these are, again, stuff that Tom Griffiths did with me, in and then after grad school. We asked people to make the following kind of everyday prediction. So we said, suppose you read about a movie that's made $60 million to date. How much money will it make in total? Or you see that something's been baking in the oven for 34 minutes. How long until it's ready? You meet someone who's 78 years old. How long will they live? Your friend quotes to you from line 17 of his favorite poem. How long is the poem? Or you meet a US congressman who has served for 11 years. How long will he serve in total?
So in each of these cases, you're encountering some phenomenon or event in the world with some unknown total duration. We'll call that t, total. And all we know is that t, total, is somewhere between zero and infinity. We might have a prior on it, as you'll see in a second. But we don't know very much about this particular t, total, except you get one example, one piece of data, some t, which we'll just assume is just randomly sampled between zero and t, total. So all we know is that whatever these things are, it's something randomly chosen, less than the total extent or duration of these events. And now we can ask, what can you guess about the total extent or duration from that one observation? Or in mathematical terms, there is some unknown interval from zero up to some maximal value. You can put a prior on what that interval is. And you have to guess the interval from one sampled point sampled randomly within it. It's also very similar-- and another reason we studied this-- to the problem of learning a concept from one example. When you're learning what horses are from one example, or when you're learning what that piece of rock climbing equipment is-- what's a cam-- from one example, or what's a tufa from one example. You can think, there's some region in the space of all possible objects or something, or some set out there. And you get one or a few sample points, and you have to figure out the extent of the region. It's basically the same kind of problem, mathematically. But what's cool about this is we can measure the priors for these different classes of events and compare people with an optimal Bayesian inference. And you see something kind of striking. So here's, on the top-- I'm showing two different kinds of data here. On the top are just empirical statistics of events you can measure in the world; nothing behavioral, nothing about cognition. On the bottom, I'm showing some behavioral data and comparing it with model predictions that are based on the statistics that are measured on top. So what we have in each column is one of these classes of events, like movie grosses in dollars. You can get this data from IMDb, the Internet Movie Database. You can see that most movies make $100 million or less. There's sort of a power law. But a few movies make hundreds, or even many hundreds, maybe a billion dollars even, these days. Similarly with poems, they have a power law distribution of length. So most poems are pretty short. They fit on a page or less. But there are some epic poems-- some multi-page, many, many hundreds of lines. And they fall off with a long tail. Lifespans, movie runtimes are kind of unimodal, almost Gaussian-- not exactly. Those red histogram bars show the empirical statistics that we measured from public data. And the red curves show just the best fit of a simple parametric model, like a Gaussian or a power law distribution that I'm mentioning. House of Representatives-- how long people serve in the House has this kind of gamma, or particular gamma called an Erlang, shape with a little bit of an incumbent effect. Cake baking times-- so remember we asked how long is this cake going to bake for. They don't have any simple parametric form when you go in and look at cookbooks. But you see, there's something systematic there. There's a lot of things that are supposed to bake for exactly an hour. There are some which have a smaller, or a shorter, but broad mode. And then there's a few epic 90-minute cakes out there. So that's all the empirical statistics.
Now what you're seeing on the bottom is people's-- well, on the y-axis, the vertical axis, you have the average-- it's a median-- of a bunch of human predictions for the total extent of any one of these things, like your guess of the total length of a poem given that, basically, there is a line 17 in it. And on the x-axis, what you're seeing is that one data point, the one value of t, which is, all you know is that it's somewhere between zero and t, total. So different groups of subjects were given five different values. So you see five black dots, which correspond to what five different subgroups of subjects said for each of these possible t values. And then the black and red curves are the model fit, which comes from taking a certain kind of Bayesian optimal prediction, where the prior is what's specified on the top-- that's the prior on t, total. The likelihood is a sort of uniform random density. So it's just saying t is just a uniform random sample from zero up to t, total. You put those together to compute a posterior. And then you-- the particular estimator we're using is what's called the posterior median. So we're looking at the median of the posterior and comparing that with the median of human subjects. And what you can see is that it's almost a perfect fit. And it doesn't really matter whether you take the red curve, which is what comes from approximating the prior with one of these simple parametric models, or the black one, which comes from just taking the empirical histogram. Although, for the cake baking times, you really can only go for the empirical one. Because there is no simple parametric one. That's why you just see a jagged black line there. But it's interesting that it's almost a perfect fit. There are a couple-- just like somebody asked in Demis's talk-- there's one or two cases we found where this model doesn't work, sometimes dramatically, and sometimes a little bit. And they're all interesting. But I don't have time to talk about it. That's one of the things I decided to skip. If you'd like to talk about it, I'm happy to do that. But most of the time, in most of the cases we've studied, these are representative. And I think, again, all of the failure cases are quite interesting ones. That point to, this is one of the many things we need to go beyond. But the interesting thing isn't just that the curves fit the data, but the fact that the actual shape is different in each case. Depending on the prior of these different classes of events, you get a fundamentally different, or qualitatively different, prediction function. Sometimes it's linear. Sometimes it's non-linear. Sometimes it has some weird shape. And really, quite surprisingly to us, people seem to be sensitive to that. So they seem to predict in ways that are reflective of not only the optimal Bayesian thing to do, but the optimal Bayesian thing to do from the optimal prior, from the correct prior. And I certainly don't want to suggest that people always do this. But it was very interesting to us that for just a bunch of everyday events-- and really, the places where this analysis works best are ones, again, where we think people actually might plausibly have good reasons to have the relevant experiences with these everyday events-- they seem to be sensitive to both the statistics, in the sense of just what's going on in the world, and doing the right statistical prediction. So that's what we did. 10 years ago or so, that was like the state of the art for us.
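As an aside not in the lecture, here is a minimal numerical sketch of that prediction rule-- the posterior median of t_total given one sample t, with likelihood 1/t_total for t_total >= t. The two priors below, a power law and a Gaussian, are stand-ins for the measured distributions, with made-up parameters:

```python
import numpy as np

def posterior_median(t, prior_pdf, t_grid):
    """Posterior median of t_total given one t drawn uniformly from
    [0, t_total]: the likelihood is 1/t_total for t_total >= t, else 0."""
    post = np.where(t_grid >= t, prior_pdf(t_grid) / t_grid, 0.0)
    post /= np.trapz(post, t_grid)                 # normalize on the grid
    cdf = np.cumsum(post) * (t_grid[1] - t_grid[0])
    return t_grid[np.searchsorted(cdf, 0.5)]

t_grid = np.linspace(1.0, 1000.0, 100_000)
power_law = lambda x: x ** -1.5                            # e.g. movie grosses
lifespan = lambda x: np.exp(-0.5 * ((x - 75) / 15) ** 2)   # e.g. lifespans

print(posterior_median(60, power_law, t_grid))   # roughly 95: a multiplicative rule
print(posterior_median(78, lifespan, t_grid))    # roughly the mid-80s: a small increment
```

The qualitative difference in the two outputs-- scale up by a constant factor under a power-law prior, add a modest increment under a near-Gaussian one-- is exactly the difference in prediction-function shapes described above.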
And then we wanted to know, well, OK, can we take these sorts of ideas and scale them up to some actually interesting cognitive problems, like say, for example, learning words for object categories. And we did some of that. I'll show you a little bit of that before showing you what I think was missing there. I mean, in a lot of ways, this is a harder problem. I mean, it's very similar, as I said. It's basically like-- just like the problem I just showed you, where there was an unknown total extent or duration, and you got one random sample from it, here there is some un-- imagine the space of all possible objects-- could be a manifold or described by a bunch of knobs. I mean, these are all generated from some computer program. If these were real, biological things, they would be generated from DNA or whatever it is. But there's some huge, maybe interestingly structured, space of all possible objects. And within that space is some subset, some region or subset, somehow described, that is the set of tufas. And somehow you're able to grasp that subset, more or less, to get its boundaries, to be able to say yes or no as you did at the beginning of the lecture from just, in this case, a few points-- three points-- randomly sampled from somewhere in that region. It would work just as well if I showed you one of them, basically. So in some sense, it's the same problem. But it's much harder, because here, the space was this one-dimensional thing. It was just a number. Whereas here, we don't know what's the dimensionality of the space of objects. We don't know how to describe the regions. Here we knew how to describe the regions. They were just intervals with a lower bound at zero and an upper bound at some unknown thing. And the hypothesis space of possible regions was just all the possible upper bounds of this event duration. Here we don't know how to describe this space. We don't know how to describe the regions that correspond to object concepts. We don't know how to put a prior on those hypotheses. But in some work that we did-- in particular, some work that I did with Fei Xu, who is also a professor at Berkeley. We were colleagues and friends in graduate school. We sort of did what we could at the time. So we made some guesses about what that hypothesis space-- what that space might be like, what the hypothesis space might be like, how to put some priors on there, and so on. Used exactly the same likelihood, which was just this very simple idea that the observed examples are a uniform random draw from some subset of the world. And you have to figure out what that subset is. And we were able to make some progress. So what we did was we said, well, like in biology, perhaps-- and if you saw-- how many people saw Surya Ganguli's lecture yesterday morning? Cool. I sort of tailored this for assuming that you probably had seen that. Because there's a lot of similarities, or parallels, which is neat. And it's, again, part of engaging on generative models and neural networks. As you saw him do, you'll get my version of this. So also, like he mentioned, there are actual processes in the world which generate objects-- something like this. We know about evolution-- produces basically tree-structured groups, which we call species, or genus, or something like that, or just taxa, or something. There's groups of organisms that have a common evolutionary descent. That's the way a biologist might describe it. And we know, these days, a lot about the mechanisms that produce that.
Even going back 100 or 200 years, say, to Darwin, we knew something about the mechanisms that produced it, even if we didn't know the genetic details-- ideas of something like mutation, variation, natural selection as a kind of mechanistic account, about right up there with Newton and forces. But anyway, scientists can describe some process that generates trees. And maybe people have some intuition-- just like people seem to have some intuitions about these statistics of everyday events, maybe they have some intuitions, somehow, about the causal processes in the world which give rise to groups and groups and subgroups. And they can use that to set up a hypothesis space. And the way we went about this is, we have no idea how to describe people's internal mental models of these things, but you can do some simple-- there are simple ways to get this picture by just basically asking people to judge similarity and doing hierarchical clustering. So this is a tree that we built up by just asking people-- getting some subjective similarity metric and then doing hierarchical clustering, which we thought could roughly approximate maybe the internal hierarchy that our mental models impose on this. Were you raising your hand or just-- no. OK. Cool. We ultimately found this dissatisfying, because we don't really know what the features are. We don't really know if this is the right tree or how people built it up. But it actually worked pretty well, in the sense that we could build up this tree. We could then assume that the hypotheses for concepts just corresponded to branches of the tree. And then you could-- again, to put it just intuitively, the way you do this learning from one or a few examples, let's say that you see those few tufas over there. You're basically asking, those are randomly drawn from some internal branch of the tree, some subtree. Which subtree is it? And intuitively, if you see those things and you say, well, they are randomly drawn from some branch, maybe it's the one that I've circled. That sounds like a better bet, for example, than this one here, or maybe this one, which would include one of these things, but not the others. So that's probably unlikely. And it's probably a better bet than, say, this branch, or this branch, or these ones, which are logically compatible, but somehow it would have been sort of a suspicious coincidence. If the set of tufas had really been this branch here, or this one here, then it would have been quite a coincidence that the first three examples you saw were all clustered over there in one corner. And what we showed was that, that kind of model, where that suspicious coincidence came out from the same kinds of things I've just been showing you for the causal clustering example, and for the interval thing-- it's the same Bayesian math, but now with this tree-structured hypothesis space-- actually did a pretty good job of capturing people's judgments. We gave people one or a few examples of these concepts, where the examples could be more narrowly or broadly spread, just like you saw in the clustering thing, but just sort of less extensive. We did this with adults. We did this with kids. And I won't really go into any of the details. But if you're interested, check out these various Xu and Tenenbaum papers. That's the main one there. And you know, the model kind of worked. But ultimately, we found it dissatisfying. Because we couldn't really explain-- we didn't really know what the hypothesis space was.
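The "suspicious coincidence" logic at work in that tree model can be sketched in a few lines. Assume three nested branch hypotheses that all contain the observed examples, with made-up sizes; the likelihood (1/|h|)^n is the size principle used in this line of work:

```python
# Nested branch hypotheses that all contain the observed examples,
# with made-up sizes (number of objects each branch covers).
branches = {"small-cluster": 4, "mid-branch": 8, "whole-tree": 24}
prior = {h: 1.0 / len(branches) for h in branches}

def posterior(n_examples):
    # Size principle: n examples sampled uniformly from branch h
    # have likelihood (1/|h|)**n, so small consistent branches win fast.
    scores = {h: prior[h] * (1.0 / size) ** n_examples
              for h, size in branches.items()}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

print(posterior(1))   # one example: broader branches keep real mass
print(posterior(3))   # three examples: the smallest branch dominates
```

With one example, the big branches keep real posterior mass; with three tightly clustered examples, the smallest consistent branch dominates-- which is the intuition behind preferring the circled tufa cluster over the whole tree.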
We didn't really know how people were building up this tree. And so we did a few things. We-- meaning I with some other people-- turned to other problems where we had a better idea, maybe, of the feature space and the hypothesis space, but the same kind of ideas could be explored and developed. And then ultimately-- and I'll show you this maybe before lunch, or maybe after lunch-- we went back and tackled the problem of learning concepts from examples with other cases where we could get a better handle on really knowing what the representations that people were using were, and also where we could compare with machines in much more compelling apples-to-apples ways. In some sense here, there's no machine, as far as I know, that can solve this problem as well as our model. On the other hand, that's, again, very much like the issue that came up when we were talking about-- I guess maybe it was with you, Tyler-- when we were talking about the deep learning-- or with you, Leo-- the deep reinforcement network. A machine that's looking at this just as pixels is missing so much of what we bring to it, which is, we see these things as three-dimensional objects. And just like the cam in rock climbing, or any of those other examples I gave before, I think that's essential to the abilities that people are exercising. The generative model we build, this tree, is based not on pixels, or even on ConvNet features, but on a sense of the three-dimensional objects, their parts, and their relations to each other. And so, fundamentally, until we know how to perceive objects better, this is not going to be comparable between humans and machines on equal terms. But I'll show you a little bit later some still quite interesting, but simpler, visual concepts that you can still learn and generalize from one example, but where they are comparable on equal terms. But first I want to tell you a little bit about these-- yet another cognitive judgment, which, like the word learning, or concept learning cases, involved generalizing from a few examples. They also involve using prior knowledge. But they're ones where maybe we have some way of capturing people's prior knowledge by using the right combination of statistical inference on some kind of symbolically structured model. So you can already see, as-- I mean, just sort of to show the narrative here. The examples I was giving here, this doesn't require any symbolic structure. All that stuff I was talking about at the beginning, about how we have to combine statistical inference, sophisticated statistical inference, with sophisticated symbolic representations-- you don't need any of that here. All the representations could just be counting up numbers or using simple probability distributions that statisticians have worked with for over 100 years. Once we start to go here, now we have to define a model with some interesting structure, like a branching tree structure, and so on. And as you'll see, we can quickly get to lots more interesting causal, compositionally-structured generative models in similar kinds of tasks. And in particular, we were looking for-- for a few years, we were very interested in these property induction tasks. So this was-- it happened to be-- I mean, I think this was a coincidence. Or maybe we were both influenced by Susan Carey, actually.
So the work that Surya talked about, that he was trying to explain as a theoretician-- remember, Surya and Andrew Saxe, they were trying to give the theory of these neural network models that Jay McClelland and Tim Rogers had built in the early 2000s, around the same time we were doing this work. And they were inspired by some of Susan Carey's work on children's intuitive biology, as well as other people out there in cognitive psychology-- for example, Lance Rips, and Smith and Medin. Many, many cognitive psychologists studied things like this-- Dan Osherson. They often talked about this as a kind of inductive reasoning, or property induction, where the idea was-- so it might look different from the task I've given you before, but actually, it's deeply related. The task was often presented to people like an argument with premises and a conclusion, kind of like a traditional deductive syllogism, like all men are mortal, Socrates is a man, therefore Socrates is mortal. But these are inductive in that there is no-- you can't conclude with deductive certainty that the conclusion follows from the premises or is falsified by the premises, but rather you just make a good guess. The statements above the line provide some, more or less, good or bad evidence for the statement below the line being true. These studies were often done with-- they could be done with just sort of familiar biological properties, like having hairy legs or being bigger than a breadbox. I mean, it's also-- it's very much the same kind of thing that Tom Mitchell was talking about, as you'll start to see. There's another reason why I wanted to cover this. We worked on these things because we wanted to be able to engage with the same kinds of things that people like Jay McClelland and Tom Mitchell were thinking about, coming from different perspectives. Remember, Tom Mitchell showed you his way of classifying brain representations of semantics with matrices of objects and 20-question-like features that included things like is it hairy, or is it alive, or does it lay eggs, or is it bigger than a car, or bigger than a breadbox, or whatever. Any one of these things-- basically, we're getting at the same thing. Here there's just what's-- often these experiments with humans were done with so-called blank predicates, something that sounded vaguely biological, but was basically made up, or that most people wouldn't know much about. Does anyone know anything about T9 hormones? I hope not, because I made it up. But some of them were just done with things that were real, but not known to most people. So if I tell you that gorillas and seals both have T9 hormones, you might think it's sort of, fairly plausible that horses have T9 hormones, maybe more so than if I hadn't told you anything. Maybe you think that argument is more plausible than the one on the right; given that gorillas and seals have T9 hormones, that anteaters have T9 hormones. So maybe you think horses are somehow more similar to gorillas and seals than anteaters are. I don't know. Maybe. Maybe a little bit. If I made that bees-- gorillas and seals have T9 hormones. Does that make you think it's likely that bees have T9 hormones, or pine trees? The farther the conclusion category gets from the premises, the less plausible it seems. Maybe the one on the lower right also seems not very plausible, or not as plausible. Because if I tell you that gorillas have T9 hormones, chimps, monkeys, and baboons all have T9 hormones, maybe you think that it's only primates or something.
So they're not a very-- it's, again, one of these typicality-suspicious coincidence businesses. So again, you can think of it as-- you can do these experiments in various ways. I won't really go through the details, but it basically involves giving people a bunch of different sets of examples, just like-- I mean, in some sense, the important thing to get is that abstractly it has the same character of all the other tasks you've seen. You're giving people one or a few examples, which we're going to treat as random draws from some concept, or some region in some larger space. In this case, the examples are the different premise categories, like gorillas and seals are examples of the concept of having T9 hormones. Or gorillas, chimps, monkeys, and baboons are an example of a concept. We're going to put a prior on possible extents of that concept, and then ask what kind of inferences people make from that prior, to figure out what other things are in that concept. So are horses in that same concept? Or are anteaters? Or are horses in it more or less, depending on the examples you give? And what's the nature of that prior? And what's good about this is that, kind of like the everyday prediction task-- the lines of the poems, or the movie grosses, or the cake baking-- we can actually sort of go out and measure some features that are plausibly relevant, to set up a plausibly relevant prior, unlike the interesting object cases. But like the interesting object cases, there are some interesting hierarchical and other kinds of causal compositional structures that people seem to be using that we can capture in our models. So here, again, the kinds of experiments-- these features were generated many years ago by Osherson and colleagues. But it's very similar to the 20 questions game that Tom Mitchell used. And I don't remember if Surya talked about where these features came from, that he talked a lot about a matrix of objects and features. I don't know if he talked about where they come from. But actually, psychologists spent a while coming up with ways to get people to just tell you a bunch of features of animals. This is, again, it's meant to capture the knowledge that maybe a kid would get from maybe plausibly reading books and going to the zoo. We know that elephants are gray. They're hairless. They have tough skin. They're big. They have a bulbous body shape. They have long legs. These are all mostly relative to other animals. They have a tail. They have tusks. They might be smelly, compared to other animals-- smellier than average is sort of what that means. They walk, as opposed to fly. They're slow, as opposed to fast. They're strong, as opposed to weak. It's that kind of business. So basically what that gives you is this big matrix. Again, the same kind of thing that you saw in Surya's talk, the same kind of thing that Tom Mitchell is using to help classify things, the same kind of thing that basically everybody in machine learning uses-- a matrix of data with objects, maybe as rows, and features, or attributes, as columns. 
And the problem here is-- the problem of learning is to say-- the problem of learning and generalizing from one example is to take a new property, which is a new concept, which is like a new column here, to get one or a few examples of that concept, which is basically just filling in one or a few entries in that column, and figure out how to fill in the others, to decide, does it or doesn't it have that property, somehow building knowledge that you can generalize from your prior experience, which could be captured by, say, all the other features that you know about objects. So that's the way that you might set up this problem, which again, looks like a lot of other problems of, say, semi-supervised learning or sparse matrix completion. It's a problem in which we can, or at least we thought we could, compare humans and many different algorithms, and even theory, like from Surya's talk. And that seemed very appealing to us. What we thought, though, that people were doing-- which is maybe a little different than what-- or somewhat different-- well, quite different than what Jay McClelland thought people were doing-- maybe a little bit more like what Susan Carey or some of the earlier psychologists thought people were doing-- was something like this. That the way we solve this problem, the way we bridged from our prior experience to new things we wanted to learn, was not, say, by just computing second-order statistics and correlations, and compressing that through some bottleneck hidden layer, but by building a more interesting structured probabilistic model that was, in some form, causal-- in some form-- in some form, compositional and hierarchical-- something kind of like this. And this is a good example of a hierarchical generative model. There are three layers of structure here. The bottom layer is the observable layer. So the arrows in these generative models point down, often, usually, where the thing on the bottom is the thing you observe, the data of your experience. And then the stuff above it are various levels of structure that your mind is positing to explain it. So here we have two levels of structure. The level above this is sort of this tree in your head. The idea-- it's like a certain kind of graph structure, where the objects, or the species, are the leaf nodes. And there are some internal nodes corresponding maybe to higher-level taxa, or groups, or something. You might have words for these, too, like mammal, or primate, or animal. And the idea is that there's some kind of probabilistic model that you can describe, maybe even a causal one, on top of that symbolic structure, that tree, that produces the data that's more directly observable, the observable features, including the things you've only sparsely observed and want to fill in. And then you might also have higher levels of structure. Like if you want to explain, how did you learn that tree in the first place, maybe it's because you have some kind of generative model for that generative model. So here I'm just using words to describe it, but I'll show you some other stuff in a-- or I'll show you something more formal a little bit later. But you could say, well, maybe the way I figure out that there's a tree structure is by having a hypothesis-- the way I figure out that there's that particular tree-structured graphical model of this domain is by having the more general hypothesis that there is some latent hierarchy of species. And I just have to figure out which one it is.
So you could formulate this as a hierarchical inference by saying that what we're calling the form, the form of the model, is like a hypothesis space of models, which are themselves hypothesis spaces of possible observed patterns of feature correlation. And that higher-level knowledge puts some kind of a generative model on these graph structures, where each graph structure then puts a generative model on the data you can observe. And then you could have even higher levels of this sort of thing. And then learning could go on at any or all levels of this hierarchy, higher than the level of experience. So just to show you a little bit about how this kind of thing works, what we're calling the probability of the data given the structure is actually exactly the same, really, as the model that Surya and Andrew Saxe used. The difference is that we were suggesting-- may be right, may be wrong-- that something like this generative model was actually in your head. Surya presented a very simple abstraction of an evolutionary branching process, a kind of diffusion over the tree, where properties could turn on or off. And we built basically that same kind of model. And we said, maybe you have something in your head as a model of, again, the distribution of properties, or features, or attributes over the leaf nodes of the tree. So if you have this kind of statistical model-- if you think that there's something like a tree structure, and properties are produced over the leaf nodes by some kind of switching, on-and-off, mutation-like process-- then you can do something like in this picture here. You can take an observed set of features in that matrix and learn the best tree. You can figure out that thing I'm showing on the top, that structure, which is, in some sense, the best guess of a latent tree structure-- which, if you then define some kind of diffusion mutation process over that tree, would produce with high probability distributions of features like those shown there. If I gave you a very different tree, it would produce other patterns of correlation. And just like Surya said, it can all be captured by the second-order statistics of feature correlations. The nice thing about this is that now this also gives a distribution on new properties. Because each column is conditionally independent given that model-- each column is an independent sample from that generative model-- the idea is that if I observe a new property, and I want to say, well, which other things have this, I can make a guess using that probabilistic model. I can say, all right, given that I know the value of this function over the tree, this stochastic process, at some points, what do I think the most likely values are at other points? And basically, what you get is, again, like in the diffusion process, a kind of similarity-based generalization with a tree-structured metric, that nearby points in the tree are likely to have the same value. So in particular, things that are near to, say, species one and nine are probably going to have the same property, and others maybe less so. And you build that model. And it's really quite striking how much it matches people's intuitions. So now you're seeing the kinds of plots I was showing you before, where-- all my data plots look like this. Whenever I'm showing a scatterplot, by default, the y-axis is the average of a bunch of people's judgments, and the x-axis is the model predictions on the same units or scale.
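As a rough illustration of that generalization computation, here is a minimal sketch with made-up species and tree distances. It stands in for the real model: the mutation process is replaced by a simple kernel that decays with path distance through an assumed taxonomy, and the unobserved entries are filled in with a Gaussian conditional mean. A cartoon of the idea, not the published model.

```python
import numpy as np

# Toy stand-in for tree-based generalization (all numbers made up).
# d[i, j] holds path lengths through an assumed taxonomy, so chimp and
# gorilla are close, horse and seal are farther away.
species = ["chimp", "gorilla", "horse", "seal"]
d = np.array([[0, 1, 4, 5],
              [1, 0, 4, 5],
              [4, 4, 0, 3],
              [5, 5, 3, 0]], dtype=float)

# Diffusion-like prior: nearby leaves covary strongly, distant ones weakly.
length_scale = 2.0
K = np.exp(-d / length_scale) + 1e-6 * np.eye(len(species))

# Observe the new property ("has T9 hormones") for chimp and gorilla only.
obs_idx, hid_idx = [0, 1], [2, 3]
y_obs = np.array([1.0, 1.0])

# Gaussian conditional mean: E[y_hidden | y_observed] under the kernel prior.
K_oo = K[np.ix_(obs_idx, obs_idx)]
K_ho = K[np.ix_(hid_idx, obs_idx)]
pred = K_ho @ np.linalg.solve(K_oo, y_obs)

for name, p in zip([species[i] for i in hid_idx], pred):
    print(f"{name}: predicted strength of generalization {p:.2f}")
# horse scores a bit higher than seal only because of the assumed
# distances; the point is that generalization decays with tree distance.
```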
And each of these scatterplots is from a different experiment-- not done by us, done by other people, like Osherson and Smith from a couple of decades ago. But they all sort of have the same kind of form, where each dot is a different set of examples, or a different argument. And what typically varied within an experiment-- you vary the examples. And you hold constant the conclusion category. And you see, basically, how much evidential support different sets of two or three examples give to a certain conclusion. And it's really, again, quite striking that-- sometimes in a more categorical way, sometimes in a more graded way-- but basically, people's average judgments here just line up quite well with the sort of Bayesian inference on this tree-structured generative model. These are just examples of the kinds of stimuli here. Now, we can compare. One of the reasons why we were interested in this was to compare, again, many different approaches. So here I'm going to show you a comparison with just a variant of our approach. It's the same kind of hierarchical Bayesian model, but now the structure isn't a tree, it's a low-dimensional Euclidean space. You can define the same kind of proximity smoothness thing. I mean, again, it's more standard in machine learning. It's related to Gaussian processes. It's much more like neural networks. You could think of this as kind of like a Bayesian version of a bottleneck hidden layer with two dimensions, or a small number of dimensions. The pictures that Surya showed you were all higher than two dimensions in the latent space, or the hidden variable space, of the neural network, the hidden layer space. But when you compress it down to two dimensions, it looks pretty good. So it's the same kind of idea. Now what you're saying is you're going to find, not the best tree that explains all these features, but the best two-dimensional space. Maybe it looks like this. Where, again, the probabilistic model says that things that are closer in this two-dimensional space are more likely to have the same feature value. So you're basically explaining all the pairwise feature correlations by distance in this space. It's similar. Importantly, it's not as causal and compositional. The tree models something about, possibly, the causal processes of how organisms come to be. Suppose I told you about a subspecies, like, whatever-- what's a good example-- different breeds of dogs. Or I told you that, oh, well, there's not just wolves, but there's the gray-tailed wolf and the red-tailed wolf. Red-tailed wolf? I don't know. Again, they're probably similar, but one red-tailed wolf, whatever that is, is probably more similar to another red-tailed wolf-- it probably has more features in common with it than with a gray-tailed wolf, and probably more with the gray-tailed wolf than with a dog. The nice thing about a tree is I can tell you these things, and you can, in your mind-- maybe you'll never forget that there's a red-tailed wolf. There isn't. I just made it up. But if you ever find yourself thinking about red-tailed wolves and whether their properties are more or less similar to each other than to gray-tailed wolves, or less so to dogs, or so on, it's because I just said some things, and you grew out your tree in your mind. That's a lot harder to do in a low-dimensional space. And it turns out that that model also fits this data less well. So here I'm just showing two of those experiments, with a sketch of the spatial variant below.
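The tree-versus-space comparison amounts to swapping one ingredient in the earlier sketch: the distance matrix. With hypothetical 2D coordinates in place of tree path lengths, the same conditioning machinery gives the spatial model.

```python
# Building on the sketch above: the spatial hypothesis changes only the
# distances. The 2D coordinates are made up for illustration.
coords = np.array([[0.0, 0.0],   # chimp
                   [0.2, 0.1],   # gorilla
                   [2.5, 1.0],   # horse
                   [1.8, 2.2]])  # seal
d_space = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
K_space = np.exp(-d_space / length_scale) + 1e-6 * np.eye(len(species))
# ...then condition exactly as before, with K_space in place of K.
```

What the tree buys you, as described above, is room to grow new structure compositionally (a new leaf under "wolf"); in the spatial version you would have to find coordinates for a new point consistent with all the old distances.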
Some of them are well fit by that model, but others are less well fit. Now, that's not to say that they wouldn't be good for other things. So we also did some experiments-- these were experiments that we did. Oh, actually, I forgot to say, really importantly: this was all work done by Charles Kemp, who's now a professor at CMU. And it was part of the stuff that he did in his PhD thesis. So we were interested in this as a way, not to study trees, but to study a range of different kinds of structures. And it is true, going back, I guess, to the question you asked, this is what I was referring to about low-dimensional manifolds. There are some kinds of knowledge representations we have which might have a low-dimensional spatial structure, in particular, like mental maps of the world. So for our intuitive models of the Earth's surface, and things which might be distributed over the Earth's surface spatially, a two-dimensional map is probably a good one. So here we considered a similar kind of concept learning from a few examples task, but now we put it like this. We said, suppose that a certain kind of Native American artifact has been found in sites near city x. How likely is it also to be found in sites near city y? Or, given sites near cities x and y, how about city z? And we told people that different Native American tribes maybe had-- some lived in a very small area, some lived in a very big area. Some lived in one place, some another place. Some lived here, and then moved there. We just told people very vague things that tap into people's probably badly remembered, and very distorted, versions of American history, and that would basically suggest that there should be some similar kind of spatial diffusion process, but now in your 2D mental map of cities. So again, there's no claim that there's any reality to this, or fine-grained reality. But we thought it would sort of roughly correspond to people's internal causal generative models of archeology. Again, I think it says something about the way human intelligence works that none of us are archaeologists, probably, but we still have these ideas. And it turned out that, here, a spatially structured model actually works a lot better. Again, it shouldn't be surprising. It's just showing that the judgments people make when they're making inferences from a few examples, just like you saw with predicting the everyday events, but now in a much more interestingly structured domain, are sensitive to the different kinds of environmental statistics. There it was different power laws versus Gaussians of cake baking-- or of movie grosses versus lifetimes or something. Here it's other stuff. It's more interestingly structured kinds of knowledge. But you see the same kind of picture. And we thought that was interesting, and again, it suggests some of the ways that we are starting to put these tools together, putting together probabilistic generative models with some kind of interestingly structured knowledge. Now, again, as you saw from Surya, and as Jay McClelland and Tim Rogers worked on, you can try to capture a lot of this stuff with neural networks. The neat thing about the neural networks that these guys have worked on is that exactly the same neural network can capture this kind of thing, and it can capture this kind of thing. So you can train the very same hidden multilayer neural network with one matrix of objects and features.
And the very same neural network can predict the tree-structured patterns for animals and their properties, as well as the spatially structured patterns for Native American artifacts and their cities. The catch is that it doesn't do either of them that well-- and when I say it doesn't do that well, I mean in capturing people's judgments. It doesn't do as well as the best tree-structured models do for people's concepts of animals and their properties. And it doesn't do as well as the best spatially structured models. But again, it's in the same spirit as the DeepMind networks for playing lots of Atari games. The idea there is to have the same network solve all these different tasks. And in some sense, I think that's a good idea. I just think that the architecture should have a more flexible structure. So we would also say, in some sense, the same architecture is solving all these different tasks. It's just that this is one setting of it. And this is another setting of it. And where they differ is in the kind of structure that-- well, they differ in the fact that they explicitly represent structure in the world. And they explicitly represent different kinds of structure. And they explicitly represent that different kinds of structure are appropriate to different kinds of domains in the world, and to our intuitions about the causal processes that are at work producing the data. And I think that, again, that's sort of the difference between the pattern classification and the understanding or explaining view of intelligence. The explanations, of course, go a lot beyond different ways that similarity can be structured. So one of the kind of nice things-- oh, and I guess two other points beyond that. One is that to get the neural networks to do that, you have to train them with a lot of data. Remember, as Tommy pushed him on in that talk, Surya was very concerned with modeling the dynamics of learning in the sense of the optimization time course, how the weights change over time. But he was usually looking at infinite data. So he was assuming that you had, effectively, an infinite number of columns of any of these matrices. So you could perfectly compute the statistics. And another important thing about the difference between the neural network models and the ones I was showing you is that, suppose you want to train the model, not on an infinite matrix, but on a small finite one, and maybe one with missing data. The neural network will do a much poorer job capturing the structure than these more structured models. And again, in a way that's familiar-- have you guys talked about the bias-variance dilemma? So it's that same kind of idea that you probably heard about from Lorenzo. Was it Lorenzo or one of the machine learning lecturers? OK. So it's that same kind of idea, but now applied in this interesting case of structured estimation of generative models for the world: if you have relatively little data, and sparse data, then having the inductive bias that comes from a more structured representation is going to be much more valuable. The key-- and again, this is something that Charles and I were really interested in-- is that, like the DeepMind people, like the connectionists, we wanted to build general-purpose semantic cognition, general-purpose learning and reasoning systems.
And we wanted to somehow figure out how you could have the best of both worlds, how you could have a system that relatively quickly could come to get the right kind of strong constraint-inductive bias in some domain, and a different one for a different domain, yet could learn in a flexible way to capture the different structure in different domains. More on that in a little bit. But the other thing I wanted to talk about here is just ways in which our mental models, our causal and compositional ones, go beyond just similarity. I guess, since time is short-- well, I was planning to go through this relatively quickly. But anyway, mostly I'll just gesture towards this. And if you're interested, you could read the papers that Charles has, or his thesis. But here, there's a long history of asking people to make these kind of judgments, in which the basis for the judgment isn't something like similarity, but some other kind of causal reasoning. So for example, consider these things here. Poodles can bite through wire, therefore German shepherds can bite through wire. Is that a strong argument or weak? Compare that with, dobermans can bite through wire, therefore German shepherds can bite through wire. So how many people think that the top argument is a stronger one? How many people think the bottom line is a stronger one? So that's typical. About twice as many people prefer the top one. Because intuitively-- do I have a little thing that will appear? Intuitively, anyone want to explain why you thought so? AUDIENCE: Poodles are really small. JOSH TENENBAUM: Poodles are small or weak. Yes. And German shepherds are big and strong. And what about dobermans? AUDIENCE: They're just as big as German shepherds. JOSH TENENBAUM: Yeah. That's right. So they're more similar to German shepherds, because they're both big and strong. But notice that something very different is going on here. It's not about similarity. It's sort of anti-similarity. But it's not just anti-similarity. Suppose I said, German shepherds can bite through wire, therefore poodles can bite through wire. Is that a good argument? AUDIENCE: No. It's an argument against. JOSH TENENBAUM: No. It's sort of a terrible argument, right? So there's some kind of asymmetric dimensional reasoning going on. Or similarly, if I said, which of these seems better intuitively; Salmon carry some bacteria, therefore grizzly bears are likely to carry it, versus grizzly bears carry this, therefore salmon are likely to carry it. How many people say salmon, therefore grizzly bears? How many people say grizzly bears, therefore salmon? How do you know? Those who-- yeah, you're right. I mean, you're right in that's what people say. I don't know if it's right. Again, I made it up. But why did you say that, those of you who said salmon? AUDIENCE: Bears eat salmon. JOSH TENENBAUM: Bears eat salmon. Yeah. So assuming that's true, so we're told or see on TV, then yeah. So anyway, these are these different kinds of things that are going on. And to cut to the chase, what we showed is that you could capture these different patterns of reasoning with, again, the same kind of thing, but different. It's also a hierarchical generative model. It also has, the key level of the hierarchy is some kind of directed graphical structure that generates distribution on observable properties. But it's a fundamentally different kind of structure. It's not just a tree or a space. It might be a different kind of graph and a different kind of process. 
So to be a little bit more technical, the things I showed you with the tree and the low-dimensional space had a different geometry to the graph, but the same stochastic process operating over it. It was, in both cases, basically a diffusion process. Whereas to get the kinds of reasoning that you saw here, you need a different kind of graph. In one case, it's like a chain, to capture a dimension of strength or size, say. In the other case, it's some kind of food web thing. It's not a tree. It's that kind of directed network. But you also need a different process. So the kind of probability model defined over it is different. And it's easy to see-- for example, on the reasoning with these threshold things, like the strength properties, if you compare a 1D chain with just symmetric diffusion, you get a much worse fit to people's judgments than if you use what we called this drift threshold thing, which is basically a way of saying, OK, I don't know-- there's some mapping from strength to being able to bite through wire. I don't know exactly what it is. But the higher up you go on one, the more likely it probably is that you can do the other. So that provides a wonderful model of people's judgments on these kinds of tasks. But that sort of diffusion process-- if it was like mutation in biology-- would provide a very bad model. That's the second row here. Similarly, this sort of directed, noisy transmission process on a food web does a great job of modeling people's judgments about diseases, but not a very good job of modeling people's judgments about these biological properties. But the tree models you saw before, that do a great job of modeling people's judgments about the properties of animals, do a lousy job of modeling these disease judgments. So we have this picture emerging that, at the time, was very satisfying to us. That, hey, we can take this domain of, say, animals and their properties, or the various things we can reason about-- and there are a lot of different ways we can reason about just this one domain-- and by building these structured probabilistic models with different kinds of graph structures that capture different kinds of causal processes, we could really describe a lot of different kinds of reasoning. And we saw this as part of a theme that a lot of other people were working on. So this is-- I mentioned this before, but now I'm just sort of throwing it all out there. A lot of people at the time-- again, this is maybe somewhere between 5 and 10 years ago-- more like six or seven years ago-- were extremely interested in this general view of common sense reasoning and semantic cognition: basically taking big matrices and boiling them down to some kind of graph structure. In some form, that's what Tom Mitchell was doing, not just in the talk you saw-- remember, he said there's this other stuff he does, this thing called NELL, the Never Ending Language Learner. I'm showing a little glimpse of that up there from a New York Times piece on it, in the upper right. In some ways, in a sort of at least more implicit way, it's what the neural networks that Jay McClelland, Tim Rogers, and Surya were talking about do. And we thought-- you know, we had good reason to think-- that our approach was more like what people were doing than some of these others. But I then came to see-- and this was around the time when CBMM was actually getting started-- that none of these were going to work.
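Backing up to the drift-threshold idea above: here is a cartoon of the asymmetry it buys, with made-up strength values and a uniform prior over an unknown threshold standing in for the actual drift process in the published model.

```python
import numpy as np

# Toy threshold sketch for arguments like "poodles can bite through wire,
# therefore German shepherds can." Strength values are made up.
strength = {"poodle": 1.0, "doberman": 4.0, "german shepherd": 4.2}

# Hypothesis: the property holds for every animal whose strength clears
# some unknown threshold t; put a uniform prior on t over the axis.
thresholds = np.linspace(0.0, 5.0, 1001)

def p_property(target, observed_positives):
    # Keep only thresholds consistent with the examples (every observed
    # positive must clear t), then ask how often the target clears t too.
    ok = np.ones_like(thresholds, dtype=bool)
    for sp in observed_positives:
        ok &= thresholds <= strength[sp]
    return np.mean(thresholds[ok] <= strength[target])

# Weak premise -> strong conclusion is compelling; the reverse is not.
print(p_property("german shepherd", ["poodle"]))  # ~1.0
print(p_property("poodle", ["german shepherd"]))  # ~0.24
```

A symmetric diffusion along the same chain has no way to produce this direction-dependence, which is the point being made about needing a different process, not just a different graph.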
Like I said, the whole thing was just not going to work. Liz was one of the main people who convinced me of this. But you could just read the New York Times article on Tom Mitchell's project, and you can see what's missing. So there's Tom, remember. This was an article from 2010. Just to set the chronology right, that was right around-- a little bit after-- Charles had finished all that nice work I showed you, which, again, I still think is valuable. I think it is capturing something about what's going on. It was very appealing to people, like at Google, because these knowledge graphs are very much like the way, around the same time, Google was starting to try to put more semantics into web search-- again, connected to the work that Tom was doing. And there was this nice article in the New York Times talking about how they built their system by reading the web. But the best part of it was describing one of the mistakes their system made. So let me just show this to you. It's about knowledge that's obvious to a person, but not to a computer-- again, it's Tom Mitchell himself describing this. And the challenge-- that's where NELL has to be headed-- is how to make the things that are obvious to people obvious to computers. He gives this example of a bug that happened in NELL's early life. The research team noticed that-- oh, let's skip down there. So, a particular example-- when Dr. Mitchell scanned the baked goods category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes, and cookies as baked goods. But things went awry after NELL's noun phrase classifier decided internet cookies was a baked good. NELL had read the sentence "I deleted my internet cookies." And again, think of that as kind of like a simple proposition. It's like, OK. The way it parses that is: cookies are things that can be deleted, the same way you can say horses have T9 hormones. It's basically just a matrix. And the concept is internet cookies. And then there's the property of can be deleted, or something like that. And it knows something about natural language processing. So it can see-- and it's trying to be intelligent-- oh, internet cookies. Well, maybe, like chocolate chip cookies and oatmeal raisin cookies, those are a kind of cookie. Basically, that's what it did. Or no, actually it did the opposite. [LAUGHS] When it read "I deleted my files," it decided files was probably a baked good, too. Well, first it decided internet cookies was a baked good, like those other cookies. And then it decided that files were baked goods. And it started this whole avalanche of mistakes, Dr. Mitchell said. He corrected the internet cookies error and restarted NELL's bakery education. [LAUGHS] I mean, like, OK. Now rerun without that problem. So the point, the lesson Tom draws from this, and that the article talks about, is, oh, well, we still need some assistance. We have to go back and, by hand, fix these things. But the key thing, really-- I think the message this is telling us-- is that no human child would ever make this mistake. Human children learn in this way. They don't need this kind of assistance. It's true that, as Tom says, you and I don't learn in isolation either. So all of the things we've been talking about, about learning from prior knowledge and so on, are true.
But there's a basic kind of common sense thing that this is missing, which is that at the time a child is learning anything about-- by the time a child is learning anything about computers, and files, and so on, they understand well before that, like back in early infancy, from say, work that Liz has done, and many others, that cookies, in the sense of baked goods, are a physical object, a kind of food, a thing you eat. Files, email-- not a physical object. And there's all sorts of interesting stuff to understand about how kids learn that a book can be both a no-- a novel is both a story and it's also a physical object, and so a lot of that stuff. But there's a basic common sense understanding of the world as consisting of physical objects, and for example, agents and their goals. You heard a little bit about this from us, from me and Tomer, on the first day. And that's where I want to turn to next. And this is just one of many examples that we realized, as cool as this system is, as great as all this stuff is, just trying to approach semantic knowledge and common sense reasoning as some kind of big matrix completion without a much more fundamental grasp of the ways in which the world is real to a human mind, well before they're learning anything about language or any of this higher level stuff, it was just not going to work, in the same way that I think if you want to build a system that learns to play a video game, even remotely like the way a human does, there's a lot of more basic stuff you have to build on. And it's the same basic stuff, I would argue. A cool thing about Atari video games is that, even though they were very low resolution, very low-bit color displays, with very big pixels, what makes your ability to learn that game work is the same kind of thing that makes the ability, even as a young child, to not make this mistake. And it's the kind of thing that Liz and people in her field of developmental psychology-- in particular, infant research-- have been studying really excitingly for a couple of decades. That, I think, is as transformative for the topic of intelligence in brains, minds, and machines as anything. So that's what motivated the work we've been doing in the last few years and the main work we're trying to do in the center. And it also goes hand-in-hand with the ways in which we've realized that we have to take what we've learned how to do with building problematic models over interesting symbolically-structured representations and so on, but move way beyond what you could call-- I mean, we need better, even more interesting, symbolic representations. In particular, we need to move beyond graphs and stochastic processes defined over graphs to programs. So that's where the probabilistic programs come back into the mix. So again, you already saw this. And I'm trying to close the loop back to what we're doing in CBMM. I've given you about 10 to 15 years of background in our field of how we got to this, why we think this is interesting and important, and why we think we need to-- why we've developed a certain toolkit of ideas, and why we think we needed to keep extending it. And I think, as you saw before, and as you'll see, this also, in some ways-- I think we're getting more and more to the interesting part of common sense. But in another way, we're getting back to the problems I started off with and what a lot of other people at this summer school have an interest in, which is things like much more basic aspects of visual perception. 
I think the heart of real intelligence and common sense reasoning that we're talking about is directly connected to vision and other sense modalities, and how we get around in the world and plan our actions, and the very basic kinds of goal and social understandings that you saw in those little videos of the red and blue ball, or that you see if you're trying to do action recognition and action understanding. So in some sense, it's gotten more cognitive. But also, by getting to the root of our common sense knowledge, it makes better contact with vision, with neuroscience research. And so I think it's a super exciting development in what we're doing for the larger Brains, Minds, and Machines agenda. So again, now we're saying, OK, let's try to understand the way in which-- even for these kids playing with blocks, the world is real to them. It's not just a big matrix of data. That is a thing in their hands. And they have an understanding of what a thing is before they start compiling lists of properties. And they're playing with somebody else. That hand is attached to a person, who has goals. It's not just a big matrix of rows and columns. It's an agent with goals, and even a mind. And they understand those things before they start to learn a lot of other things, like words for objects, and advanced game-playing behavior, and so on. And when we want to talk about learning, we still are interested in one-shot learning, or very rapid learning from a few examples. And we're still interested in how prior knowledge guides that, and how that knowledge can be built. But we want to do it in this context. We want to study it in the context of, say, how you learn how magnets work, or how you learn how a touchscreen device works-- really interesting kinds of grounded physical causes. So this is what we have, or what I've come to call, the common sense core. Liz, are you going to talk about core knowledge at all? So there's a phrase that Liz likes to use called core knowledge. And this is definitely meant to evoke that. And it's inspired by it. I guess I changed it a little bit, because I wanted it to mean something a little bit different. And I think, again, to anticipate a little bit, the main difference is-- I don't know. What's the main difference? The main difference is that, in the same way that lots of people look at me and say, oh, he's the Bayesian guy, lots of people look at Liz and say, oh, she's the nativist gal or something. And it's true that, compared to a lot of other people, I tend to be more interested in, and have done more work prominently associated with, Bayesian inference. But by no means do I think that's the whole story. And part of what I tried to show you, and will keep showing you, is ways in which that's only really the beginning of the story. And Liz is prominently associated, and you'll see some of this, with really fascinating discoveries that key high-level concepts, key kinds of real knowledge, are present, in some sense, as early as you can look, and in some form, I think, very plausibly, have to be due to some kind of innately unfolding genetic program that builds a mind the same way it builds a brain. But as we'll hear from her, that's, in some ways, only the beginning, or only one part of, a much richer, more interesting story that she's been developing. But for that, among other reasons, I'm calling it something a little different. And I'm trying to emphasize the connection to what people in AI call common sense reasoning.
Because I really do think this is the heart of common sense. It's this intuitive physics and intuitive psychology. So again, you saw us already give an intro to this. Maybe what I'll just do is show you a little bit more of the-- well, are you going to talk about the stuff at all? LIZ SPELKE: I guess. Yeah. JOSH TENENBAUM: Well, OK. So this is work-- some of this is based on Liz's work. Some of this is based on work by Renée Baillargeon, a close colleague of hers, and many other people out there. And I wasn't really going to go into the details. And maybe, Liz, we can decide whether you want to do this or not. But what they've shown is that, even prior to the time when kids are learning words for objects-- all of this stuff is with infants, two months, four months, eight months; at this age, kids have, at best, some vague statistical associations of words to kinds of objects-- they already have a great deal of much more abstract understanding of physical objects. So I won't-- maybe I should not go into the details of it. But you saw it in that nice video of the baby playing with the cups. And there are really interesting, sort of rough, developmental timelines. One of the things we're trying to figure out in CBMM is to actually get a much, much clearer picture of this. But at least if you look across a bunch of different studies, sometimes by one lab, sometimes by multiple labs, you see ways in which, say, going from two months to five months, or five months to 12 months, kids seem to-- their intuitive physics of objects is getting a little bit more sophisticated. So for example, they understand, in some form, a little bit of how collisions conserve momentum by five months or six months-- according to one of Baillargeon's studies-- in the sense that if they see a ball roll down a ramp and hit another one, and the second one goes a certain distance, then if a bigger object comes, they're not too surprised if this one goes farther. But if a little object hits it, then they are surprised. So they expect a bigger object to be able to move it more than a little object. But a two-month-old doesn't understand that. Although a two-month-old does understand-- this is, again, from Liz's work-- that if an object is occluded by a screen, it hasn't disappeared; and that if an object is rolling towards a wall, and that wall looks solid, the object can't go through it; and that if it somehow-- when the screen is removed, as you see on the upper left-- appears on the other side of the screen, that's very surprising to them. I'm sure what Liz will talk about, among other things, are the methods they use, the looking-time methods, to reveal this. And this is one of the two main insights that I, and I think our whole field, need to learn from developmental psychology: how much of a basic understanding of physics like this is present very early. And in some sense, it doesn't matter for the points I want to make here how much or in what way this is innate, or how the genetics and the experience interact. I mean, that does matter. And that's something we want to understand, and we are hoping to try to understand in the hopefully not-too-distant future. But for the purpose of understanding what is the heart of common sense-- how are we going to build these causal, compositional, generative models to really get at intelligence-- the main thing is that it should be about this kind of stuff.
That's the main focus. And then the other big insight from developmental psychology, which has to do with how we build this stuff, is this idea sometimes called the child as scientist. The basic idea is that, just as this early commonsense knowledge is something like a good scientific theory-- the way Newton's laws are a better scientific theory than Kepler's laws, because of how they capture the causal structure of the world in a compositional way; that's another way to sum up what I'm trying to say about children's early knowledge-- the way children build their knowledge is also something like the way scientists build their knowledge. Which is, well, they do experiments, of course. We normally call that play. That's one of Laura Schulz's big ideas. But it's not just about the experiments. I mean, Newton didn't really do any experiments. He just thought. And that's another thing you'll hear from Laura, and also from Tomer: a lot of children's learning looks less like, say, stochastic gradient descent, and more like scratching your head and trying to make sense of, well, that's really funny-- why does this happen here? Why does that happen over there? Or, how can I explain what seem to be diverse patterns of phenomena with some common underlying principles? And making analogies between things, and then trying out, oh, well, if that's right, then it would make this prediction. And the kid doesn't have to be conscious of that the way scientists maybe are. That process of coming up with theories and considering variations, trying them out, seeing what kinds of new experiences you can create for yourself-- call it an experiment, or just a game, or playing with a toy-- that dynamic is the real heart of how children learn and build their knowledge, from the early stages to what we come to have as adults. Those two insights, of what we start with and how we grow, I think, are hugely powerful and hugely important for anything we want to do in capturing-- making machines that learn like humans, or making computational models that really get at the heart of how we come to be smart. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_14_Neural_Mechanisms_of_Recognition_Part_2.txt | JAMES DICARLO: I'm going to shift more towards this decoding space that we talked about, the linkage between neural activity and behavioral report. And I introduced that a bit. You just saw that there's some powerful population activity in IT. And I'm going to expand on that a bit here. But sort of stepping back, when you think about it again, what I call an end-to-end understanding, going from the image all the way to neural activity to the perceptual report, one of the things we want to do, again, is just define a decoding mechanism that the brain uses to support these perceptual reports. Basically, what neural activity is directly responsible for these tasks? And I'll come back later to this encoding side. And notice I'm putting these in this order, right? So once you know what the relevant aspects of neural activity are in IT, or wherever you think they are, then that sets a target for what is the image-to-neural transformation that you're trying to explain. Not predict any neural response, but those particular aspects of the neural response. So that's what I mean by the relevant ventral stream patterns of activity. So we start here. We work to here, and then we work to here, rather than the other way around. OK, so I'm going to try, again, to keep with the domain I set up. I talked about core recognition. I now need to start to define tasks. I'm going to talk about specific tasks that are, for now, let's call them basic-level nouns. I'm actually going to relax that to subordinate tasks in a minute. But here they are. Car, clock, cat. These are not the actual nouns. I'll show you the ones we use. But just to fix ideas, we're imagining a space of all possible nouns that you might use to describe what you just saw. And I'm going to have a generative image domain. So I now have a space of images here. I'm not just going to draw these off the web. We're going to generate our own image domain that we think engages the problem, but gives us control of the latent variables. So I'll show you that now. So the way we're going to do this is by generating one foreground object in each image that we're going to show. And we just did this by taking 3-D models like these-- this is a model of a car. We can control its other latent variables beyond its identity. So this is a car. It has a particular car type. So there's a couple of latent variables about identity here that relate to the geometry. Then there's these position-- other latent variables like position, size, and pose that I mentioned, that are unknowns that make the problem challenging. And we can then just, like, render this thing. And we could place it on any old background we wanted to. And what we did was we tended to place them on uncorrelated naturalistic backgrounds. And that creates these sort of weirdish looking images. Some of them may look sort of natural; this one looks pretty unnatural. But why would you do this? We did this because it gives us a generative space, so we know what's going on with the latent variables we care about.
And we also, when we built this-- it was challenging for computer vision systems to deal with, even though humans could do it naturally. You know, they don't have the advantage of any contextual cues here, because, by construction, these are uncorrelated. We just took natural images and would randomly put objects on them. But this was enough to fool a lot of the computer vision systems at the time that tended to rely on the contextual cues. Like blue in the background signaling an airplane-- we didn't want those kinds of things being done. We wanted the actual extraction of object identity. And again, humans could do it quite well. So that's why we ended up in this sort of maybe no man's land of image space, which is not very simple, but not ImageNet just pulled off the web. And so that's how we got there. And just to give you a sense that this is actually quite doable for humans, I'll show you a few images. I won't even cue you what they are. I'm going to show them for 100 milliseconds. You can kind of shout out what object you see. AUDIENCE: Car. AUDIENCE: [INAUDIBLE] JAMES DICARLO: Right. So see, it's pretty straightforward, right? Even though those look weird, you can do that quite well. And you know, here are the kinds of images that we would generate. This would be-- so when we think of image bags, we think of partitions of image space. These are some images that would correspond to faces. These are all images of faces under some transformations. Again, different backgrounds. These are not faces. These are other objects, again, under transformations. And we can have as many of these as we want. We call this one-- this distinction, when shown for 100 milliseconds-- one core recognition test. Discriminate face from not face. Here is a subordinate task. This is beetle from not beetle. This is a particular type of car. You can see it's more challenging. Again, we don't show these images like this. This is just to show you the set. We show them one at a time. And so let me now go ahead and say, we're going to try to make a predictive model using that kind of image space, to see if we can understand what are the relevant aspects of neural activity that can predict human report on that image space. And when I say we, I mean Najib Majaj and Ha Hong, a post-doc and a graduate student who were in the lab and led this experimental work. And Ethan Solomon and Dan Yamins also contributed to the work. So what we did was to try to record a bunch of IT activity, to measure what's going on in the population as I showed you earlier, but now in this more defined space, where we're going to collect a bunch of human behavior to compare possible ways of reading IT with the behavior of the human. This is how we started. We're now doing monkeys-- where we're recording and the monkey's doing a task. But what we did here was just passively fixating monkeys, compared with behaving humans. And as I showed you earlier, monkeys and humans have very similar patterns of behavior. So we record from IT; in this case, we were using array recording electrodes. These are chronically implanted. This shows them here. You implant them during a surgery, as is kind of shown here, down in the IT cortex. You can get their size here. There are about a hundred-- there's actually 96 electrodes on each of them. They typically yield about half of the electrodes having active neurons on them. So you get, you know, on the order of 150 recording sites.
You can lay-- we would typically lay out three of them across IT and V4, to record a population sample out of IT. And we would do this across multiple monkeys. And here's an example of the kind of data we would get. This is 168 IT recording sites. This is similar to what I showed you earlier. This is the mean response in a particular time window out of IT, similar to what I showed you earlier in that study with Gabriel. And what we do here is-- I'm just showing you to give you a feel. That's one image. Here's seven more images. And these are just the population vectors in a graphic form. But we actually collected nearly-- this is 2,560 images. This is sort of the mean response data of these 168 neurons. And now you have, again, this rich population data. And you can ask, what's available in there to support these tasks? And how well does it predict human patterns of performance on those tasks? So in this study, that's all we were asking to do. We're trying to do more and more recently. But all we were trying to do here is to say, look. One thing we observed-- even though you saw that car-- you could do car, you could do faces, and it seemed like you were doing 100%-- it turns out you're better at some things than others. So this is a d-prime map of humans. Red means good performance, high d-prime. You know, a d-prime of 3 is something like-- I don't know, psychophysicists in the room may correct me-- a d-prime of 3 is sort of on the order of 90-some, 95% correct, in that range. So these are very high performance levels when you get up to 5. Zero is chance. So 50%-- well, this is an eight-way task, so one in eight correct. The subjects were doing either eight-way basic-level tasks, or eight-way subordinate cars, or eight-way faces. And these are the d-prime levels under different amounts of variation of those other latent variables-- position, size, and pose. Don't worry about those details. What I want you to see is the color here. So look, it's tables versus-- discriminating tables from all these other objects. You do that at a very high d-prime. Discriminating beetles from other cars, you do at a slightly lower d-prime. You can see this especially at high variation-- you're actually starting to get down to lower performance. And faces-- one face versus another face, you're actually quite poor at that. You're a little bit better than chance. But it's actually quite challenging, in 100 milliseconds, without hair and glasses, to discriminate those 3-D kinds of face models. I showed you Sam and Joe earlier as examples. It's actually quite challenging for humans to do that in that domain of faces. So what I want to show you here is, you have this pattern of behavioral performance. You have all this IT activity. This is humans. This is monkeys. And what we wanted to do is say, look, we can use this pattern. This is very repeatable across humans. Can we use this repeatable behavioral pattern to understand what aspects of this activity could map to that? And again, this pattern is reliable. I just said that. And it's not as if you can predict this pattern by just running classifiers on pixels or V1. In fact, I'll show you that in a minute. But we thought there were some aspects of IT activity that would predict this. And we wanted to try to find those aspects-- so, again, this was motivated by that study I showed you earlier. So which part of the IT population activity could predict this behavior over all recognition tasks?
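For reference on the d-prime units used throughout this section, here is the standard signal-detection definition and a quick check of the performance levels being quoted. This is the two-class yes/no version; the eight-way tasks in the actual study involve more bookkeeping, so treat this as orientation, not the study's exact computation.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf   # probit transform
Phi = NormalDist().cdf

def d_prime(hit_rate, fa_rate):
    # Signal-detection sensitivity from hit and false-alarm rates.
    return Z(hit_rate) - Z(fa_rate)

# A balanced yes/no task at d' = 3 lands in the ~93% correct range
# mentioned here: with an optimal criterion, PC = Phi(d' / 2).
print(Phi(3 / 2))             # ~0.933
print(d_prime(0.93, 0.07))    # ~2.95, recovering a d' near 3
```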
We're seeking a general decoding model that would work. Here are some specific tasks. But we'd like it to work over any task that we could imagine testing humans on within this domain of taking 3D models and putting them under variation. Work over that entire domain. That was what we were hoping to do. So again, I'll briefly take you through this, because I already showed you this earlier. Again, we've previously shown that you could kind of take this kind of state space, and say, hey, can you separate images of faces from non-faces, using these simple linear classifiers, which are essentially weighted sums on the IT activity? And now we wanted to ask, could this predict human behavioral face performance, and monkey, because again, they're very similar? And not only would this class of decoding models that was motivated by the earlier work predict this task, but would it predict car detection? Would the same model predict car one versus car two? That's a subordinate task. And all such tasks. Again, over the whole domain, can you take the same decoding strategy and take the data and say, I'm going to just learn on a certain number of training examples, build a classifier, and then I'll say that's my model of how the human does every one of these tasks? And if that's true, then it should perfectly predict that pattern of performance that I just showed you earlier. And so here, again, was the working hypothesis. Passively evoked spike rates, using a single fixed time scale, that are spatially distributed, because they're sampled over IT, over a single fixed amount of non-human primate cortex. So a single fixed number of neurons. And learned from a reasonable number of training examples. So all of that is a decoding class of models that we thought might work. And if this is correct-- this is what I just said-- it should predict the behavioral data that we collect. For example, the d-prime data I just showed you, but also more fine-grained behavioral data, in principle. So I want to just step back to make it clear that it's not obvious that this should work, right? I mean, it depends-- in the audience, I get people on completely different sides of this, whether this should work or not. So, you know, one thing is, like, well, look, it's passively evoked. You heard Gabriel say that he didn't like passive tasks. And I agree with that. In the ideal world, the animal would be actively doing the task. And then you'd say, well, I'll measure while the animal's doing the task. That's going to be your best chance of prediction. But we also saw earlier that-- you know, nobody would argue that passively evoked retinal data are not going to be somewhat applicable to vision. And you know, the question is, how much do those arousal effects show up in a place like IT cortex, which is high up in the ventral stream? So you could argue both sides of this. But it's possible that attentional arousal mechanisms are needed-- to sort of activate IT, if you like-- to make this a good predictive linkage. Some people have pointed out that you need the trial-by-trial coordinated spike timing structure to actually make good predictions, that those are critical. Some people have pointed out that you have to kind of assign different parts of IT to particular roles, which is a prior on the decoding space. For instance, you could believe that, biologically, an animal's born.
There's some tissue that's going to be dedicated to faces. You have to wire those neurons downstream to that tissue. And that means you're going to restrict the decoding space, rather than just letting them learn from the space of IT as if they collected samples off of all of IT. So I think some people implicitly believe that, even if it's not stated quite that way. IT does not directly underlie recognition. You could imagine that. I mean, it's not known for sure. And some lesions of IT don't produce deficits in recognition. That's a possibility. Maybe you need too many training examples. Monkey neural codes cannot explain human behavior. You know, again-- but I already showed you monkeys and humans are very similar. So these are the reasons that you might say this is negative, and might not work. And you've probably already guessed that I'm telling you all these negatives because it turns out this simple thing works quite well for the grain of behavior that I've shown you so far. And here's my evidence of that. So this is actual behavioral performance out of humans that I showed you earlier. This is mean d-prime. This is the predicted behavioral performance of taking a classifier, reading from that IT population data that I've shown you, which gives a predicted d-prime. Here is-- we first chose a decoder. We had to match things like the number of neurons. We had to get it in the ballpark, so-- because again, there's a free variable, as I showed you earlier. There's at least one. But for now, let's think of matching the number of neurons to get you near the diagonal, so that you have a sufficient number of neural recordings to say, how well do you do on a face detection task? And then, here are all the other tasks. This is those 64 points that I showed you earlier. Here are some examples, like fruit versus other things, car versus other things. And you should see that all these points kind of line up along this diagonal, which says, wow, this is actually quite predictive, that I can take this simple thing and predict all the stuff that we've collected so far. And so let me now kind of be more concrete about what is the inferred neural mechanism that we're testing here. Well, I'll show you in a minute. This is: for each new object, we think what happens is some downstream observer, a downstream neuron, randomly samples roughly 50,000 single neurons, spatially distributed over all of IT, not biased to any compartments. It listens to each IT site. When I say listen, in this case, we think it could average over 100 milliseconds. We're not sure about this. This is just the version that's shown here. It learns an appropriate weighted sum of that IT spiking. And then it leans on about 10%. That's basically-- once you learn, about 10% of the IT neurons are heavily weighted for each of the tasks. That's just an observation that we have in our data. But this is trying to map it to neuroscientist language from these decoder versions out of IT. So what that is, is a model that says: learned weighted sums of 50,000 random, 100-millisecond-averaged, single-unit responses distributed over all IT. So a bunch of stuff in here is what your model is sort of encapsulating. That's still too long. So I made a little acronym out of that, and that's the LaWS of RAD IT decoding mechanism. So this is just to say there's a hypothesis of how everything might work, but one that now can make predictions for other objects and could potentially be falsified. So, so far, this model works quite well over these tasks.
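A minimal sketch of that decoding hypothesis: one regularized linear classifier, a learned weighted sum, per task, applied to time-averaged population rates. The data below are synthetic stand-ins with a planted class signal, not the actual recordings; only the site count is matched to the 168 mentioned.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the recordings: rows are images, columns are
# time-averaged firing rates of 168 IT sites, with a planted class signal.
n_sites, n_train, n_test = 168, 400, 200
signal = 0.2 * rng.normal(size=n_sites)   # how the two classes differ

def make_split(n):
    y = rng.integers(0, 2, size=n)         # e.g., face vs. not-face
    X = rng.normal(size=(n, n_sites)) + np.outer(y, signal)
    return X, y

X_train, y_train = make_split(n_train)
X_test, y_test = make_split(n_test)

# The hypothesis: one learned weighted sum of IT rates per task.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # well above chance
```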
In fact, the correlation is 0.92. You might look at this and say, oh, it's not perfect. But it turns out that that's about the level at which humans differ from each other. So it's passing a Turing test, in that this mechanism, read off of the monkey IT, hides in the distribution of the human population that we asked to also perform these same tasks. So it can't be distinguished from being a human in these tasks. Did you guys watch "Ex Machina"? Wasn't that a movie I saw? It doesn't pass that test. It passes just a simple core recognition test. But so that was a Turing test of this. So, OK, this is quantified here. So this is human-to-human consistency. That's the range I just mentioned that you've got to get into here to pass our Turing test on this. And that's the decoding mechanism I just showed you. There are other ways of reading out of IT that don't pass. There are ways of reading out of V4, which we recorded from-- none of the ones we've tried are able to get you to this here. That doesn't mean V4 isn't involved. V4 is the feeder to IT. It just means you can't take simple decodes off of V4 and naturally produce this pattern. And that's similar for, like, pixels or V1 representations. So lower-level representations don't naturally predict this pattern of behavior. And even some computer vision codes that we tested at the time, as you can see-- those of you who know these older computer vision models will recognize them-- didn't do this. But more recent computer vision models actually do. And I'll show you that at the end. OK. So, this is a little bit for the aficionados, to tell you how we got there. As we increase the number of units in IT, that drives performance up. So as you read more and more units out of IT, you get better and better performance. That's also true out of V4. But what I'm trying to show you here is that it's not the absolute performance that is the right thing to compare between a model and actual behavioral data. It's the pattern of performance, which we call the consistency with the humans. That's that correlation along that diagonal that I showed you earlier-- that tasks that are hard for the models are also hard for the humans. Tasks that are easy for humans are also easy for the models. And you could imagine doing that, not just at the task level, but at the image level as well. And anyway, that's what's quantified here. And you see that when you get up to around, you know, about 100-- I showed you 168 recordings out of IT. This point right there is about 500 IT features. And taking you through some things that maybe I won't have time for, that's actually how we approximate that 50,000 single IT neuron number. That's an inference from our data-- we didn't actually record 50,000 single neurons. But from these kinds of plots, we're able to make a pretty good guess that this kind of model right here would land right there-- be consistent with humans, and match the absolute level of performance of humans. And you know, the models we tried out of V4-- this is one example of them-- they can get performance. But they can never-- they don't match this pattern of performance naturally. They over-perform on some tasks, and under-perform on others. They sort of reveal themselves as not being human-like by being too good at some things, right? So that's a way to fail the Turing test. OK. Maybe I'll skip through this, it's sort of the same thing. This is about training examples.
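One simple reading of that consistency measure, sketched below: correlate the pattern of per-task difficulties between model and human, which deliberately ignores the absolute performance level. The statistic used in the actual analyses may be more elaborate (image-level patterns, noise correction), so this is just the shape of the idea.

```python
import numpy as np

def consistency(model_dprimes, human_dprimes):
    # Pattern similarity across tasks: tasks hard for humans should be
    # hard for the model, regardless of overall performance level.
    m = np.asarray(model_dprimes, dtype=float)
    h = np.asarray(human_dprimes, dtype=float)
    return np.corrcoef(m, h)[0, 1]

# Made-up d' vectors over a handful of tasks, for illustration only.
human = [4.8, 3.9, 2.1, 0.9, 3.2]
good_model = [4.5, 4.1, 1.8, 1.1, 3.0]    # tracks the human pattern
overachiever = [5.5, 5.4, 5.2, 5.6, 5.3]  # too good everywhere, flat pattern
print(consistency(good_model, human))     # high (~0.99)
print(consistency(overachiever, human))   # low despite high performance
```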
If you guys care about this, I could take you through how we -- there's actually a family of solutions in there. And I'm just telling you about one of them for simplicity. So, let me then take it down to another grain. That was the pattern of performance, and it's naturally predicted by this first decoding mechanism that we tried. But what about the confusion pattern? So not just the absolute D primes for each of these tasks, but finer-grained data, like how often an animal is confused with a fruit, or an animal is confused with a face. These are the confusion pattern data here. I'm sorry I don't have the color bars up. All I need you to see is: this is the predicted confusion pattern if I gave the machine, the IT, these ground truth labels. And this is what actually happened in the human data. And I want you to look at this and this and say, they actually look quite similar. Their noise-corrected correlation is 0.91. So we were still quite good at predicting confusion patterns. Although this did not hold up fully. We're only at 0.68 -- I say only; some people would say this is success -- we're only at 0.68 on high-variation images. So there's a failure of the model here. That should be at 1, because it's noise corrected. So there's something about this that's not quite right at predicting the confusion patterns of humans at high variation. And to us, that's an opening to push forward, right? So this is the strategy going forward: we have an initial guess of how you read out of IT. It looks pretty good at a first grain of testing. But now we can turn the crank harder. We need more neural data. We need more psychophysics, finer-grained measurements, to distinguish -- not just say IT's better than V4 or those other representations, but what exactly about the IT representation? Is it 100 milliseconds? What time scale? Maybe those synchronous codes do matter. Some of those things that I put on there earlier might start to matter when we push this even further. So the take-home here is that you do quite well with this first-order rate-code read out of IT. But now there's an opportunity to dig in and say, well, at what point do these models break down? And what kind of decoding models are you going to replace them with? And that's what we're trying to do. I've told you that IT does well at identity. But remember I said earlier on -- remember I showed you those manifolds, and said there are other latent variables like position and scale. And I said those don't get thrown away. They just get unwrapped, right? Remember that manifold picture I showed earlier? And so one of the things we've been doing recently is asking -- because we built these images, we know these other latent variables, like position and pose; that was one of the advantages of building the images this way -- how well IT encodes those other latent variables: the pose of the object, the position of the object. And to make a long story short -- let me just skip through -- IT not only has information about these kinds of variables, which is really not surprising, because others have shown there's information about those kinds of things before. But that's sort of what's on here. Everything I'm showing here -- here's IT, V4, simulated V1, and pixels.
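The noise-corrected correlation quoted here (0.91 and 0.68) is, in the usual split-half form, the raw correlation divided by the geometric mean of each measurement's internal reliability, so that 1.0 means "as similar as the data's noise allows." A sketch of one common estimator follows -- not necessarily the exact one used in this work.

```python
import numpy as np

def noise_corrected_r(model_h1, model_h2, human_h1, human_h2):
    # Inputs: flattened confusion patterns computed from two split halves
    # of trials, for the model predictions and for the human data.
    raw = np.corrcoef((model_h1 + model_h2) / 2,
                      (human_h1 + human_h2) / 2)[0, 1]
    # Internal reliability of each measurement from its split halves.
    rel_model = np.corrcoef(model_h1, model_h2)[0, 1]
    rel_human = np.corrcoef(human_h1, human_h2)[0, 1]
    return raw / np.sqrt(rel_model * rel_human)  # 1.0 = perfect up to noise
```

With this normalization, a value below 1, like the 0.68 at high variation, signals a real model failure rather than measurement noise.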
And always, everything goes up along the ventral stream for the other variables, which may be non-intuitive to some of you -- because position is supposed to be V1. But the position of an object in a complex background is better decoded in IT. That's one example. All these latent variables go up along the ventral stream in terms of their ease of decoding. But what I'm most excited about is that if you do this comparison with humans again, you get, again, a pretty decent, not quite as tight correlation between the actual measured human behavioral performance on making estimates of those other latent variables and the predicted behavioral performance out of IT. And again, much better correlations. It's not perfect. So again, there's some gap here, some failure of understanding. But much better than if you read out of V4, V1, or pixels. So this says that the representation isn't just an identity thing. It seems like this representation could underlie some of these other judgments, at least for the central 10 degrees, for sort of foreground objects, as we've been measuring here. That's -- don't worry about the details on here -- that's the upshot of what I'm trying to say with this slide. But I just wanted to put that out there so you didn't forget that you haven't thrown away all this other interesting stuff about what's out there in the scene. OK. I've sort of alluded to this a bit. I want to come back to -- this is like Marr level 3 stuff, right? So you have this idea of what you're trying to solve. You have an algorithm that's a decoder on a basis, and it looks like it predicts pretty well. It's not perfect. There's work to be done there. But it actually does quite well. Now what does that mean on the physical hardware level? That's Marr level 3. So here's how I visualize it. You have IT cortex, by which I mean AIT and CIT. It's about 150 square millimeters in a monkey. And remember I told you there was about a 1 millimeter scale of organization? I showed you that earlier. And others have shown -- I showed this earlier, too -- that there are sort of face regions. I've drawn them just for scale here, just a schematic. They're slightly bigger organizations, 2 to 5 millimeters. So I think of IT as being this sort of 100 to 200 little regions -- similar to Tanaka; this is not a new conceptual idea. The simple version would be that each millimeter does exactly the same thing -- is a feature. And if you sample off of that, you take 50,000 neurons, but they're really sampling from only about 150 IT features at the 1 millimeter scale. Remember -- I don't know if you caught that, but I showed you that 168 IT neurons predicted the pattern of human performance. I showed that a few slides ago. But I told you the real number of neurons is probably 50,000. Most of those are redundant copies of that 168-dimensional feature set. That's how we think about it. So you could imagine it's just a redundant set of about -- I like to think of about 100 features in IT, which are sampled, maybe randomly, by downstream neurons that then learn. So when you learn faces versus other things -- hey, there's lots of good information about faces versus other things in these face patches; that's how they're defined. And this downstream neuron is going to lean heavily on those neurons.
And then that would make these regions causally involved. So that doesn't mean you had to pre-build anything in here. You just learn this at a downstream stage. And you would get something that looks like it would explain our data. So we like that, because it captures that case. But it also captures the more general case. If you learn cars, you're going to sample from a different subset of neurons. But you're following the same learning rule. That's what I said earlier on. So you end up -- we think this is the initial state, this is when you learn objects -- and what we think you have post-learning is, again, about 100 to 150 IT sub-regions, each at the 1 millimeter scale, that are supporting a number of noun tasks read off this common basis here. That's the model that we like, given the kind of data that I've been showing you. The post-learning model, as we call it. So the reason I'm bringing this up is probably for the neuroscientists, to fix ideas about how we think about IT as a basis set. And I think Haim set this up nicely; he sort of implied similar things -- that somebody downstream reads from it. OK. But now we're starting to have a more concrete model, where I'm trying to be physical about it -- about the size of these regions, connecting to earlier data, how many there are. So we're gaining inference on that from these different experiments. And now, if you believe this, it starts to make predictions. Now we can do causality, right? Somebody mentioned that earlier. And so one of the things we've been doing recently is to start to silence -- look, the way I've drawn this, this bit of tissue -- this is just schematic -- is somehow involved in this task and that task, the face task and the car task. But this bit of tissue, only the face task. And that bit of tissue, only the car task. And this bit of tissue, neither. So if you believe that and you had the tools, you should be able to go in and start to silence little bits of IT. And you should get predictable patterns out of the behavioral deficits of the animal when you make those manipulations, right? Everybody follow that? Right? OK. And now the models give you a framework to build those predictions and to also estimate the magnitude of the effects that you should see. And so that's what we've been doing more recently. And I'll just give you a taste of this, because this is really ongoing. But I think it connects to what Gabriel said earlier, about how there are now tools available to do that. Oh, I put that in from an earlier talk -- I think Google has a thing called Inception. And I don't know -- was it Google? Or somebody has it -- you can't do Inception unless you're actually in a brain. So are you going to try to insert -- the reason we do this is that my student who is working on it really wants to inject signals into the brain. There's a dream about BMI, right? Could you inject a percept? To do that, you're going to need to do experiments like this, and to understand this hardware in order to interact with it. It's something we talked about earlier. And Tonegawa's lab has some cool Inception-like stuff on memory. But this is like inserting an object or a person. So to do that -- and this has been a dream for many of us for a long time -- can we reliably disrupt performance by suppressing 1 millimeter bits of IT? What we're doing is testing a large battery of tasks and a battery of suppression patterns.
So not just asking, can we affect face tasks, or one task? Let's imagine we test a battery of tasks. The idea would be that we'd have a whole bunch of tasks and we'd do every bit of IT one by one, and then in combination, and we'd get all that data and figure out what's going on, right? That's the dream. So we're trying to build towards that dream. Do you guys get it? Right. And we're motivated by this kind of idea here. So I'm just going to give you a quick tour of the tools we have to start doing this. This is our recording setup; we can localize what we're recording to a very fine grain using x-rays. So we know exactly where we're recording in IT, to about 300 micron resolution. That's why I'm putting this slide up. And what we're interested in is: if I silence this bit of IT, or that bit of IT, or that bit of IT -- if we actually do this experiment -- what happens behaviorally? Arash Afraz, a post-doc in the lab, started these actual experiments. And one of the things Arash did first was to say, let's see if we can get this optogenetic silencing tool to work in our hands. And the reason we were so excited about that is that we think that temporary, brief silencing -- a temporary lesion -- will give a much more reliable disruption of behavior than if we started by trying to inject signals, which would be our dream, but seemed too risky to us. We just wanted to ask: what does a temporary lesion of each bit of IT do? And optogenetics is cool because there's no other technique that can briefly, temporarily silence activity. You can do pharmacological manipulations, but those last for hours. So this could briefly silence bits of IT. And that's why we were excited about it. We also did pharmacological manipulation as a reference, to get started. But what we're doing is trying to silence 1 millimeter regions of IT using light delivered through optical fibers at the recording electrode, to silence the neurons there. And what Arash did was first show that you can actually silence neurons in this way. If you haven't seen optogenetics plots, this is data from our lab. What's quite cool about this, again, is that the same images are being presented. So this green line should be up here. But Arash turns a laser on right here, shines light on the tissue, and there are opsins expressed in the neurons in that local area. And you can see it just shuts the thing down; it sort of deletes or blocks this. You have the same input coming in, but you can delete the response here. And this is another example. These are some pretty strong examples. It's not always this strong. But again, you can see we can return back to normal right away, right? So this is a 200 millisecond silencing. You could go even narrower than that. So this is what we had done so far. And again, what we said was, look, this is a risky tool. It might not work at all. So Arash just wanted to test something that was likely to work. And so we picked a face task, because there was a lot of evidence of spatial clustering of faces, which you'll hear about from Winrich and which is also known in the literature. So what Arash did was pick a task of discriminating males from females. We put in our notion of invariance. It's not just doing this on single images; you have to do it across a bunch of transformations. In this case, identity is the transformation.
So you're saying all of these are supposed to be called male, and all of these are called female. And he wanted the animal to distinguish this from this. That's what he trained a monkey to do. And just to give you the upshot: we do all this work, we silence the bits of cortex, and here's the big take-home. You get a 2% deficit from silencing single 1 millimeter bits of IT cortex. Parts of IT cortex, not all of IT cortex, produce a 2% deficit. Here's the animal running at about 86% correct. These are interleaved trials where we silence some local bit of IT. You get a 2% deficit. That's true only in the contralateral field, not the ipsilateral field, for the aficionados. You might look at this 2% and go, well, that's tiny. But when we looked at it, this is exactly what's predicted by the models that we were talking about. It's right in the range of what should happen. And so this, to us, is really quite cool. This is highly significant. And now we're in a position to say, OK, these tools work. They do what they're supposed to. And now we can start to expand the task space. This result has been published recently, if you're interested in it. And here is one of the ways we're going forward: Rishi Rajalingham, the one doing those tasks in the monkeys I showed you earlier, is silencing different parts of IT. This is now with muscimol. Different bits of IT, on different tasks, lead to different patterns of deficits. That's what these dots are here. And if you go back to the same location, you get the same pattern of deficits. So this is only 10 tasks. But I think it hopefully gives you the spirit of what we're trying to do. And again, this is only muscimol, which doesn't have all the advantages of optogenetics. But this is what we're building towards. So I'm just giving you the state of the art. Our aim is to measure the specific pattern of behavioral change induced by the suppression of each IT sub-region, ideally testing many of them, and then compare with the model predictions. I'm saying there's this domain, and I want to sample the whole domain. So far I've given you only samples of tasks in the domain. But we're really trying to define the domain. And I'm going to skip through this just to give you the punchline: we do a whole bunch of behavioral measurements -- we've presented this work before; this is now up to three million Mechanical Turk trials -- and it seems to us that we can embed all objects, even subordinate objects, of the type of task that I've been telling you about, in essentially a 20-dimensional space. So there are 20 dimensions. We infer that humans are projecting to about 20 dimensions to do the kinds of tasks that we've shown here. Which is somewhat smaller, but eerily close in order of magnitude, to that 100 or so features that I've been talking about. These are some of the dimensions and how we're projecting them. Again, I won't take you through this, because I think we've already used up enough time and I want to get on to the next part. But we're trying to define a domain of all tasks where we can predict what would happen across anything within that domain. And that raises questions about the dimensionality of that domain. There are behavioral methods to do that, and we've been doing some work on that. So I'll just leave it at that.
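One simple way to ask how many dimensions a behavioral dataset occupies, in the spirit of the 20-dimensional embedding mentioned here, is the participation ratio of the PCA spectrum of a tasks-by-images performance matrix. This is only a hedged illustration -- the matrix below is synthetic, and the lab's actual method may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(20, 300))               # pretend 20 true dimensions
mixing = rng.normal(size=(64, 20))
perf = mixing @ latent + 0.5 * rng.normal(size=(64, 300))  # 64 tasks x 300 images

perf = perf - perf.mean(axis=1, keepdims=True)    # center each task's scores
eigvals = np.linalg.eigvalsh(np.cov(perf))
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()    # participation ratio
print(f"effective dimensionality ~ {pr:.1f}")
```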
And if you guys have questions, we can talk about that some more. What I want to do in the time I have left is talk about the encoding side of things, because I promised you guys I would get to this. Unless people have any more burning questions on the decoding side. So far I've been talking about the link between IT and perception. Now I'm going to switch gears and talk about the other side. So I talked about this, and that tells us that the mean rates in IT are something that seem to be highly predictive. I showed you at least one model, the LaWS of RAD IT model. But now we can turn to the encoding side and say: we need to predict the mean rates of IT. And that should be our goal if we want to explain the mapping from images to IT activity. These would be called predictive encoding mechanisms. So, now, you guys have heard about deep convolutional networks. If you haven't heard about them already, you'll probably hear about them some more. We started messing around with them in 2008. This is a model family I mentioned before, inspired by Hubel and Wiesel, Fukushima, and the whole HMAX family of models. That earlier work really was the inspiration for this large family of models with a repeating structure, out of which the modern-day deep convolutional networks grew. And so we started exploring the family in 2008. This is a slide that you've already sort of seen a version of from Gabriel, where you take an image and pass it through a set of operators. So you have filters -- these are dot products over some restricted spatial region, like receptive fields. You have a nonlinearity, like a threshold and a saturation. You have a pooling operation. Then you have a normalization. So you have all these operations happening here. And that produces a stack. Think of it like this: if there are four filters here, like four orientations, you get four images -- you have one image in, you have four images out. But if you had 10 of these, you'd get 10 out. Then you repeat this, right? And so as you keep adding more filters, this stack just keeps getting bigger and bigger. And because you're spatially pooling, it keeps getting narrower and narrower, right? So you go from this image to this deep stack of features that has less retinotopy. It still has a little bit of retinotopy. And that, you can see, maps nicely onto how people think about the ventral stream -- which is exactly why people liked it as a model. These models typically have thousands of visual neurons, or features, at the top level -- just to give you a sense of the scale at which they're run. And just to take you through it -- I guess maybe you'll hear about this, if you haven't already -- each element has a filter with a large fan-in. These are neuroscience-related things. They have nonlinearities, like the thresholds of neurons. Each layer is convolutional, which means you apply the same filters across visual space. Which is like retinotopy: there is a V1 cell that is oriented here, and there'll be another V1 cell in another spatial position -- same orientation, different spatial position. The convolutional models are just an implementation of that idea of copying the same filter type across the retina. And there's a deep stack of layers. These are all things that I think are commensurate with ventral stream anatomy and physiology.
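Here is a minimal sketch of the repeating stage described above -- filtering (local dot products), a threshold nonlinearity, spatial pooling, and normalization -- showing how one image in becomes a deeper, spatially narrower stack out. Shapes and parameter values are illustrative, not taken from any real model.

```python
import numpy as np

def conv_layer(stack, filters, threshold=0.0, pool=2, eps=1e-6):
    """stack: (channels_in, H, W); filters: (channels_out, channels_in, k, k)."""
    c_out, c_in, k, _ = filters.shape
    H, W = stack.shape[1] - k + 1, stack.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for f in range(c_out):              # filtering: local dot products
        for i in range(H):
            for j in range(W):
                out[f, i, j] = np.sum(stack[:, i:i+k, j:j+k] * filters[f])
    out = np.maximum(out - threshold, 0.0)   # threshold nonlinearity
    H2, W2 = H // pool, W // pool            # spatial pooling (max)
    pooled = out[:, :H2 * pool, :W2 * pool] \
        .reshape(c_out, H2, pool, W2, pool).max(axis=(2, 4))
    norm = np.sqrt((pooled ** 2).sum(axis=0, keepdims=True)) + eps
    return pooled / norm                     # divisive normalization across channels

rng = np.random.default_rng(3)
image = rng.random((1, 32, 32))                              # one image in
layer1 = conv_layer(image, rng.normal(size=(4, 1, 5, 5)))    # four "images" out
layer2 = conv_layer(layer1, rng.normal(size=(10, 4, 5, 5)))  # deeper, narrower
print(layer1.shape, layer2.shape)
```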
But one of the key things that those who work with these models know is that they have lots of unknown parameters that are not determined by the neurobiology. Even though the family of models is well described -- what are the exact filter weights? What are the threshold parameters? How exactly do you pool? How do you normalize? There are lots of parameters when you build these things, essentially thousands of parameters, most of them hidden in the weight structure. Which, for the first layer, would be questions like: should I choose Gabor filters? Or should I do something else -- you know, Haim was talking about random weights, right? So there are choices there. There are lots of parameters. So the upshot is, there's a big space -- that's why I call it a family of models. And how do you choose which one is the right one, so to speak? Or is there a right one? Or maybe the whole family is wrong, right? These are the interesting discussions. What I like about it is that, at least once you set it, it's a model. It makes predictions. And then you can test it. So it's at least a model. And it predicts the entire -- you know, if you start to map these, you say this is V1, this is V2, this is V4 -- it predicts the full neural population response to any image across these areas. So it's a strongly predictive model once built. That's nice. But now you have to determine: how am I going to build it? How do I set the parameters? So how did we do that? Well, there are lots of ways you could do it, and I'll tell you the way we chose. Which was to not use any neural data at all. It was to use optimization methods to find specific models -- to set the parameters inside this model class. And we chose an optimization target. This is, again, a little bit inspired by a top-down view of what the system is doing. What are the visual tasks that we suppose the ventral stream is supposed to solve? I already told you: we think it's invariant object recognition. That's what makes the problem hard. So we tried to optimize models to solve that. And essentially when we're doing that, we're doing the same thing that computer vision is trying to do, except we're doing it in our own domain of images and tasks that we set up. So there's a meeting between computer vision and what we were trying to do here. And when I say we, this is work by Dan Yamins, a post-doc in the lab, and Ha Hong, a graduate student. And what we did was to simulate, again, as I did earlier. We took these simple 3-D objects; we could render them, just as before, and place them on naturalistic backgrounds. And then we built models that would try to discriminate bodies from buildings from flowers from guns. So they would have good feature sets that would discriminate between these things. And these were essentially trained by various forms of supervision. Now, there are lots of ways you can train these models. I could tell you about how we did it and how others have done it; I think those details are beyond what I want to talk about today. It's supervised training that's probably not how the brain learns -- most people don't think so. But the interesting thing is that the end state of these models might look very much like the current adult state of the brain. And that's what I want to try to show you next. So first, let me show you that when we built these models -- this was in 2012.
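A hedged sketch of "set the parameters by optimizing task performance": search over a couple of free parameters and score each setting by held-out classification accuracy. The functions and data below are placeholders standing in for the real rendering pipeline and model family; none of the names come from the actual work.

```python
import numpy as np

rng = np.random.default_rng(4)

def build_and_score(threshold, n_filters, X, y):
    # Stand-in for: build a model with these parameters, extract top-layer
    # features, train a linear classifier, return held-out accuracy.
    feats = np.maximum(X @ rng.normal(size=(X.shape[1], n_filters)) - threshold, 0)
    half = len(y) // 2
    w = np.linalg.lstsq(feats[:half], y[:half], rcond=None)[0]
    return np.mean(np.sign(feats[half:] @ w) == y[half:])

X = rng.normal(size=(400, 50))                     # placeholder "images"
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=400))  # placeholder labels

best = max((build_and_score(th, nf, X, y), th, nf)
           for th in (0.0, 0.5, 1.0) for nf in (16, 64, 256))
print("best (accuracy, threshold, n_filters):", best)
```

The real optimization was over a far larger parameter space, but the logic is the same: no neural data enters the loop, only task performance.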
We had a particular optimization approach that we called HMO, which was trying to solve these kinds of problems that I showed you earlier, on these kinds of images. And I showed you IT was pretty well matched with humans -- its performance was almost up to humans, even with just 168 samples. And when we first built a model here, we were able to do much better than some of our previous models on these same kinds of tasks. Remember, we constructed these high-invariance tasks precisely because we knew they made those earlier models not do so well -- to push those models down. And then we had space to build a model that could do better. And we called it HMO 1.0. And then we said: now we have this model that has been optimized for performance; let's see how well it compares with neurons. Let's see if its internals look like the neural data. So here's the model we built, HMO 1.0. It's a deep convolutional network. It had four levels. It had a bunch of parameters that we set by optimization -- I'm just telling you what we optimized; I'm not telling you any of the parameters. And now we come back and say, look, we can show the same images to the model that we showed to the neurons. And then we can compare how well these populations look like that population. And so what we did first was ask: how well can layer four predict IT? That was the first thing we wanted to do -- take the top layer of this model, the last layer before the linear readout. And to do that, you might say, well, wait a minute. The model doesn't have a mapping. It has neurons simulated here -- neuron 12 or something. And there's some neuron we recorded. But there's no linkage between that neuron and that neuron, right? You have to make that map. So what we do is take each IT neuron and treat the model as a sort of generative space -- you can generate as many simulated IT neurons as you want. You just ask: let's take this neuron, take some of its data, and try to build a linear regression to this neuron. Treat the model as a basis to explain that neuron. And then test the predictive power on held-out IT data. That's what I'm writing here: cross-validated linear regression. So I'm going to show you predictions on held-out data, where some of the data were used to make the mapping. And there are lots of ways we could make the mapping, and we did essentially all of them. I could talk about that if you want. But that's the central idea. Take some of your data and ask, is this neuron in the linear space spanned by this basis set? Can I fit it well with a linear map from this basis? And here's what it looks like. Here's the neural response of one actual IT neuron, in black. This is not time; these are images. I think there are about 1,600 images here. So each black line going up and down, which you can barely see, is the mean response to a different image. And you see we grouped them by categories, just to help you understand the data. Otherwise it'd just be a big mess. Because IT neurons do -- you can kind of see -- have a bit of category selectivity. And again, this was known. This neuron seems to like chair images, but not all chair images. It sometimes likes boats and some planes a little bit.
And the red line is the prediction of the model, once fit to part of this neuron's data. This is the prediction on the held-out data for the neuron. You can see the R squared is 0.48. So half the explainable response variance is explained by this model. And again, these are predictions. The images -- even the objects -- were never seen by this model before it makes these predictions. So this is saying that the IT neurons live in a space that's quite well captured by the top level of this first HMO model we built. I'll show you some other models in a minute. Here's another neuron that you might call a face neuron, because it tends to like faces over other categories. So it would pass the test of the operational definition of a face neuron. This neuron was well predicted, again, on both its preferred and non-preferred face images, by this HMO model -- again, an R squared near 0.5. Here's a neuron where you look at the category structure and you can't really see the categories. They're still there, but you don't see these blocks. You just see that there are some images it likes and some it doesn't. It's hard to even know what's driving this neuron. But it's actually quite well predicted, I think. You don't have the R squared, but it's similar -- about half the explainable variance. Just another example. And here is a summary. This is the distribution of the explainable variance explained by the top level of the model, fitting, I think, 168 IT sites. Some sites are fit really well, near 100%. Some are fit not as well. The average is about 50%, which is shown here -- this is the median of that distribution. So the summary take-home is: about 50% of single-unit response variance predicted. And this is a big improvement over previous models, as I'll show you in a minute. The other levels of the model don't predict nearly as well. The first level doesn't predict well, the second level better, the third level better, and the fourth level the best. If you take other models -- these are some of the models I showed you earlier -- they don't fit nearly as well. Here are their distributions and here's their median explained variance. And just to fix ideas: you might think, well look, we built a model that's a good categorizer, so of course it fits IT neurons well, because IT neurons are categorizers. Well, here's a model that actually has explicit knowledge of the category. It's not an image-computable model -- it's an oracle that is simply given the category -- and look how well it explains IT. You can see it explains IT much worse than the actual model does. So this implies that the architecture puts constraints on the model that add explained variance that the simple statement "IT neurons are categorizers" does not easily capture. So that inspired us to say, OK, what about going down -- not just IT, but V4? Because we had a bunch of V4 data. So we play the same game in V4: take level three and see if we can predict V4. And here's the IT data I just showed you a minute ago, and here's the V4 data. So the V4 neurons are best predicted by the middle layer. Layer three is the best predictor of V4. The top layer is actually less predictive of V4 neurons than the middle layers. And the first layer is not so predictive.
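The mapping step described here -- fit a regularized linear map from model features to a neuron's mean image responses, then report R squared on held-out images -- can be sketched as follows. The features and the "neuron" below are synthetic stand-ins, not the actual data or fitting pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
n_images, n_features = 1600, 500
F = rng.normal(size=(n_images, n_features))          # model-layer features per image
true_w = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
neuron = F @ true_w + rng.normal(0, 2.0, n_images)   # one noisy "IT neuron"

train, test = np.arange(0, 1200), np.arange(1200, 1600)
lam = 100.0                                          # ridge penalty (illustrative)
w = np.linalg.solve(F[train].T @ F[train] + lam * np.eye(n_features),
                    F[train].T @ neuron[train])

pred = F[test] @ w
ss_res = np.sum((neuron[test] - pred) ** 2)
ss_tot = np.sum((neuron[test] - neuron[test].mean()) ** 2)
print(f"held-out R^2 = {1 - ss_res / ss_tot:.2f}")
```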
And again, the other models are now, you can see, doing relatively better. You can think of them as lower-level models, and they're getting better at V4, which is what you'd expect. But interestingly -- and this was really exciting to us -- look: this model was not optimized to fit any neural data, other than that last mapping step. All it is is a bio-inspired algorithm class -- the neuroscience view of the feed-forward class in the field -- plus tasks that we and others hypothesize are important, that the ventral stream might be optimized to solve, plus an actual optimization procedure that we applied. And that leads to neural-like encoding functions at the top and in the middle layers. So this leads to funny things, like asking, what does V4 do? The answer here would be: well, it's an intermediate layer in a network built to optimize these things. That's the way to describe what V4 does, according to this kind of modeling approach. Now I want to point out, this is only half of the explainable variance. So it's far from perfect. There's room to improve here. But it's really dramatic how much improvement we got out of these kinds of models. And if you take this back to the big picture -- what did we do here? We have performance of a model on high-invariance recognition tasks. This is what we've been trying to optimize. And what we noticed is that if you plot -- these dots are samples out of that model family; these black dots are the other models I showed you, control models that were in the field at the time -- and this axis is the ability of the top level of any of the models to predict IT responses, the median variance explained of single IT responses -- you see there's a correlation here. If you're better at this, you're better at predicting that. And all we did was optimize this way, which we think of as like evolution or development. So we're not fitting neural data. We're just optimizing for task performance. And that led, in 2012, to the model I just showed you, which explained about half of the IT response variance. OK, so it looks like it's continuing up this way. So if you believe that story, then if we can optimize further on these kinds of tasks, maybe we can explain more variance. And it turned out we didn't actually need to do that ourselves, because, as I said, computer vision was already working on this. And they've got a lot more resources. They were already doing it. They were already better than us at this. So here's our HMO model. This is now Charles Cadieu, a post-doc in the lab. These were models that came out at the time -- this is Krizhevsky et al.'s SuperVision, and an ICLR 2013 model. They were better than the model that we had built. You know, we were in a restricted image domain; there are lots of reasons why we could say they're better. Regardless, they were better at our own tasks than the models that we had built, right? So they were already ahead of us on the tasks that we had designed. And so they were up here, and then up here. And if you follow that prediction, that means these models might be better predictors of our neural data, right? These guys don't have our neural data. All they're doing is building models to optimize performance on tasks.
But we could take their features and, with our neural data, play the same game. And they actually explained our neural data better than our own model explained our own data. So this is a nice statement, because it isn't even from our own lab: just continued optimization for those kinds of tasks leads to features that are good predictors of the IT responses. And that's what's shown here. Charles took this further and analyzed it in more detail. So here's a summary of what I presented in this second half: IT firing rates, read as feature-based, learned object judgments, naturally predict human and monkey performance. This is the LaWS of RAD IT. I picked a particular model -- a 100 millisecond read on this time window, 50,000 neurons, 100 training examples. That's one particular choice of decode model -- a current decode model that fits a lot of our data, but not all of our data. And we also want to get finer-grained data. The inference is that this might be the specific neural code and decoding mechanism that the brain uses to support these tasks. That's what we'd like to think. But now we're trying to do systematic causal tests. And we talked a lot about trying to silence bits of IT as one example of that. The tools are still not where we'd like them to be, but you see we're making progress there. The second thing I showed was that optimization of deep CNN models for invariant object recognition tasks led to dramatic improvements in our ability to predict IT and V4 responses. I showed you our model, HMO. But the convolutional neural networks in the field have already surpassed our predictive ability on our own data. And so the inference is that the encoding mechanisms in these models might be similar to those at work in the ventral stream. And now there's a whole area where you can start to think about doing physiology on the models, so to speak. And that problem's almost as hard as doing physiology on the animal, except that you can gather a lot more data. And this is allowing the field to design experiments to explore what remains -- what's unique and powerful about primate object perception. So within core object recognition, or perhaps extending out of it, I think that is what people are now trying to do. So, big picture for the future: I've talked about this LaWS of RAD IT. Can we perturb here and get effects there that are predictable? Can we predict, for each image, for each encoding model, and for the optical manipulations? We talked about that. Dynamics and feedback are something that we're interested in, but I haven't talked much at all about them. I think that's a good discussion topic; I can tell you how we're thinking about it. We have some efforts in that regard. I talked on the encoding side about these deep convolutional networks that map from images. But the dashed lines mean they're only 50% predictive. In both of these cases, they're not perfect, right? So there's work to be done there. And one of the really exciting things here is how these models learn. The supervised way of learning these models is almost surely not what's going on in the brain. So finding less supervised, biologically motivated learning of these models is the next step, I think, for much of the field. But what's nice is to have an end state that is much better than any previous end state we'd had before. So that sets a target of what success might look like.
And you know, maybe we can think about expanding beyond core recognition. We can talk in the question period about that. When is the right time to keep working within the domain of core recognition that is set up, versus expanding beyond it? Because there are lots of aspects of object recognition that I didn't touch on here, and that comes up in the questions. I think there's lots of work to be done within the domain, but there are also interesting directions that extend outside of that domain. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_15_Winrich_Freiwald_Primates_Faces_Intelligence.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. WINRICH FREIWALD: So my talk is going to be mostly about faces. And in many ways I'm going to connect to what Jim DiCarlo was talking about today and what Nancy talked about today. Preparing for this, I just thought that I should say a few things about primates and intelligence, and how face recognition might be connected both to the species we are studying and to the overall question of intelligence. And I thought the most appropriate thing to do at MBL would be to start with this kind of creature. So you might have seen in the journal Nature last week that the genome of the octopus was sequenced. It was really heralded in the public press as finally proving their intelligence. The argument is that there are 33,000 genes, 10,000 more than humans have. That's of course not a very strong argument -- there are plants with 45,000 genes. So that really doesn't tell you very much about intelligence. But among those genes were lots of genes that are important for development of the brain. And there is very high heterogeneity in certain gene families that control the development of the nervous system. So it's one example of how we might better understand the intelligence of other creatures in neural terms, even in creatures like the octopus, which is very difficult to study. The point I would like to stress about the octopus is that there's really nothing social about the species. Almost simultaneously with this paper, there was one report about sexual reproduction in one particular species of octopus where it's not clear if it's more violence or affection. But this is really the one exception to an otherwise pretty much completely non-social life. So in the octopus you have the egg stage and then these larvae, which hatch very early on. There's no interaction with mom or anything like that. They go to the surface of the ocean and they start feeding and try to grow as fast as they can, because only a few of them are going to survive. Then reproduction is a very scary enterprise. In many octopus species, the male has to be careful not to be eaten by the female in the process. When it's successful, the male stops eating. He's going to die even before most of the youngsters hatch. And all that mom is going to do is basically make sure that fresh water is delivered to the eggs. And that's it in terms of social life. So you can be very intelligent like an octopus and -- who knows, Gabriel just mentioned extraterrestrial life -- maybe other species out there might also not be that social. So there doesn't necessarily have to be a connection between sociality and intelligence. The second thing is, you can be very social, get a warm fuzzy feeling of being with others, and get lots of protection being in the group, but following your instincts in this way doesn't necessarily make you very smart. So I don't want to argue against the connection between sociality and intelligence in the form of social intelligence. But we have to be careful: there's no necessary connection between the two.
However, for primates there is this idea, the social intelligence hypothesis, that what really made primates so intelligent is their sociality. So let's consider a little bit what the arguments are. It was most strongly put forward by Nick Humphrey in '76, but there are similar precursors to this idea. So who are these primates? The primates are a small group of mammals with about 400-plus species. They're very diverse. You can have very small primates, just 30 grams, all the way to 200 kilogram animals. They evolved starting 65 to 85 million years ago; there was this mass extinction, and so you have high diversity within the mammals starting from this point on. So they have certain things in common with other mammals, but they're also very special in many ways. All of the primate species are social. They're not [INAUDIBLE] social, but they are social. They develop very slowly. So it's really very different from the octopus. A lot of investment is made in the offspring. There are not very many offspring. And the lifespan of these animals is pretty long -- the octopus lifespan is three to five years. They're very visual, unlike many other mammals, which are very olfactory-oriented. They have binocular vision, many have color vision, so vision is very important for primates. And they have, on average, larger brains than other mammals have. Our understanding of the anatomy of primates and what makes it special is actually very rudimentary, but there are a few points that should be of interest. So if you look at mammalian brains -- obviously they're very, very complex brains, and not only the primate [INAUDIBLE] ones -- the main factor really is body mass. So the bigger an animal is -- this is a plot of body weight versus brain weight, and you can see there's a log-log relationship between the two. That's something that you find all over the animal kingdom. But if you compare primates to other, non-primate mammals, you can see that in primates there is a larger increase of brain mass with body mass. And if you count the number of brain neurons, it increases more steeply with body mass than it does in other mammals. This is obviously a very crude measurement. There are others. If you look for brains of roughly the same weight, you can again see that in primates you have many more neurons than in non-primates, just from these two examples here. So there seems to be something different in the organization. There are other measures you can look at. For example, neuron size increases with brain size in rodents. So a larger brain in rodents is not necessarily one that has many more neurons; the neurons are just getting bigger. And in primates this is really not so much the case. The size of the neuron pretty much stays constant even as the size of the brain changes. Or, how much white matter do you actually need per neuron to connect brain regions? In rodents, apparently, the fiber caliber increases with brain size. So again, your brain might grow just because the anatomy of the basic element requires more space. But in primates that's not the case. If you get more white brain matter, that's likely because the connectivity is more complex. Primate brains also fold faster with increasing size than rodent brains do. So these are all just very coarse indications that maybe there's something special about the primate brain compared to other mammalian brains.
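The log-log relationship mentioned here means brain mass scales as a power law of body mass, so a straight-line fit in log space recovers the scaling exponent; a steeper slope for primates is the claim on the slide. A small sketch follows -- the data points are invented purely for illustration.

```python
import numpy as np

body = np.array([0.03, 0.5, 5.0, 60.0, 200.0])   # kg, hypothetical species
brain = np.array([0.002, 0.02, 0.12, 1.3, 3.0])  # kg, hypothetical

# Fit log10(brain) = slope * log10(body) + intercept.
slope, intercept = np.polyfit(np.log10(body), np.log10(brain), 1)
print(f"brain ~ body^{slope:.2f}")  # steeper slope = brain grows faster with body
```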
So along with the anatomy -- I mentioned that primates have forward-facing eyes; they can do binocular vision, and they have color vision. They have skulls with a large cranium; that's something that makes them special among mammals. They're also special in other ways that are important. If you think about embodied cognition -- and you don't have to buy into all these points -- obviously if you have a hand as complex as ours, which we share with many primate species, there are lots of things you can do. And that requires you to be able to control it. And this gives you a power to interact with the environment that other animals might not have. So the shoulder is more mobile; there's an opposable thumb in many species. And then in the face, there are changes that you might already have seen here. If you go up to the more complex animals, the snout region becomes increasingly reduced. And I will tell you later why this might be important. So these are anatomical specializations in primates. Sociality is very important as well. There are four main organizational principles of sociality in primates. The dominant one is the second one here, called the male transfer system. It's a polygamous, multi-male organization. This is very important for the social life of primates, because what it means is that social behavior has to be complex. So there can be cooperation -- like grooming, defense, and hunting, which all the animals of a troop might engage in. But at the same time there's competition for food, mates, and dominance in hierarchies. And it's a function of the complexity of the social environment. Primate social life was beautifully described by Dorothy Cheney and Robert Seyfarth in this wonderful book, Baboon Metaphysics. And I'm just going to quote from that. They studied baboon monkeys in the wild, and here's what they have to say: "The domain of expertise for baboons, and indeed for all monkeys and apes, is social life. Most baboons live in multi-male and multi-female groups that typically include eight or nine matrilineal families." Which means that the females stay in the group, they found families, and those stay constant over a long period of time. "They have a linear dominance hierarchy of males that changes often, and a linear hierarchy of females and their offspring that can be stable for generations. Daily life in a baboon group includes small-scale alliances that may involve only three individuals and occasional large-scale familial battles that involve all of the members of three or four matrilines. Males and females can form short-term bonds that lead to reproduction, or longer-term friendships that lead to cooperative child rearing." "The result of all this social intrigue is a kind of Jane Austen melodrama in which each individual must predict the behavior of others and form those relationships that return the greatest benefits. These are the problems that the baboon mind must solve, and this is the environment in which it has evolved." Most of the problems facing baboons can be expressed in two words: other baboons. And so this is really important. Again, if you're social, you don't necessarily have to be very smart. You can be very smart and not be social. But there's something special, apparently, about primates that links our intelligence to our sociality. So again, the Social Intelligence Hypothesis -- what works in its favor is that primates have large brains.
And in primates, apparently, group size is correlated with brain size across different species -- and not home-range size. The alternative hypothesis is that maybe you have to forage in more complex environments. But comparing proxies for the complexity of social life and physical life, there's a better correlation of brain size with social life than with physical life. The complexity of an individual's social relationships increases exponentially with group size. And groups are not small -- we're going to get back to this point in a little bit. Baboons and other primates know their peers' dominance rank and social relations. For everything that I mentioned on the previous slide, there is good evidence from behavioral work that primates actually know something about it. This social knowledge contrasts with surprising cases of ignorance outside the social domain, even when it's something as important as a predator. Cheney and Seyfarth also studied vervet monkeys. And what they observed is that for vervet monkeys, one of the main predators is the python. And the python, when it crawls in the sand, leaves behind a trail. And there is absolutely no indication that the vervet monkeys make the connection between these trails and the presence of the python. Which is really striking -- you would imagine this is the first thing that they would have to learn. So there are cases where they actually follow a trail into the bush and are very surprised to find a python there. Another example is leopards. In this environment, leopards are of course predators who also feed on the vervet monkeys. Leopards have a way of putting the carcasses of animals they've hunted up into the trees, to protect them from larger predators like lions. So the presence of a carcass would actually indicate that there is likely a leopard around. And again, the vervet monkeys don't make that connection. If there is a carcass there, they're not particularly scared of it. This doesn't mean that they're ignorant in general. They even follow alarm calls from other species, as to whether an eagle is approaching or cattle are approaching. So it's not that they're generally dumb in this way, but there's really this very big contrast between all the details that they know about their social world and the obliviousness that they can show for non-social factors. Then we have specializations in the brain -- we're going to talk about this -- for processing social stimuli. And then there's actually evidence that females who have better social abilities are less stressed and have better reproductive success. And so this all works to say that if you are socially smart in a baboon environment, or in the environment of many primate species, you actually have better reproductive success. So there's a good argument to be made that your social intelligence will be passed on to the next generation. This is how social intelligence might improve. I think there's one important point that's often not made. And that's that if you become smarter and smarter at interacting with your physical environment, your physical environment does not change very much. But if you are interacting with a social environment, and getting smarter and smarter at interacting with that social environment, the elements in your social environment you're interacting with are also getting smarter and smarter.
So you're actually setting off an arms race, where you're not only improving your situation by getting smarter -- you have to get smarter in order to keep pace with the others who are outsmarting you. And so you can see how there could be a connection between sociality and intelligence: this arms race does not occur for physical interactions, but it does for social interactions. You have to be able to better predict the next move of someone else in your group, and you have to know something about that individual, for you to be successful. Now, there are arguments against the Social Intelligence Hypothesis. In particular, we're ignorant about many other species. There are other social species -- hyenas, for example, have complex societies -- but hyenas have not been studied as much as monkeys have. We also don't really know, mostly for this reason, whether primate societies are more complex than those of, for example, whales. And this of course would be a crucial conjecture of the Social Intelligence Hypothesis. Then, within the primate order, there are actually some other correlates that do predict brain size very well. Within the primate order, social learning, innovation, and tool use are strongly correlated with brain size, and not with group size. So you could imagine a scenario where the evolution of basic social intelligence is something that's common to primates, but then if you go to different species within the primate order and ask why they became so smart -- like orangutans or chimps, who can use tools -- it really might be the tool use that is of more importance than the sociality. So I have a movie here that actually illustrates these two different hypotheses. You see social interactions here in a group of Tonkean macaque monkeys. You can see the facial displays; you can see that they are attending to each other. And you might think that they are trying to figure out what's actually going on here. And here's the alternative hypothesis. This is tool use. You can see this guy just invented a cool tool, a nose pick. And so it's anyone's guess what's more important to your intelligence: being able to read the social significance of other individuals in your troop, or your ability to invent a nice nose pick. So the last point I wanted to make is a question. Are the primates' abilities in social knowledge really intelligence, or are they more like idiot-savant-like abilities -- a unique specialization that they're good at? The argument that Cheney and Seyfarth made is the following. The knowledge that they have should actually be true knowledge, and not just learned associations. And the reason has to do with the complexity of the social environment. If you have 80 different individuals, which is the typical case for these baboon monkeys, you have 3,160 pairs of animals and 82,160 triads. It's going to be virtually impossible for you to learn all these different pairwise relationships and then behave intelligently based upon them. Second, these relationships can change very fast. So it would not be very smart to try to make a list of all these pairwise interactions and then act upon that. No single behavioral metric seems to be necessary or sufficient to recognize associations like matrilineal kin. Human observers are apparently not very good at predicting this if they don't know the animals very well personally.
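The combinatorics cited here check out: with 80 individuals there are C(80, 2) pairwise relationships and C(80, 3) triads.

```python
from math import comb

n = 80
print(comb(n, 2))  # 3160 pairs
print(comb(n, 3))  # 82160 triads
```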
Then you might think, well, maybe it's not literally a list -- maybe you don't really learn that much and you just apply a simple rule. And that also doesn't seem to work very well, because social relationships like friendships are intransitive. If A and B are friends, and B and C are friends, that doesn't mean that A and C necessarily have to be friends. Other relationships, like family relationships, are complex; they're not symmetric. If A is the mother of B, that means that B is not the mother of A. So there's a more complex structure there as well. And then finally, there can be simultaneous membership in multiple classes. And again, for you to be able to keep track of this, you'd better have a cognitive model of what's going on, rather than just a list of associations that you learned. It's very difficult in experimentation to prove that it's not association, but I think these are very good arguments for considering that these primates actually have active knowledge of their social environment. There's one example where you can make the point very nicely. And this is the story of Ahla. Ahla is a baboon monkey, and she was actually living with farmers in South West Africa. There was a habit at the time of replacing the dogs that were herding goats with baboons. So you can see Ahla sitting here. You can see her here adopting some of the behavior of the goats. She's licking salt here, which is something that baboons would naturally not do. She would continue to engage in social behaviors that are typical for baboons -- she would groom the goats, for example. But the most amazing thing about her, which is a little hard to see here, is that she's carrying one of the little yearlings and bringing it to its mother. The description of what this animal was doing was that when the goats were brought home, and sometimes the mother goats were separated from their offspring, she would go manic and try to pair them up, and she would not stop until she was finished. And this would even happen as multiple goats were calling for their yearlings, and vice versa. And the farmers said that they themselves were not able to tell apart the adult goats or the yearlings, or to know any of the pairs. But this was her world. This was the social environment that she was in, and she would structure it according to her cognitive demands. So the point is that primates have intricate social knowledge. They know about the status of individuals, like their age or their gender. They know about the interactions of individuals -- they recognize these very easily, like grooming and mothering. And then, based on these observations of the different individuals in their social world, they build these cognitive structures -- friendship, kinship, and hierarchy -- which have an interesting, complicated structure to them. All of this is rooted in the concept of the person. And this is very important. As I'm going to be talking about face recognition, I have to emphasize that these are two different things. You can recognize a person from their face, but being able to recognize a face doesn't mean that you know who it is -- who is behind the face. The person concept would include something like this: it's a juvenile female monkey, it's the daughter of X, and so on and so forth. So this is knowledge that we have.
It's actually been shown in rhesus monkeys, the monkeys that we work with, that they have this person knowledge. So why do we study faces? For us, faces really are the ideal intersection between object recognition, the study of which Jim DiCarlo talked about, and social cognition. As Jim was alluding to yesterday, vision is really important in primates. About a third of the primate brain is thought to be involved in visual function. So this is a lot. And this is testament to the fact that there's a lot of information to be gathered from the outside visual world, but also that it's computationally difficult to gather it. And so Jim was explaining some of the computational challenges that object recognition has to solve yesterday, and we'll come back to some of his points a little later. So what is an object? Jim was saying that it's the basic unit of cognition. And just very quickly, what it actually is, is more than a collection of features. The Gestalt rules of perception actually emphasize this. If you have proximity of elements, you group them together. If these elements share similarity, you group them together into larger entities. If there's good continuation, you group these local elements together into lines. If there's common fate, you group them again together into larger entities. And something similar is true for faces. If you have different face parts in the wrong organization, and you now put them together correctly, you can suddenly recognize a face. So there's a larger-scale organization that goes beyond just being a collection of features. And maybe something similar is going on for higher order cognition, which I'm sure other people are going to talk about. In physical interactions, we infer causality from just the sequence of events. Or in social interactions, like in Heider-Simmel movies, you see a complex social story unfold even when there are just simple geometric shapes moving around. So this creation of higher order representations, I think, is essential for object recognition. It's a constructive process that the brain imposes on the pieces of information it gets from the eyes. It's not just a collection of features. It's kind of the basis of symbolic representations. It can create meaning; especially if you think about the face, if it's the face of someone you know, that's very meaningful. And it makes information actionable. And these are really, I think, the important links between object recognition and social cognition, and faces are smack in the middle of this. So I already showed this movie. I'll use it again to emphasize the social communication that's taking place here. You can see the facial displays here. The older male who's chasing the younger animal is making these facial displays. By the way, most of you will never have seen a Tonkin macaque, and still you can understand what's going on there. This is something very special, again, that you don't have in all animals, these facial displays. And Charles Darwin, in 1872, was actually again one of the first people to notice that you use your face to express your emotional state. Otherwise your emotions are private to you, but you use body language and facial language to express your emotions. And oftentimes you do it even if you don't want to. It just happens automatically. And that's not possible in all animals. So if you are a fish or a frog, there are lots of really cool things you can do.
You can sit on the front porch and enjoy the day. So there are lots of things that you can have in common with primates. But facial communication really requires something more that's very mammalian specific. In mammals you actually have musculature in the face that is not attaching from bone to bone, but is attaching to the skin. And so if you look at these two rats here, where their whiskers, I hope you can see it here, are labeled at the ends, you can see they're actively exploring each other's faces in a somewhat sensory fashion. That's possible because they can move their whiskers, because of this musculature. And that's a specialization that's becoming more and more refined in primates. So in rhesus monkeys and chimps and humans, we have 23 different facial muscles. They're becoming more and more flexible. I mentioned before that the snout region is increasingly reduced in primates. So you have some simpler primates where there's still a strong snout, which is limiting the ability of the face to move. But the more complex the primate is getting, the more flexible these muscles become and the more expressive the faces become. So the face now becomes richer and richer with social signals that one can read out. And in rhesus monkeys, which are shown here, you have a fixed set of facial expressions that, again, for a system that can analyze them, carry very important information about the emotional state of another animal. Primates are also very interested in faces. I'd very much like to show this movie, which is showing a three-day-old macaque monkey. And I'll tell you what the point of this study is. You can see that he's attending very closely to the face of the experimenter. Of course, if there were bananas you might think he would also be attending to those, so this isn't proof that there is this specialization for faces that Nancy was alluding to before, but it's at least intuitive. The second reason why I like to show this movie is exactly what happened right now. You're all getting really excited about this absolutely adorable little critter, right? And I've seen this movie now hundreds of times and it's still like this. It's still very emotionally charged. So here's the third reason. The experimenter is making facial movements here. You can see the infant is getting really excited about it, it's getting very active. And now he's reproducing these facial movements as best as he can. There's a specific facial imitation that's happening in human babies, I think, for three months. It's happening in these rhesus monkeys for two weeks. And you can see that there is an intricate connection between what they're perceiving and what they're acting out, in an automatic fashion. But this emotional part I think is really important. It's just that at a certain point you can't control these things. Faces really get very deep into your emotional and social brain automatically. And so one of the lines of research in my lab is to try to figure out the circuits that make that possible, and then to use this to get an inroad into social brain function beyond face perception. So among the signals that faces are sending, you recognize Charles Darwin, so there's identity; there's social communication; there are emotional responses; and there's also gaze following. So the direction that the eyes are looking, we follow automatically. We can control this later, but there's this initial automatic response. Here's one very nice illustration from a British TV show.
You have people wearing these glasses, actually it's a large backdrop of people wearing these glasses, where eyes are drawn on the glasses looking in one direction. And so you know that these are not real eyes, but what's happening is your attention is drawn constantly to this upper right region. And it's getting annoying over time, because you know that there's nothing there. You know that they're not really paying attention there. But automatically your attention is drawn there, and then you're going back again, and your attention is drawn out there again. And so this is another thing that comes from the face that gets deep into your attentional control system. So social perception doesn't stop with faces, but faces are the most important visual sources of social information. We get gender and age, of course identity, and things like perceived trustworthiness or attractiveness from just a very brief look at the face. And then there are these dynamical signals, like mood and overt direction of attention, that we also get from the face. So how does this all work? Jim was already explaining some of the challenges of object recognition to you. And so here are some of the challenges. First of all, in a social scene like this one here, lighting conditions can sometimes be non-optimal. And the first thing for you to do, to analyze the facial signals which are in this scene, is to localize where the faces are. And I'm going to tell you a little bit about what we understand about the mechanisms of that. Then, once you know where the faces are, you want to analyze them further; you want to know who these individuals are. And I just realized that the images that I had for this, which are of course also taken from The Godfather, might not be the best. Where is the other picture of this individual here in this display of these five faces? Upper right. And then there's another individual, there's Don Corleone, and there's another person down here, seen from two different directions. And if the lights were down a little bit more, you could see this better. The cool thing is that we have a way of relating these two pictures to each other, knowing that they are from the same person, even though physically, on a pixel-by-pixel basis, pictures of two different people seen from the same direction can actually be much more similar to each other. And so we'd like to figure out how the brain is doing that, achieving object recognition, in this case face recognition, in a manner that's invariant to transformations that are not intrinsic to the object. This is just a reminder that face recognition actually is very difficult. This is of course just made up, from Curb Your Enthusiasm, but there's a condition that many of you will have heard about, prosopagnosia. And to a prosopagnosic person, who is face blind, the social world might look like this. A prosopagnosic has great difficulty telling one individual from another. This is at least the most typical condition. And you can imagine that your social life would be really difficult, and your enthusiasm about socially interacting would really be curbed, if all the individuals looked like this and all looked the same. So there must be something about the neural mechanisms that's very precise. So what's the neural basis of face recognition? The story really starts with Charles Gross many years back, in the late '60s, early '70s. He was recording from the inferotemporal cortex.
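To make the pixel-by-pixel point concrete, here is a minimal sketch (the image variables are hypothetical placeholders): a raw Euclidean distance in pixel space typically ranks a view change of one person as a bigger change than an identity change at a fixed view, which is exactly why invariant recognition is hard.

```python
import numpy as np

def pixel_distance(img_a, img_b):
    """Euclidean distance between two equal-sized grayscale image arrays."""
    return np.linalg.norm(img_a.astype(float) - img_b.astype(float))

# Hypothetical images (e.g., loaded with PIL and converted to same-shape
# grayscale arrays):
#   person1_frontal, person1_profile, person2_frontal = ...
#
# The typical finding this illustrates:
#   pixel_distance(person1_frontal, person1_profile) can easily exceed
#   pixel_distance(person1_frontal, person2_frontal),
# even though the first pair shares identity and the second does not.
```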
He was showing pictures of monkey faces, other social stimuli like the monkey hand, he would scramble the face, and then look at the responses of cells. And he was the first to find a face selective neuron. Here's one. These vertical lines are the action potentials the cell is firing. This is the period of time the face was shown. And this is the period of time the control object, the hand, was shown. And you can see the cell is responding selectively to the face and not to the hand. So this was a very nice finding. It actually took him some time to convince himself that he could publish it, because he thought people would not believe it. He was recording in an anaesthetized animal. But luckily, he did publish it, many, many years later. And so this is the first evidence that there is a specialization in the brain for faces. He found many other cells that liked other things than faces, and I think people thought that the face cells were intermingled with cells for other objects. So this was the view. This is the side view of the macaque brain. This is the superior temporal sulcus, the one big sulcus in the monkey brain. And all these symbols here indicate positions where people found face selective neurons. The thought was they were intermingled with object recognition hardware. It's basically the view that there is a big IT cortex where everything in object recognition can happen. And yes, of course you would have some cells that are face selective, you would have other cells that are non-face selective. And the mixture and the complex pattern of activity really is what gives you the identity of the object. Then Nancy used fMRI to discover face selective areas. So first, these are views of face areas. We now know multiple face areas, which she was talking about before. So here are different slices through this. And the thought from these images really was that, no, maybe within this large expanse of object recognition hardware there might be very specialized regions that are really there selectively to process faces. And so you get the FFA. And so the questions you would ask are: first, is this really a region that's devoted to face processing and face processing only? Are these regions really modules devoted to face processing, or is it just the tip of the iceberg, based on your statistical analysis, such that this region just looks a little bit more face selective than the neighboring regions? And second, do monkeys also have these localized face areas like humans? And you've got the answer already. Yes, they do. And then, what is the distribution of cells within these regions versus outside? So this is really the research Doris Tsao and I engaged in many years ago. We used fMRI on macaque monkeys, the same technology as in humans, slightly different coils. And this is the picture that we got, very consistently across different animals. Here in the temporal lobe, you have six face selective regions that you find at anatomically specific locations; there's some variation from one individual to the other. But with the exception of the most posterior area, you actually find all these areas in all individuals, on both hemispheres. There are also three areas in the prefrontal cortex, which are a little harder to find. But the one in orbitofrontal cortex is actually as reproducible as the ones in the temporal lobe. So, yes, monkeys have localized face areas like humans. And as Nancy was alluding to, we actually have quite a bit of evidence by now that these systems might be homologous.
It's very, very difficult to prove that they are homologous. But all the evidence we have so far is really pointing in this direction. So how selective are these face patches? What Doris and I did was to lower recording electrodes into these face areas and record from cells inside these fMRI-identified areas. And I'm going to show you a movie of one of the first cells we recorded from one of these regions. It's actually a video we took of a control monitor. So it shows the same thing the monkey sees. The quality is not great because we used an actual video camera to take this image. In addition to what the animal saw, you will also see this black square, which is indicating where the animal looked on the screen, but the animal did not see this. And you're going to hear clicks, if everything works fine, when an action potential is fired. Anyway, here's the quantification. With 96 different stimuli in this image set, 16 faces and 80 non-face stimuli, this is the average response, which is normalized between minus 1 and 1. And you can see that the biggest responses of this particular cell are of course to the 16 faces and not to any of the control objects. You can see, though, that there are some stimuli here in the gadget category, for example, that are eliciting responses that are quite respectable relative to the faces. But really the biggest responses recorded were to the faces. So then we color coded this so you have a response vector of the cell, where red is now symbolizing response enhancement and blue is symbolizing response suppression below baseline. And the advantage of using this format is you can now stack all the responses you get from all the cells that you're recording, day after day after day, from this one face area. And you get a population response matrix. And the way that this works is cell number is organized from top to bottom, picture number from left to right. And you can see very quickly that most of the cells here are either selectively enhanced or selectively suppressed by faces. There's a small group here, something like 10% of the cells, where it's not so clear what they are doing. But if you do the population average, you can see much bigger responses to all the faces than to non-face objects. If you look more closely at what pictures are eliciting these intermediate responses, these are things like clock faces, apples, pears, things that have physical properties in common with faces. So you can kind of fool the system into giving a partial response. And this is one clue to what this area might be doing. It seems to be doing a visual analysis of the incoming stimuli to try and figure out if these are faces or not. So these are cells in the middle face patches. I was actually going over this pretty fast, but I'm going to use this later quite a bit. And so let's wind back. We have one posterior area here, two middle face areas, the middle face patches, and then three anterior ones. I'm mostly going to talk about this one here, AL, and this one here, AM, in addition to the middle face patches. So we think that actually this is another automatic face recognition feat. We can't stop feeling sorry for these peppers. They've been just cut in half. And so they seem to be screaming, and then you know they are OK, but still you feel like something really bad just happened. And so we can't stop having these inferences about peppers when they look like faces.
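A common way to quantify this kind of selectivity is a face selectivity index on the mean responses. Here is a minimal sketch on synthetic data; the formula is a standard one in this literature, the cutoff FSI > 1/3 (a 2:1 response ratio) is a commonly used criterion, and the toy numbers are made up:

```python
import numpy as np

def face_selectivity_index(face_resp, nonface_resp):
    """FSI = (F - N) / (F + N) on mean evoked firing rates.
    +1: responds only to faces; 0: no preference; negative: prefers non-faces."""
    f, n = float(np.mean(face_resp)), float(np.mean(nonface_resp))
    return (f - n) / (f + n)

# Hypothetical population response matrix: rows = cells, columns = the 96
# stimuli (columns 0-15 faces, 16-95 non-face objects), entries = rates in Hz.
rng = np.random.default_rng(0)
rates = rng.gamma(shape=2.0, scale=5.0, size=(120, 96))
rates[:, :16] *= 4.0            # make these toy cells face-preferring

fsi = np.array([face_selectivity_index(c[:16], c[16:]) for c in rates])
print(f"{np.mean(fsi > 1/3):.0%} of toy cells exceed FSI = 1/3")
```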
And one reason could be that we have this specialized circuitry that's just getting active with the right features, even if you know these are not faces. OK, so when the face cells were discovered by Charles Gross, this really fell on very fertile ground. And I should just discuss some of the implications. David Hubel and Torsten Wiesel had just discovered orientation selectivity a few years before. So it was a big jump from early processing, where you could see how the selectivity of cells was getting more complex, from concentric representations to elongated ones, from simple cells to complex cells, where complex cells are as selective as simple cells but don't really care about exact spatial location, all the way up to the opposite end of the visual system, and now you find a face selective neuron. Jerome Lettvin had just coined the term grandmother neuron, which some of you brought up yesterday. The idea is that there should be one neuron in your brain, or this is the hypothetical situation he came up with, one neuron in your brain that's firing if and only if you see your grandmother, no matter what she's wearing or which direction you see her from. The neural correlate of you perceiving your grandmother is the activity of this one neuron. And there were other concepts, like Jerzy Konorski's gnostic unit, that made the same point. Then Horace Barlow came up with the idea that maybe it's not one cell but multiple cells, a sparse representation of pontifical cells, a few of them at the top of a processing hierarchy, and that's actually how we recognize faces. And then of course there's the opposite view of Donald Hebb, who talked about cell assemblies, representations carried by large assemblies of cells, or Karl Lashley, who talked about mass action and was actually completely against functional specialization. If you look at a plot like this, I think one of the things you want to emphasize is that these cells really don't fall into any of these categories. You can have cells that are very, very face selective, but they don't have to be very sparse. They will appear sparse if you poke them over and over and over with non-face stimuli, because they're not going to respond to those. But within the domain of faces, they're going to respond to pretty much all faces. There are differences between these different cells. I'm going to come back to that as well. But it's one example where we can actually ask these deep questions about what the neural code is, in a quantitative manner, by focusing on the right stimulus and the right place to look at it. So we have some evidence that monkeys, like humans, have face regions, and the monkey face patches appear to be dedicated, domain specific modules. The practical implication of this is that now we have unprecedented access to functionally homogeneous populations of cells coding for one high level object category. And we know this category, we can make stimuli. And we can modify the stimuli, sometimes in parametric fashion. And so we can have very deep insights into how these cells are actually processing faces, how they are extracting properties from these faces. And we can do causal tests and actually show whether these cells are involved in face recognition behavior. And we're just going to go over this very quickly. This is work of Srivatsun Sadagopan; he actually gave me this picture of himself. It combines the front view and a profile view. The logic is very simple. We wanted to inactivate one particular region, ML.
I'm going to tell you in a second why ML. Meanwhile, the monkey would be engaged in a task like this, where it has to find a face in a visual scene. The visual scene that we constructed looks a bit like this. It's displayed on a touch screen monitor, so the animal's free to move around. It has to find the face in the scene, and the scene is composed of a pink noise background in which there are 24 different objects embedded. And the target object, in this case the face, is going to vary in visibility across 10 different levels. We would have other tasks where a monkey body was included, which you will not be able to see here, but there is a monkey body here, or where the monkey was looking for a shoe. Then we would infuse muscimol, which is a pharmacological agent that inactivates cells, along with the contrast agent gadolinium, which you can measure with MRI. And this yellow region here is the face area, and this white region is the actual injection site that we used. And so this gives us a way, for every experiment, to control whether we are inside the face area or outside. And we can use the outside injections as controls. What was found is shown here. So we have a psychometric curve. In normal behavior, you're getting better and better at finding the face in the scene as you increase its visibility. If you inactivate, you get reduced face detection behavior. I should emphasize that we are only inactivating one face area out of the 12 in the temporal lobe. We are only inactivating one hemisphere. And as Jim DiCarlo was emphasizing yesterday, there is retinotopy at this level of processing. So the animal can actually use a scanning strategy, going in one direction, to overcome this deficit. And so likely this effect would be much stronger if we had inactivated both hemispheres or had controlled precisely for the eye movements. But we were aiming for natural behavior here. And for the controls, the bodies and shoes, there is no effect there. We put in lots of controls, and we went to the next stage of the behavior. We did the injections outside, as I mentioned; it's very specific to inactivation inside the face area that the most basic of face recognition abilities, face detection, is impaired. And so here's a way you might visualize it. One way to explain this behavior would be that the visibility of a face like this would actually, with inactivation, look something like this, where it's going to be harder to detect. The second way we can take advantage of this is that we now have access to individual cells, so we can actually ask more precise questions about how they're processing faces. And actually the inactivation study was motivated by earlier work we had done on the selectivity of these cells for features that should be relevant for face detection. This is what Shay Ohayon did when he was a grad student with Doris. He's actually now a post-doc with Jim. And again, the question is how you can detect faces even when the lighting conditions are very difficult. There's beautiful work from Pawan Sinha that's emphasizing that coarse contrast relationships in the face are very good heuristics to do that. The reason is that the 3D structure of our face stays the same even when the lighting conditions are changing. And so no matter where light is shining from, the eye regions, because they're recessed relative to the nose and forehead, are typically darker than the nose and forehead.
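Here is a hedged sketch of how one might quantify such a shift by fitting a psychometric function to detection performance at the 10 visibility levels; the logistic form, the parameter values, and the data points are illustrative assumptions, not the study's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, midpoint, slope, lapse):
    """Logistic psychometric function with guessing and lapse terms.
    Chance performance assumes picking the target among the 24 objects."""
    guess = 1.0 / 24.0
    return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-slope * (x - midpoint)))

# Hypothetical data: proportion correct at 10 visibility levels.
visibility = np.linspace(0.1, 1.0, 10)
p_control = np.array([0.05, 0.08, 0.15, 0.30, 0.55, 0.75, 0.85, 0.92, 0.95, 0.97])
p_muscimol = np.array([0.04, 0.05, 0.09, 0.15, 0.30, 0.50, 0.65, 0.75, 0.82, 0.86])

for label, p in [("control", p_control), ("muscimol", p_muscimol)]:
    (mid, slope, lapse), _ = curve_fit(
        psychometric, visibility, p, p0=[0.5, 10.0, 0.05],
        bounds=([0.0, 0.1, 0.0], [1.0, 100.0, 0.5]))
    print(f"{label}: curve midpoint visibility = {mid:.2f}, lapse = {lapse:.2f}")
```

A rightward shift of the fitted midpoint after muscimol would capture the "face has to be more visible to be detected" deficit described above.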
And so he found that in human psychophysics you have 12 heuristics like this: forehead brighter than left eye, forehead brighter than right eye, nose brighter than mouth, and so on and so forth. Twelve of these characteristics that together can actually allow you to detect the face. And in fact, the face detector on your cell phone is using a very similar strategy as well, trying to find the coarse contrast relationships in the scene. So what Shay did was start with a real face, parse it into 11 parts, and then randomly assign 11 different luminance values to these 11 different parts, and change this rapidly. And then the analysis would look like this. No matter what the overall pattern looks like, he's going to look for a particular contrast relationship, say the forehead versus the left eye. And he's going to ask: is the neuron responding differently in the conditions where the forehead is brighter than the left eye versus the conditions where the left eye is brighter than the forehead? And you can do this for all pairwise combinations of the 11 different parts. And for these 55 different combinations, we can mark by arrows the predictions from human psychophysics. So human psychophysics told us 12 of these contrast pairs are going to be important. It also told us which polarity was going to be important for detecting a face: again, forehead brighter than eye, or eye brighter than forehead. OK, and this is what we actually found. What this shows is a population diagram of all the cells that Shay found, and it was half of the cells that he recorded from, that showed selectivity for some of these contrast features and for some of these contrast polarities. What's plotted here upwards is, for one contrast polarity and one particular contrast pair, the number of cells he found that were selective. And as you go through the entire diagram, you see that there are only very few examples of contrast pairs where different cells like different polarities. Here, for example, you have more than 60, 70 cells that all like one polarity and not a single one that likes the opposite polarity. And this is true for all these polarities. So it's a very consistent pattern. Second, we can explain all the human psychophysics preferences here. Not only are these important dimensions for these cells, but in all these cases the cells care about the exact same polarity that you would have predicted from human psychophysics. In addition, there are other contrast pairs that apparently don't matter as much in human psychophysics, but that these cells also care about. So they seem to be using these coarse contrast features. And again, they're very useful for face detection. And this fits with the behavior: we know the area is involved in face detection with stimuli where it's actually hard to make out the detail. Then Shay did a control, and I thought this was really the coolest thing ever. He's a computer scientist, and so of course he knew about databases and how to use them. And he would say, OK, can we actually fool these cells into responding to non-face stimuli that comply with the rules of this coarse facial contrast? So here are some examples. This is just a pattern where there are some dark regions where the eyes of a human face might be, and so on and so forth. This is a pattern that has only one of these 12 contrasts correct.
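A minimal sketch of the logic of that contrast-pair analysis on simulated data; the trial counts, the statistical test, and the toy cell are assumptions for illustration, and only the 11-part, 55-pair structure comes from the description above:

```python
import numpy as np
from itertools import combinations
from scipy.stats import mannwhitneyu

# Each trial: a face parsed into 11 parts with random luminances.
rng = np.random.default_rng(1)
n_trials, n_parts = 2000, 11
lum = rng.uniform(0, 1, size=(n_trials, n_parts))

# Toy cell that prefers part 0 ('forehead') brighter than part 1 ('left eye'):
resp = 10.0 + 8.0 * (lum[:, 0] > lum[:, 1]) + rng.normal(0, 1, n_trials)

for i, j in combinations(range(n_parts), 2):    # all 55 part pairs
    brighter = lum[:, i] > lum[:, j]
    stat, p = mannwhitneyu(resp[brighter], resp[~brighter])
    if p < 0.01 / 55:                           # Bonferroni across the 55 pairs
        sign = ">" if resp[brighter].mean() > resp[~brighter].mean() else "<"
        print(f"pair ({i},{j}): selective, preferred polarity part {i} {sign} part {j}")
```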
But also in human faces you can find some: when a person is smiling or wearing glasses, many of the contrasts that should be there are actually not in the face. So there are some very contrast-correct faces, and there are some faces that are not very contrast-correct. And now you can ask how the response of the cells in the middle face patch changes as you increase the number of correct contrasts, either in face or in non-face stimuli. And the answer is this. If you increase the number of correct contrasts in the face, the cells respond more and more. If you change the contrasts of non-face objects, the cells don't care. So there is something else besides coarse contrast that the cells care about; they're not easily fooled into responding to things that clearly aren't faces, even when the coarse contrasts are correct. And we could actually have predicted that something like this should happen from an earlier study. The first study we did, where we took advantage of the fact that in a face selective area we can record from face cells over and over again and that they have similar properties, was a study where we looked at the effect of part and whole. One of the central features in the psychophysics of human face perception is that you can get information from the face without any detail, just from the gist of the face. An example is again from Pawan Sinha. If you have a blurred face of a familiar individual, like some of you might recognize Woody Allen here, of course with the glasses it's a little bit cheating, but you can recognize him. And the other examples in the study were people who don't wear glasses, so you can recognize a famous face just from the gist of it. You don't need the details. On the other hand, we can process details. We can focus on details. So how do these two things relate to each other? What we did was construct a face space, a cartoon face space, based on very, very simple geometric shapes. So these faces are just made out of ovals, and triangles, and lines, very simple geometric shapes. But if they are put together, they actually look like faces. Now we can parameterize this face space; we can vary certain parameters. So we had faces that change in aspect ratio. They go from the Sesame Street character Ernie here to Bert. We have pupil size, like no pupil here, very big pupils. We have inter-eye distance here. So these eyes are close together, almost in cyclopean fashion, or they can be very far apart from each other, stretching to the outside of the face, and so on and so forth. And we would now randomly change these features, all these different feature dimensions, randomly choosing a new value every time we show this face. It looks a bit like a cartoon character who is trying to talk to you. But the way we analyze this is very simple. We just asked, no matter what the other features are, does the firing of the cell change as we're changing the first feature dimension? Then we asked this for the second dimension, the third dimension, and so on and so forth, for all the different dimensions. What we found is shown here for one example cell. So we had 19 different tuning curves. And of these 19 different tuning curves, four are significantly tuned. For this particular cell it was face aspect ratio: it didn't like Ernie, it liked Bert. It liked the eyes very close together, not far apart. It liked the eyes a little bit narrow, not wide. And it liked big irises, not small ones.
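A minimal sketch of that tuning analysis on simulated trials; the number of levels per dimension, the statistical test, and the toy ramp cell are assumptions, while the 19 independently randomized feature dimensions come from the description above:

```python
import numpy as np
from scipy.stats import kruskal

# Each trial presents a cartoon face whose 19 feature dimensions are drawn
# independently from a few discrete levels.
rng = np.random.default_rng(2)
n_trials, n_dims, n_levels = 3000, 19, 7
features = rng.integers(0, n_levels, size=(n_trials, n_dims))

# Toy cell with a ramp along dimension 3 (say, inter-eye distance):
resp = 5.0 + 2.0 * features[:, 3] + rng.normal(0, 2, n_trials)

for d in range(n_dims):
    # Tuning curve for dimension d: mean response at each level, averaging
    # over the random values of all the other dimensions.
    groups = [resp[features[:, d] == lev] for lev in range(n_levels)]
    curve = [g.mean() for g in groups]
    h, p = kruskal(*groups)
    if p < 0.05 / n_dims:
        print(f"dimension {d} significantly tuned, curve = {np.round(curve, 1)}")
```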
What's very typical for how the cells are processing these features are these ramp-shaped tuning curves. More than 2/3 of the tuning curves have this ramp shape. Which means that these cells are relaying the information that they are measuring almost in one-to-one fashion. This is not what the cells are actually doing, it's just a metaphor, but it's almost like they're taking a ruler, measuring eye distance, and relaying this feature in almost one-to-one fashion in their output. Another implication is that most of your coding capacity is actually at the extremes, because many cells have big responses there and many cells have small responses. So most of the capacity is there. That's the range where caricatures live. And oftentimes we are better able to recognize individuals based on caricatures than on the individuals themselves. So the middle face patches are causally and selectively relevant for face detection. The cells are virtually all face selective. Based on these two findings we actually suggest, and it's a little like Nancy said, when you put out a strong claim like this you get backlash, but we do think that there are modules that are there for face processing and face processing only. The gain of the tuning curve is modulated by the presence of the entire face. There's this ramp-shaped tuning, which is very useful. The cells are sensitive to contrast relations, which is very useful for face detection. So we can really get mechanistic about understanding face recognition. It's not just that we can say, OK, these cells are responding more to faces; we can say why they're responding more to some faces than to others. In fact, you can predict from the cartoon results how the cells are responding to pictures of actual people with all the fine physical detail. And so at the level of the middle face patches we already have some of the requirements for a face recognition system. We have mechanisms for face detection, we have some encoding of facial features, and we have encoding of configurations. Nancy said I should talk about this, so I'm going to talk about this. Sebastian Moeller was a wonderful grad student with Doris and me. He asked the following question: are the face patches connected to each other or not? If you look at the overall organization, these face areas are very far apart from each other. From the most posterior to the most anterior is one inch, a third of the entire extent of the primate brain. They live in different cytoarchitectonic environments. So you could imagine that maybe the connectivity is mostly local. On the other hand, they are all interested in faces, and so you would imagine that maybe there are specialized connections between them. The way we addressed this was with microstimulation inside the scanner. We would first image the face areas. We would then lower an electrode into one of the face areas here, record from cells to make sure it's face selective, and then use this electrode to pass a current through it inside the scanner. Passing a current through an electrode is going to activate cells. That in turn is going to change blood flow and oxygenation, things we can pick up in the scanner. And so yes, if this worked, you should get a swath of activity around your stimulation site.
But if the cells at your stimulation site have projections that are strong and focal enough to drive downstream neurons, you might also find activation at spatially remote locations, and you can then see how these locations are related to the face areas. So here is a computer-flattened map of the brain of one macaque monkey. The green outlines indicate the extent of the face areas. We placed our stimulation electrode in one of the face areas, and this is the map we got from microstimulation versus no microstimulation. There's no visual stimulus there; this actually works during sleep, in complete darkness. So yes, we get a swath of activity around the stimulation site, and we get multiple spatially disjunct regions that are activated. So they're strongly driven by the cells in this region, and these overlap with the face areas. And this was found very consistently across different face areas. If you stimulated outside, you also got this patchy pattern of connectivity, but now it's outside of the face system. And so this is the picture that we got: yes, these face areas are actually part of a network of face areas that are strongly interconnected with each other. There's now data from retrograde tracer studies, where we find that 90% of the cell bodies that are labeled after an injection inside a face area are inside other face areas or in the same face area. So it's a surprisingly anatomically specialized and closed network. So what's happening in these areas? And again, my movie isn't going to work. In this area AL, which is more anterior, also virtually all of the cells are face selective. But you have a property emerge here which you didn't have before, and that is mirror-symmetric confusion. It's something that we did not expect. We're still puzzled by it. We have no explanation why it's happening. But in this area you have cells that like a profile view, and if they like one profile view, they also like the opposite profile view. In this region here, as I mentioned initially, some of the cells did not really seem to be face selective; it's a small percentage. But actually these cells are selective for facial profiles. If they like the right profile, they don't like the left. If they like the left, they don't like the right. In AL, this is being confused. And then if you go to AM, you have cells that respond to all faces. It doesn't matter where they are, doesn't matter who they are, doesn't matter how big they are. And there are other cells that also don't care where faces are and how big they are, but they care exquisitely about identity. So they can be very, very finely tuned to identity, in particular to people that the animals never see in real life. So there seems to be a computation going on from here to here, where in Jim DiCarlo's conceptual framework you could imagine that there's a manifold that's now becoming sort of flatter, more like an explicit representation. And for some reason, creating this has to go through this mirror-symmetric confusion. I just want to highlight: we meant to touch upon the question of whether a face area should do different computations from non-face areas. Actually, my intuition about this was quite the opposite. I thought they would likely do the same computations, or hopefully the same computations, as outside the areas, just on different material. So why not in other, non-face areas? Why wouldn't you want to mix these cells? We have one study that's a little too complicated to explain here that gives some clues.
But some computational work from Joel Leibo, who was a grad student with Tommy, actually gave some clues to that. So Joel and Tommy were thinking about invariance. And Tommy told me he's going to talk about this at some later point in the course. There are different kinds of transformations, easy ones and difficult ones. The easy ones are affine transformations. We're just shifting something in space or in size, or we rotate it in the plane. And if you learn how to correct for such a transformation from just three dots of light, you can apply it to any image that you will ever see. So this is relatively easy. But then there are non-affine transformations that are actually changing the picture, and they are very difficult. If you change your facial expression, for example, or if lighting conditions are changing, or if you're turning your head in depth, this is a non-affine transformation. So it's not predictable from just three dots. But you can learn something there that could tell you how a picture would look under this non-affine transformation. And one of the insights from Joel was that if you learn this non-affine transformation on a particular object category, let's say faces, you actually have learned nothing about another object category, like cars. That was actually quite surprising to me, but this could be a reason why you might want to take all the cells that have to learn representations across one transformation and put them all in one location. The second insight they had, and I think it's still very surprising to me that it actually works so easily, is to give a computational account of the system that I just described to you in qualitative terms. So we have three levels of processing. We have a front end where cells are useful for face detection; they're all very face selective. So you could think of this like a three-level processing hierarchy, where level one is like a face filter that's just going to tell you whether there's a face or not. At the top level you want identification. And I didn't show the examples; again, I hope with a connection to the monitor I can show you the actual movies. You have some cells that are very, very finely selective for facial identity. With pattern readout techniques you can read out identity extremely reliably. If you now have something like a Hebbian learning rule, maybe Tommy is going to explain this to you more, you actually get something pretty magic. You do get invariance at level number three, which is kind of what you wanted and might not be surprised by. But as a byproduct, you are getting mirror symmetry at level two. And that's something you didn't stick into the system; it just happens, not like magic, but with an explanation for why this happens, out of very general assumptions about the system. So the point I want to make here is, this particular model could be wrong, but it shows how knowing something about the overall organization of the system might actually reveal underlying relationships that you might not think about. So the fact that there are three levels of processing, and not four or five or six, might actually impact whether you find mirror symmetry or not, or whether you find mirror symmetry at one level or another level. And you don't necessarily get this automatically, just putting any processing system together. Because I was mentioning facial motion, I would like to give a brief vignette of that. And so far I was emphasizing transformations along this direction.
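Here is a minimal sketch of the view-pooling idea behind such a three-level account; it is not the actual model from this work. Depth rotation is stood in for by cyclic shifts, which form a group, so exact invariance of the pooled signature can be demonstrated; getting the mirror-symmetry byproduct at the intermediate level would require machinery beyond this sketch, such as learned projections over the view orbit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_views, n_templates = 240, 8, 20
step = n_pix // n_views                  # shifts 0, step, 2*step, ...

def view(x, v):
    """Hypothetical 'view v' of a face: cyclic shift by v*step pixels,
    standing in for rotation in depth."""
    return np.roll(x, v * step)

templates = rng.normal(size=(n_templates, n_pix))   # stored 'template faces'

def signature(x):
    # Level 2: S-units compare the input with each stored view of each template.
    s = np.array([[view(t, v) @ x for v in range(n_views)] for t in templates])
    # Level 3: C-units pool (max) over all views of the same template face.
    return s.max(axis=1)

face = rng.normal(size=n_pix)            # a novel face, never a template
sig_a = signature(view(face, 0))         # one view
sig_b = signature(view(face, 5))         # a different view
print(np.allclose(sig_a, sig_b))         # True: the pooled signature is view-invariant
```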
But you can see that at least at two levels of processing there are actually two face areas here, one lateral on the STS, and the other one deep inside. And so one of the questions we had was, what's going on here? How are they different? One way you might think about this is, again, connecting faces to social perception. There are some faces out there that are not really faces. The faces of dolls are just one example. Physically they are faces, but you can actually tell that they are not really real agents. And people like Thalia Wheatley are wondering about questions like why dolls are creepy. So there is an expectation that a face should belong to a real agent. And there are different clues that can give it away. If the face is on top of a body, it's more likely an agent than just an artificial stimulus. If a face is moving, that's another clue it's an agent. And this will change fundamentally how you interact with it. Your interaction with a doll is likely going to be very different than with a baby. And so again, objects and their meaning are making them actionable in different ways. And we have to understand what the circuits are that actually make this possible. So one way to look at this, again, is to think about these facial displays. I showed the Tonkin macaques, and Clark Fisher, an M.D.-Ph.D. student in the lab, was actually addressing this question. And so he made movies like this one here. Luckily there's no sound, so we can actually play them. These are movies of macaque monkeys making facial movements, all different kinds of facial expressions. And we then also have stills that are just changing from time to time. And we have controls of toys, which the animals also know, that are either moving or jumping every second from one state to the other. And we would ask, as we had in an earlier study, are these areas responding to this motion differently than to the static images? So here's what he found. He had six different face areas he was looking at. If you're looking at static form selectivity, we're just reproducing the way the areas were found, just with a different stimulus. So all six areas respond more to faces than to objects. If we now compare moving faces to static faces, all the areas are responding more to the moving faces than to the static ones. There are some quantitative differences, but overall the same pattern: more responsive to moving than to static stimuli. If you now compare, on the right side, the modulation by moving objects versus static objects, you can see that all the areas, or almost all, also have a slight advantage for moving objects over static objects. There seems to be a general motion sensitivity there. But if you look at the interaction of shape and motion, you can see that all the areas are selectively more enhanced by face motion than by non-face motion. But they all look pretty similar. So now you can actually wonder: if you have a contrast like this, moving versus still, there are a couple of things that are different. Is it really about motion, or is it just about the content? If you just show a picture every second, you can say, well, there's less content there, therefore you might have more adaptation, therefore less response across the board. Is it about update frequency, a fast update versus a slow update? So what Clark did was create another stimulus. If you think about creepiness, it's actually a little bit creepy. It's a scrambled version of the motion that's shown here.
It shows the same frames of the movie, but just randomly ordered. So if anything, the motion energy in this stimulus is now higher than in this one here. And we can now look at the contrast of those two, and also the contrast over here. And now what he finds is actually something where the face areas are qualitatively different. He finds two face areas here which are responding more to the natural motion than to the scrambled motion, and three face areas that are responding more to the scrambled motion than to the natural motion. So they show opposite preferences. What we think is going on in these areas here, remember also the benefit for facial motion over static faces, is that they just like a fast update of content, ideally in a way that's not predictable. If you show me something new, I'm going to respond. And if it's something that I can't predict, even better. So this is what these guys are doing. But these ones here, they seem to be really sensitive to facial movement and to the naturalness of facial movement. And that was not all. These areas are located deeper inside the STS, more dorsally, and these areas are located more ventrally. So there's an organization here, and he discovered a new face area that we didn't know about before. It's a seventh face area, which he called MD, which is really like a face motion area. There are lots of reasons why we're excited about this. I mentioned the link between face perception and agency interpretation. This is one possible link; there are more. It might be a second face processing system. I'm not going to go through all the evidence, and it's only indirect, but this area might not be connected to the other ones. I told you before that the six face areas are intricately integrated into one network. This area never showed up in the stimulation experiments, therefore it might be separate. And this is kind of nice because it fits very nicely with the human situation. In the human brain you have the posterior STS face area, which is exquisitely sensitive to facial motion. Actually, you often don't even get it for static faces. But it's very sensitive to facial motion, and actually Nancy has a beautiful study on that. And this area, by several accounts, is not like the other face areas. So it seems to be a specialization. That's another reason why we think these systems might be homologous to each other. Just a cool thing at the end: who can recognize this actor here? Show of hands. OK, who can recognize him now? So facial motion gives away a lot of things, like identity. Jack Nicholson has very typical facial movements. So it's not just agency, it's not just facial expressions, it's also identity and lots of other things that facial motion can give away. So we actually don't know yet what these areas are doing. So, my summary, and I'm sorry I'm going into lunch. We can do fMRI on macaque monkeys just as in humans. We can apply this to lots of domains; with attention studies, we found new attention areas. Here we applied it to face processing, and we find face selective areas for face processing. Recording from, microstimulating, and inactivating these regions supports the notion that these are likely modules that are selective for processing faces and faces only. These are interconnected into a face processing network. It looks like all these areas have different functions and specializations. Now, fMRI experiments are notoriously underpowered in terms of the number of different dimensions they can test.
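A minimal sketch of how such a scrambled-motion control and a crude motion-energy check might be constructed; the function names and the difference-based motion index are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def scramble_frames(frames, seed=0):
    """Temporally scramble a movie: the same frames (identical static
    content), in random order, so frame-to-frame change is typically
    higher than in the natural sequence."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(frames))
    return [frames[i] for i in order]

def motion_energy(frames):
    """Mean absolute frame-to-frame difference, a crude motion index."""
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs))
```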
So if we call them face areas, that is not to say that they're all doing the same thing; they likely all have different functions, and likely sub-regions with different functions. So again, there's no contradiction here with the view of a fine organization. Then there's a seventh face area which doesn't seem to be connected. It could be a separate, second face processing system. We have evidence for processing that we can now understand in computational terms. And this is one way that we can link, sometimes causally, sometimes correlationally, the activity of single cells at different levels of organization to a very complex social behavior. And that's, again, I think a very cool opportunity to have in the domain of social cognition: you can actually control stimuli very well, and because faces are so powerful at getting into your social brain, you can likely take this approach deeper and get insight into actual social intelligence beyond face perception this way. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_16_Matt_Wilson_Hippocampus_Memory_Sleep_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. What I'm going to be talking about is some of the fundamental work that has been done trying to understand the neural basis for memory and spatial perception and cognition, which comes largely from behavioral and electrophysiological studies of the brain system that's shown here, the hippocampus. And as I was mentioning before we started, this year's Nobel Prize was given to John O'Keefe, who discovered the properties of individual neurons in the hippocampus by applying these methods for recording the discharge of single cells by putting little wires into the brain, so-called extracellular recordings. You put in a wire. The tip of the electrode can record the electrical discharge of cells. You measure the discharge of cells as animals move around in space. And you try to figure out what this part of the brain does. Now, where the observation part of science comes in in this case is interesting, because it relates to another aspect of neuroscience, and that is the relationship between neurobiology and behavior. And this is an approach known as the neuroethological approach: studying brain systems in the context in which they actually evolved and were used. And so that could apply to the use of song as a mechanism of communication in birds. It can apply to finding prey in the dark, when it comes to sound localization in owls. In the case of rodents, O'Keefe appreciated the fact that the hippocampus in rodents really has a primary role in spatial navigation, that animals that have damage to the hippocampus have problems with spatial navigation. And the hippocampus had been studied up to that point using animals that were head-fixed. So you take a little rat. You fix its head. And the reason for fixing its head is largely convenience, so that when you place these little electrodes into the brain, the animal doesn't move. If the animal moves, the electrode moves, and you can't get good recordings. So you need to keep the preparation fixed. So you fix the animal's head. Then you figure out ways to actually study the system given the constraints of the methodology. And that involved, largely, before John O'Keefe in 1970, using methods of classical conditioning. So you might be familiar with basic learning theory. How do we learn how to do things? Well, it's basically chaining together stimulus-response associations. You see something, you do something, you get rewarded for doing that. You're more likely to do that again in the future. This is basic Pavlovian conditioning. I ring a bell, you get food. You associate the bell with the food. And so that was the thinking: that all of cognition can be built up from basic stimulus-response associations. But there was a movement around the time that O'Keefe was doing this work that proposed that that was insufficient, that simple stimulus-response learning was insufficient, that there was some kind of internal foundation upon which learning was applied. And this was the so-called cognitive theory of learning and memory.
And the hippocampus was posited to be the site of one property of this cognitive learning as applied to space. And the observation was actually a fairly simple one. If you take a rat and you want it to learn something, that is, you want it to learn to associate a cue with food, or you want to train it to go over and press a lever for food, you do this in an environment; you put it in a box. Well, suppose you take a rat and you put it in that box, let's say, the day before, and just let it wander around. It does nothing. It just explores space. You take it out. And now we take two rats, one that's been in the box before and one that has not been in the box before, and the rat that has been in the box before will learn faster than the animal that has not been in the box. And you say, why would just passive exposure to that box enhance its learning? This phenomenon is known as latent learning. The animal had learned something that it could then apply to this new learning, even though the exposure was not instructive. It wasn't rewarded for doing anything; it just explored. And so this idea, that there was some sort of latent capacity to enhance learning, is what motivated the study of the hippocampus in the context of non-head-fixed recording. And so here was O'Keefe's real insight. In fact, the paper that was cited for the Nobel Prize was the paper in which he first recorded from these cells, and all he did was just take the animals out of the ear bars. So he took the animal out of the ear bars and just let it run around on a tabletop, just like this, actually. It was a table at University College London about this size, where the rat just kind of wandered around. And he made observations: oh, here's a cell that fires when the animal goes over to the left-hand side. Very descriptive, but the key insight was to study the hippocampus when rats are doing what rats normally do, explore space. So it was this ethological approach. And what he discovered was the properties of cells in the hippocampus. And this is a cross-section. The hippocampus is found here. It's in the medial temporal lobes. The name hippocampus comes from the Greek for seahorse, because in humans it sort of looks like a seahorse. The regions here, the terminology CA for these different fields of the hippocampus, comes from cornu ammonis, or Ammon's horn. That's because it looks like a ram's horn. So if you think about it, it's like little ram's horns right here: put them in the temporal lobes, move them in there, and that's where your hippocampus is. Just like this. And if you make a cross-section, slice through that, this is the circuit. Information comes in from across the brain and converges in the primary input to the hippocampus, which is called the entorhinal cortex. This is what the other half of the Nobel Prize was awarded for: the husband-and-wife team that had discovered and identified the properties of cells in the entorhinal cortex, these so-called grid cells that seem to carry, it seemed like, Cartesian-like spatial information that's conveyed to the hippocampus. So stuff comes in from the rest of the brain, converges in the entorhinal cortex, and then goes around this little loop, through these three primary subfields, the dentate gyrus, CA3 (cornu ammonis 3), and CA1, then the subiculum, and back out again. So it goes through this loop. Classically, it had been referred to as the trisynaptic loop: one synapse each, dentate, CA3, CA1, and then back out.
And again, recordings in this part of the hippocampus reveal the properties which I'll mention, and that is that the cells respond to locations in space. But prior to this electrophysiological work in rodents, there had been human neuropsychological work, in particular the seminal case of the patient HM, who had been studied here for many decades before he just recently passed away. Known as HM, or Henry Molaison, as was recently revealed, he was a patient who had undergone bilateral resection of the medial temporal lobes. Parts of his hippocampus and other associated medial temporal lobe structures were cut out to treat intractable epilepsy. He subsequently lost the ability to form any new memories. So he had permanent anterograde amnesia, he couldn't form any new memories going forward into the future, and he also lost some of his older memories. So you lose memories in humans, rodents can't navigate in space, and humans also have a spatial deficit, so there's some connection between space and memory. And the question is, what is that? What would link spatial navigation and memory? But not just any kind of memory, what we refer to as episodic memory, memory for events or experiences. And so the working hypothesis is that these two things are really connected by a need, a computational imperative, to maintain information about time order. And that is, if you're going to use experience to guide future behavior, you need to figure out what the causal relationships are between events in the world. If I see A and B, what I really want to figure out, I don't just want to record the fact that A and B happen together. Ultimately, I want to try to understand the relationship between A and B. Did A cause B, or how might I actually predict B given A? And the way I would do that would be to construct some kind of simple internal model, a predictive model, that's based on experience. And so the idea is, the hippocampus captures experience, and then, through some process, which we'll refer to as the process of consolidation, translates experience into some working model that can predict events and can be used to guide behavior and decision making. And critical to that is just the idea of time, and in particular, time order. So as I mentioned, the use of very simple technology, in this case extracellular neurophysiological recording, taking a tiny wire. This wire is actually four small wires, each one about 10 microns across. You twist them together in a little bundle. The bundle is about 35 microns. A human hair is typically on the order of about 50 microns, so these are wires, or multi-contact electrodes, about as large as a human hair. We thread these things through the little oil-rig drilling device here; these little micromanipulators allow each wire to be driven down through a very small guide cannula, and so out the bottom come a number of these independent individual electrodes, each controlled by its own micromanipulator. So we can send these electrodes down. This entire device, in this case, weighs anywhere from 12 to 20 grams, and it can be placed on an animal's head permanently, or chronically, and that is that once it goes on, it doesn't come off. So they will have this little helmet-like thing on their head. A small opening is made in the skull, the wires are sent through, and the whole surgical procedure takes maybe 30 to 45 minutes. It's like an outpatient thing. It would probably take longer to have your wisdom teeth pulled than to have a brain implant installed.
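As a toy illustration of that "predict B given A" idea, here is a minimal sketch that turns experienced sequences into a first-order transition table; this is only a cartoon of the computational imperative, not a model of hippocampal circuitry, and the event sequences are made up:

```python
from collections import Counter, defaultdict

# Experienced episodes, each a sequence of events:
episodes = [list("ABCD"), list("ABCE"), list("ABD"), list("XABC")]

# Count observed transitions: how often each event follows each other event.
counts = defaultdict(Counter)
for ep in episodes:
    for cur, nxt in zip(ep, ep[1:]):
        counts[cur][nxt] += 1

def predict(cur):
    """Most likely next event given the current one, with its probability."""
    c = counts[cur]
    event, n = c.most_common(1)[0]
    return event, n / sum(c.values())

print(predict("A"))   # ('B', 1.0): A was always followed by B
print(predict("B"))   # ('C', 0.75)
```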
But then once this is installed, these electrodes can be driven down and placed permanently. It gives you the ability to monitor patterned activity across large populations over long periods of time-- days, weeks, months. So you have tapped into activity in this brain area, and you can monitor as animals experience, learn, and recall. This is what the raw data looks like. This is a cartoon of the electrode with four contacts. The idea with the four contacts is that they give you the ability to triangulate location, much as you can stereoscopically determine depth. These four contacts will give you the ability to triangulate things in three-dimensional space. One of the properties of this hippocampal circuit-- so in this little cross-section, you imagine a neuron here. And there are going to be, again, neurons distributed across the hippocampus. In the rat, there are on the order of maybe 200,000 cells in area CA1. In the entire hippocampus, there are on the order of one to two million cells. And if you drop an electrode any place into the hippocampus, you will find cells that have this kind of spatial property. And so what that says is that spatial information is distributed, not mapped in a topographic way. In other words, it's not that one location in space is mapped onto one location in the hippocampus, unlike some of the sensory areas that have this kind of topography. For instance, if you record in the visual cortex, there would be some correspondence between location in the visual field and the location of the cells in the cortex that respond to it-- the so-called retinotopic mapping, the visual field mapped onto the anatomy. Same thing with the somatosensory system. As I move and I touch different parts of the body, the cells that respond to that will be mapped out in a largely one-to-one correspondence between the adjacency of stimuli in the input space and the adjacency of the representation in the neural space. That does not occur in the hippocampus. Two cells that are right next to one another are no more likely to respond to nearby locations in space than two cells that are distant from one another. So, as we'll discuss, the principle of information representation in the hippocampus appears to be one of sparse distributed patterning-- you have lots of cells that will respond in different environments, and individual cells don't have a unique relationship to locations. It's really the pattern across cells that gives you a unique signature or code. We're taking advantage of the fact that you don't necessarily have to place the electrodes at a particular place in order to get responses to some location in space. Anywhere you put these electrodes in the hippocampus, you're going to get a certain fraction-- generally about 30% of the cells will respond in a given environment. And this is what those responses look like. I won't go through the technical details, but, needless to say, this just shows how events, action potentials, are detected-- here, you see a voltage trace. You pick out the amplitude of these little voltage transients generated by action potentials in the cells, and you plot the amplitude across these different channels. In this plot, each point is an action potential, and what you see is that the amplitudes will cluster in a way that reflects the relative position or location of the cells-- the sources-- relative to these wires, using the basic principle that if you're closer to a wire, the signal is going to be stronger.
So amplitude is essentially inversely related to distance. So here, this is the amplitude of an action potential plotted across two channels. These points are larger on channel 1, small on channel 2. That means the cell is close to channel 1, far from channel 2. And then different cells will have different relations. This one will be large on channel 2, small on channel 1, et cetera. So you can figure out where these cells are in space-- a minimal sketch of this amplitude clustering appears below-- which means you can pick out lots of cells. In this case, maybe 12 to 18 electrodes can give you 50 to 100 or more cells. And then looking at the activity of those cells in a simple box-- this is just a little box, one wall removed, little ceiling of cubes, simple architectural design, nice, clean and simple-- what you get is a clean and simple mapping of spatial locations onto these neural responses. So each one of these panels represents the activity of an individual cell. So this is about 80 simultaneously recorded cells. The color of the heat map indicates the firing rate of these individual cells. Red indicates high firing rate, blue indicates no firing. So this is a top-down view of that box. So this cell, for instance-- when the animal is wandering around in the box, the cell is silent in all the blue areas, and when it goes into the lower right-hand corner, the cell fires vigorously. So silent, then fires vigorously. This one fires along the left-hand wall. This one also fires on the lower right-hand side. So this points out the combinatorial nature of this spatial representation, and that is that if the animal is in the lower right-hand corner of this particular box, we'll get these two cells to fire. If I take the animal and I put it into a different box, all of these responses will be scrambled up. There's nothing that says this cell will fire in the lower right-hand corner of another box. And certainly, even if it does fire in the lower right-hand corner, this other cell isn't going to fire along with it. And at any given location, there are roughly 1% to 5% of the cells that are firing. So at any given location, there are maybe 5,000 cells in the hippocampus that are active. So a unique location, in a unique context or environment, can be conveyed across a unique pattern of about 5,000 cells out of 100,000 or 200,000 cells. So there's a large combinatorial space in which one can represent unique locations, or potentially even unique experiences within different locations. You see that many cells are silent. On average, about 30% of the cells respond in any given environment. The silent cells, as we'll see, can be detected when the animals are not running around or experiencing space, but when they're asleep or in these other offline states, when the hippocampus is actually thinking about other stuff. We'll talk about that a bit. And then here, you can also see a small number of these cells-- in this case, about 5% to 10%-- that seem to have elevated firing rates across the entire space. These are actually a different class of neurons. Those are excitatory neurons; these are inhibitory neurons. Inhibition-- the idea that you have circuits that can both excite and inhibit, and that this is used as a circuit property to sculpt computation-- is something that we'll also discuss. So this balance between excitation and inhibition is a kind of circuit principle that's used. Inhibitory cells fire all over the place. They're not really communicating information. They're really modulating the processing of information.
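To make the triangulation idea concrete, here is a minimal sketch of the kind of amplitude-based spike sorting described above. The array shapes, the synthetic data, and the use of k-means are illustrative assumptions on my part, not the lab's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical peak amplitudes of detected spikes on a 4-wire tetrode:
# rows are spikes, columns are the 4 channels. Real data would come from
# thresholded voltage traces; here we fake two cells at different distances.
rng = np.random.default_rng(0)
cell_a = rng.normal(loc=[80, 20, 40, 30], scale=5, size=(300, 4))  # near channel 1
cell_b = rng.normal(loc=[25, 90, 35, 45], scale=5, size=(300, 4))  # near channel 2
amplitudes = np.vstack([cell_a, cell_b])

# Because amplitude falls off with distance, each cell traces out a distinct
# cluster in the 4-D amplitude space; clustering recovers spike identities.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(amplitudes)
print(np.bincount(labels))  # roughly 300 spikes assigned to each cell
```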
And another property of these cells, which O'Keefe termed place cells, is that when animals are constrained to move along limited paths-- in this case, this is a linear track, as we refer to it; it's like a little corridor, and the animal is moving down this corridor-- the cells don't only fire where the animal is, but also according to the direction that it's going. And so this is the first indication that these cells are not just-- again, it's not just a GPS. It's not just telling you where you are. It's at least telling you where you're going, maybe what you're doing. So here, if I look at this yellow cell, I can tell that the animal is not only in this location, but it's moving down this linear track in this direction. So there's going to be a unique sequence when the animal walks along this path-- yellow cell, red cell, green cell. So the different cells are color-coded here. So if you look over time, you'll see that there will be a unique sequence of activity in the hippocampus that reflects the animal's actual behavior and experience in that space. So here, this is just a little movie that shows the raw data-- this is what you would actually see if you're running the experiment. This is a little top-down view. The green circle just highlights where the rat actually is. The color coding-- these are the cells that we're picking up over here. This is data as it would be coming out of a setup. You see, this light blue cell fires here, the dark blue cell fires here. Dark blue cell, light blue cell. So this is the spatial firing property, place cells. So the animal has stopped. The one thing you will notice-- you saw lots of activity, and when the animal stopped there, there was this big burst of activity. He'll stop again a little bit later. Now he's moving. Now when he's moving, if you listen carefully, you'll hear that there's this background rhythm. There's a modulation that's going ch-ch-ch-ch-- a background modulation of activity which is associated with locomotion. So there are really two modes-- when you're actively engaged, when you're taking information in, you get this rhythm. When you're inattentive, not engaged, not taking information in, but internally evaluating, the rhythm goes away and is replaced by these bursts of activity. And so these two modes-- active, attentive, where you're processing information coming in, and inattentive, where you're evaluating information, you're thinking about stuff-- can be recognized by these two characteristic signatures, which you can literally hear. You can hear the difference between the two. Obviously, I've been listening to these things for a long time, so it's very easy for me to pick up. It might be harder for you, but if you listen to it a little bit, it's very easy to distinguish these two different brain modes. So this is going to be another view of that data. So same data, only now, instead of showing you the raw data, we show where the firing is. And that's one thing about this correlate that's so compelling. That was literally raw data as it was coming out. There's no processing at all, and you could see the correlate, you can see the spatial correlate, which tells you this is not something that requires multiple regressions, where I have to give you some sort of statistical confidence that this is what the hippocampus is doing. You can literally see it in individual cells. The spatial correlate is extremely robust, compelling, and consistent. Those cells weren't selected. If you record any of the cells in the hippocampus, they're all doing that.
The estimate is that over 95% of the cells that you can record in rodent hippocampus will have these spatial properties. So it's really a fundamental property of this memory system. So this is the same data, except now, instead of showing the spiking, we're going to use a simple Bayesian estimation algorithm to ask the question: OK, if we know which cells are firing, can we guess the location where the animal would likely be? That is, if we know the firing as a function of location, we can use Bayesian inference to estimate the probability of location, given firing-- there's a small sketch of this computation below. And that's what we're going to do here. Just asking, if we know which cells are firing-- for instance, if the blue cell is firing, you would say, "Oh, the animal's probably over here." So, looking at the pattern across many cells, every 100 milliseconds we're coming up with a probability that that pattern could have occurred if the animal was at any given location on this track. And the probability is going to be shown by a triangle. Big triangle means high probability. And what you'll see is, when we do this estimation, when the animal's moving, you see the triangle. The triangles are only highly probable when the animal is actually moving, and only at the location where the animal is. So the hippocampal representation is essentially tracking the location of the animal-- until the animal stops. Now, if you listen, you heard the burst, and if you look at the triangle when the animal stops, the triangle no longer corresponds to where the animal is. In fact, when you get the burst, you see the triangle jumping around the track. So again, there are these two modes, and the representations have these two properties. Moving: oscillation, track current location. Stopping: oscillation goes away, representation now jumps across the track. And interestingly, these little bursts, like that, also occur not just when the animal stops briefly, but when it actually goes to sleep. So here on the inset, the animal now has been taken off the track altogether. It's just sitting in a little box somewhere else, it's curled up, and it's asleep. You get these same bursts, and when you decode activity, you find you can decode the position on the track, and, if you look carefully, you see that the position actually follows a sequence, a trajectory, along the track. And we'll go into that in a little bit more detail. So you can think of there being these two states, the offline and the online. In the online state, when the animal's moving, the characteristic mode of activity in the hippocampus is an oscillatory mode, described as the theta rhythm, which is this roughly 10 hertz oscillation. When the animal stops and becomes quiet and immobile, within about half a second the oscillation goes away and is replaced by these large, transient, aperiodic events, the so-called sharp waves-- because in the extracellular electrical field that you measure, you see these large deflections. And then, if you zoom in-- I'll also show you shortly-- you can actually see very high frequency oscillations, about 100 to 200 hertz, riding on top of that; these are referred to as ripples. And this is the term that Gyorgy Buzsaki came up with. He described this sharp-wave ripple activity as corresponding to this offline state, quiet wakefulness, and some sleep states. So the first thing is to look at this oscillatory state. So the animal's actually moving. And I've already shown you the spatial correlate, these place cells.
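As a concrete illustration of the decoding just described, here is a minimal sketch of a Poisson-Bayes position decoder. The place-field matrix, the bin layout, the flat prior, and the toy numbers are illustrative assumptions; the actual analysis pipeline in these experiments is more elaborate.

```python
import numpy as np

def decode_position(spike_counts, place_fields, dt=0.1):
    """Posterior over position bins given one time window of spike counts.

    spike_counts : (n_cells,) spikes observed in the window.
    place_fields : (n_cells, n_bins) mean firing rate (Hz) of each cell
                   at each position bin, estimated from earlier running data.
    dt           : window length in seconds (100 ms, as in the lecture).
    """
    rates = place_fields * dt + 1e-12            # expected counts per bin
    # Poisson log likelihood, up to terms constant in position:
    log_post = spike_counts @ np.log(rates) - rates.sum(axis=0)
    log_post -= log_post.max()                   # for numerical stability
    post = np.exp(log_post)
    return post / post.sum()                     # assumes a flat prior

# Tiny usage example with made-up numbers: 3 cells, 5 position bins.
pf = np.array([[20, 5, 1, 1, 1],
               [1, 1, 20, 1, 1],
               [1, 1, 1, 5, 20]], float)
print(decode_position(np.array([0, 2, 0]), pf).round(2))  # peaks at the middle bin
```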
But there's another property of these cells that O'Keefe also discovered. This was in the early 1990s, around 1991. And that is, he noted, if you actually look at the discharge of single spikes with respect to this oscillation-- first of all, the spikes are actually phase-locked. In other words, here spikes fire when the oscillation, this theta rhythm, is at its peak. So the idea is, this oscillation really reflects time-varying excitability-- there are times when cells are likely to fire, other times when they're not. And you can think of this oscillation in excitability as being an oscillation in relative excitability, or inhibition. So inhibition is high, it's low, it's high, it's low. When inhibition is low, the cell fires. When it's high, the cell doesn't fire. So this is modulated excitability. You see this here. But he noted another thing, and that is, if you look at the precise phase-- so here, yes, cells tend to fire at the peak, but this is now over time; with the animal moving at constant velocity, space and time are interchangeable. So as the animal's moving through its place field, the spikes start to fire earlier and earlier, so the phase codes spatial location. So you can tell, based upon the relative phase, whether the animal is just entering the field-- spikes fire late, here close to the peak-- versus further into the field, where they fire a little bit earlier. There's a relationship between distance into the field and relative phase. This is what he termed phase precession. And this is a representation of that. This is actual data; this is just a cartoon that illustrates the basic principle. And so the idea is, if I have a place cell and I look at the marginal distribution as a function of location-- that is, if I just collapse all of this-- this would be the place field. This is the spiking now, as a function of position and phase. If I just look at spiking as a function of position, as just the density of firing, what I get is a place field. Not many spikes here as the animal enters the field, lots of spikes here as you get into it-- this would be the classic spatial receptive field. But now, if I introduce phase as well, you say, "Oh, wait, there's a systematic relationship as well between phase and location." In fact, phase is a better predictor of relative location than firing rate. If I ask, when did the spike occur? If it occurred here, late in phase, then I know the animal is just entering the field. If it occurs early in phase, I know it's at the end of its field. And that's interesting. And this is a simple model that explains that-- and this was one of the questions that I guess you asked, about this sweeping inhibition. So imagine that you have an excitatory input, shown in blue, and an inhibitory input, shown in red, where the inhibition is oscillating, and you apply a very simple biophysical model that says a spike, an action potential, is going to occur when excitation (blue) exceeds inhibition (red). More excitation than inhibition, you get a spike. So here, in this oscillation, anywhere the red trace is higher than the blue trace, no spiking. So in this case, when excitation is low, you have to wait until inhibition drops all the way to here to get a spike. This is late phase. So weak excitation means you have to wait until inhibition is low. So this is the sweeping inhibition-- you have to wait until it's low.
When excitation is strong, you don't have to wait so long. The cell can fire earlier. So the principle here is that there is a relationship between magnitude and latency. Stronger means earlier. That's the biophysical principle. Very simple. Stronger, earlier; weaker, later. That will give you this phase precession property-- a small simulation sketch below reproduces it. Phase precession, you might think of as a biophysical curiosity. But think about how that would apply when you have more than one place cell-- in fact, I have two place cells here, one in blue, one here in purple, where place cell one is to the left of place cell two. So as the animal's going through here, first the blue cell will fire, and then the purple cell will fire. And if you look at the excitatory drive, shown here as a ramp in blue and a ramp in purple, and now you ask, "When is the blue cell going to fire, and when is the purple cell going to fire, during each one of these oscillatory cycles?" The answer will be, well, because the excitatory drive to the blue cell is higher than to the purple cell, blue is always going to fire before purple. In fact, in each and every cycle, it will be blue, purple, blue, purple, blue, purple. So this principle of phase precession, or phase coding, for single cells actually gives you sequential encoding across a population-- on each and every cycle you will actually have a sequence. The code in the hippocampus is not a location, it's actually a sequence, a trajectory. This is just raw data, but I'm going to quickly go-- this is just showing what the spiking looks like. And you can see, if you look at it, you see these spike sequences. But the property is more clearly demonstrated when I do this decoding. So what I'm showing you here-- this is, again, raw data. But now, instead of showing the spiking, I'm showing you the result of doing this decoding, this Bayesian decoding, where there's a probability estimate given the spiking activity in each one of a set of successive 20 millisecond windows-- so now I'm decoding at a finer temporal resolution, every 20 milliseconds. I'm doing the same decoding, with probability indicated in grayscale. So you can see here the dark areas-- this is high probability that the animal would have been in this location, given this pattern of activity in this 20 millisecond bin. And so, what you can see here is that these Bayesian decoded probabilities form short sequences every single theta cycle, what we refer to as theta sequences. And the theta sequences actually move from just behind the animal-- the dotted line is where the animal actually is-- to just in front of the animal. So 10 times a second the hippocampus is actually expressing a representation of a spatial sequence that goes from behind to in front of the animal. You can think of behind and in front as also reflecting recent past and near future. So there is this relative predictive differentiation of response as a function of oscillatory phase. So if I want to ask, "Gee, where did I just come from?" I just have to look at activity here, at this slightly earlier phase. If I want to ask the question, "Gee, where am I likely to go?" I simply have to shift the phase, the channel that I look at here, and I can see where the likely future location is. So you can think that there is actually a code, not just of location, but of the relative causal relationship between locations, mapped into phase.
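The excitation-versus-sweeping-inhibition model lends itself to a few lines of simulation. This is a minimal sketch under assumed parameters (8 Hz theta, a linear excitatory ramp, a spike whenever excitation first exceeds inhibition within a cycle); it is meant to reproduce the qualitative stronger-means-earlier effect, not any quantitative fit.

```python
import numpy as np

theta_f = 8.0                     # assumed theta frequency, Hz
t = np.arange(0.0, 1.0, 1e-4)     # 1 s of time at 0.1 ms resolution
inhibition = 1.0 + np.cos(2 * np.pi * theta_f * t)   # oscillates between 0 and 2
excitation = 2.0 * t              # ramp: drive grows as the animal crosses the field

spike_phases = []
for c in range(int(theta_f)):     # walk through each theta cycle
    cycle = (t >= c / theta_f) & (t < (c + 1) / theta_f)
    crossing = np.where(excitation[cycle] > inhibition[cycle])[0]
    if crossing.size:             # first moment excitation beats inhibition
        t_spike = t[cycle][crossing[0]]
        phase = (2 * np.pi * theta_f * t_spike) % (2 * np.pi)
        spike_phases.append(phase)

# Spike phase shrinks cycle by cycle: stronger drive -> earlier spike = precession.
print(np.degrees(spike_phases).round(0))
```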
And we'll see that we can experimentally test this idea-- that this is not just an artifact of our decoding, but that animals are actually using information at these different phases to drive spatial navigation in particular ways-- by using some of the tools for manipulating activity in closed-loop optogenetics. So we can manipulate activity, specifically hippocampal activity, at different phases and show that these different phases actually carry different functional consequences. So we've got these sequences, captured during these oscillations. Are these things actually meaningful? Well, some of the indications that the oscillations and the sequences are meaningful first come from the observation that, as I mentioned, successfully using memory-- the information that the hippocampus acquires-- requires that you communicate it to executive structures that can guide behavior and make decisions. That would include the prefrontal cortex-- the portion of the prefrontal cortex which forms part of this limbic circuit, referred to as the limbic prefrontal cortex, which has direct connections from the hippocampus. Damage these structures: in the hippocampus, you get deficits in spatial learning and memory; in prefrontal cortex, you think of deficits in working memory and retrieval, executive control, decision making. But you can think of these two things as working together, the hippocampus providing information that the prefrontal cortex can use in order to direct behavior and decision making. And if you damage either one of these things, rats can't find their way around space. You damage either one of these in humans, you're going to get memory deficits. When you think of dementia as being problems of cognition and memory, you can get temporal lobe dementias, frontal lobe dementias-- they're really very closely related. And so we can look at a simple task, testing so-called working memory. Just remember: what did you do last? And this task is just a simple alternation, where the first time you turn left, the next time you go right. Ethologically, it's like, "Look, I just got food over here, why don't I check out the other place? I'll check out places where I didn't get food." It's a so-called win-shift strategy. I got something here this time, so I'm going to go someplace else. As opposed to a reference memory strategy, which would be referred to as win-stay. That's really a good spot. Home. I love going home because I've got my TV, I've got my microwave, I'm going to stick with that place. Win-stay: you keep going back to the place that rewards you. And both of these things have ethological value. Home is a good place; that's a reference place. If you're foraging, win-shift-- don't go back to the place I just looted, right? Again, these two structures-- classically, in the prefrontal cortex we think of working memory cells, the idea of short-term memory. So the prefrontal cortex typically has been thought of as subserving working memory, where working memory is about holding information over short delays. There's an overlap in terminology, where working memory in the prefrontal cortex is really about time, while working memory in the hippocampus is really about the context in which it's used-- relative, session-specific information, trial-specific information. So I'm not going to go through the details.
You have these two systems. One interesting property, if you look at these two systems simultaneously, is that the same idea about oscillatory phase governing the firing of cells in the hippocampus also applies in the prefrontal cortex. That is, cells in the prefrontal cortex like to fire at a certain phase of the hippocampal theta rhythm. They care about, they're listening to, this rhythm in the hippocampus. And so, we can look at a task. This is basically the same kind of task, only now there are back-to-back T-mazes. So there are two T-mazes, and we're just going to do a simple variation on that working memory task, in which the animal's going to start from one of these two arms-- it's going to walk down this arm, and then it has to go to the arm that's on the same side as it started from. So it has to remember, "Where did I start from? Oh, that's the side I have to go to." Then it's going to turn around, it's going to come back, run down here, and then we're going to force it into one of these arms. So, two back-to-back T-mazes. In this direction, the animal chooses; in this direction, we choose. In this direction, there's a working memory demand-- it has to remember where it came from. In this direction, it doesn't have to remember anything; it doesn't make any difference. So behaviorally, it's symmetric, but cognitively, here there's a working memory demand and here there's not. And then we're going to look at all these things-- the oscillations and the spiking. We look at the oscillation, find the peak times, and look at the firing, the spiking, as a function of this oscillation in both structures. The bottom line is what you find: this is a measure of relative phase locking-- how well prefrontal cells actually lock to this theta oscillation. And what you find is that the degree to which they lock to the theta rhythm is a function of whether or not the animal has to choose. So the red bars are when the animal's going down the arm and it's got to choose. The gray one is when it doesn't have to choose, when we choose. In addition, the solid red is when the animal had to choose and it got it right, and the stippled red is when it had to choose and it got it wrong. So the degree to which the prefrontal cortex successfully locks to the hippocampal theta rhythm predicts whether the animal actually makes the correct choice or not. A sketch below shows how this kind of spike-phase locking can be quantified. So it's as though this is a channel that is necessary for effectively communicating information between the two structures. And not only is it in the locking of the spikes-- spikes in the prefrontal cortex to the rhythm in the hippocampus-- it also shows up in the locking of the rhythms themselves. So this is the theta rhythm that you can detect in the prefrontal cortex and in the hippocampus. And what you can see is there are two conditions. In one, the animal's actually choosing-- it's running down here, it's making a choice. And in about the half second before the animal makes a choice, you see these two rhythms actually lock; they become coherent. So transient coherence is predictive of correct choice behavior. The thinking is that the rhythms themselves can be generated and coordinated through the regulation of these local inhibitory circuits. In fact, there's been a lot of interest, for instance, in the relationship of local inhibitory control to neuropsychiatric disease and disorders-- for instance, disruption of local inhibitory rhythms.
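For readers who want to see what "phase locking of spikes to theta" means operationally, here is a minimal sketch of one common approach: bandpass the field potential around theta, extract instantaneous phase with the Hilbert transform, and summarize spike-phase concentration with the mean resultant length. The filter band, sampling rate, and synthetic data are all assumptions for illustration, not the lab's analysis code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)  # fake LFP

# Bandpass 6-10 Hz (theta), then take instantaneous phase.
b, a = butter(3, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
theta_phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Fake spike times biased toward the theta peak (phase 0).
spike_idx = np.where((np.cos(theta_phase) > 0.9) & (np.random.rand(t.size) < 0.05))[0]
spike_phases = theta_phase[spike_idx]

# Mean resultant length: 0 = no locking, 1 = perfect locking.
r = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"phase-locking strength r = {r:.2f}")
```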
In particular, disruption of theta rhythms in prefrontal cortex, for instance, can be associated with disorders like schizophrenia. So the inability to effectively impose these modes and then synchronize these modes can introduce disruptions in the ability to communicate or use memory-- cognitive disruption coming through disruption of these oscillations, through disruption of inhibition in these local circuits. Now, how do you actually coordinate these states across the two structures? We've been looking at the role of midline thalamic nuclei. The thalamus is a set of structures that have widespread connectivity to all these cortical areas; much of their connectivity is inhibitory, and so they have the ability to actually coordinate-- modulate and coordinate-- these oscillatory modes. We've even just recently published on individual cells we found in some of these midline thalamic nuclei that, for instance, will branch. Single cells will branch and target cells in both the hippocampus and the prefrontal cortex. So they would be ideally positioned to introduce-- to select and impose-- this synchronization. So that's the way we think about a lot of this dynamic connectivity being established, presumably through some sort of thalamic regulation. And then, you can think about the thalamus as being regulated by the so-called thalamic reticular nucleus, which regulates the thalamus and sets up these modes. And there's also a lot of interest now in the thalamic reticular nucleus in disease and disorders. A lot of the genetic screening has identified targets in the thalamic reticular nucleus. So you can think about it again-- it's like the oscillator in your computer or your radio: if it breaks down, the information can be there, but you can't tune into it. So it's this fundamental tuning. You've got to have the modes, and you've got to be able to lock to these frequencies. And then, beyond that, as we'll see, it's not just the frequencies, but also the precise phase within those oscillatory modes that carries different information. That's what we actually determined by taking advantage of these optogenetic techniques for targeted manipulation. So you infect the inhibitory neurons in the hippocampus with this genetically encoded, optically controllable channel so that we can optically excite inhibitory cells. So we put this excitatory channel into inhibitory cells, and then, by giving very brief pulses of laser light, we can transiently activate, drive, the inhibitory cells, and then, because of the local circuit, those inhibitory cells will inhibit the excitatory cells. And so we have about 20 to 25 millisecond control. And now we can lock, or control, inhibition based upon the phase of this oscillation-- a sketch of this closed-loop, phase-triggered control is below. So the idea is, we're going to selectively disrupt, or inactivate, the hippocampus at different phases, and we're going to ask, "Do those phases differ in terms of their contribution to behavior and performance?" So here, we're going to lock inhibition to either the peak or the trough of this theta oscillation. And we're going to do this manipulation-- that is, selectively inhibit hippocampal output at either the peak or the trough of the theta rhythm-- at two behavioral phases. So in this task, the animal's going to start on one of these arms, it's going to run up, and it's going to choose. So we're going to think about the starting arms as-- we'll refer to this as the encoding phase. This is where you have to keep track of where you are.
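Here is a minimal sketch of the closed-loop idea: estimate theta phase from the most recent window of the field potential and emit a light pulse whenever the target phase comes around. Everything here (the phase estimator, the 25 ms pulse, the trigger_laser stub) is a hypothetical simplification; real closed-loop rigs use causal phase estimators tuned for low latency.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def current_theta_phase(lfp_window, fs=1000.0):
    """Crude phase estimate from the last second of LFP. Acausal filtering
    means edge effects; this is only a sketch of a real-time estimator."""
    b, a = butter(3, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, lfp_window)))[-1]

def trigger_laser(duration_ms=25):
    print(f"laser pulse: {duration_ms} ms")   # stand-in for a hardware output

def closed_loop_step(lfp_window, target_phase, tol=0.2):
    """Fire when the estimated phase is within tol radians of the target
    (0 = theta peak, pi = trough, matching the lecture's two conditions)."""
    phase = current_theta_phase(lfp_window)
    if abs(np.angle(np.exp(1j * (phase - target_phase)))) < tol:
        trigger_laser()
```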
Back to the task: the starting arm, again, is the encoding segment. And then, here in the central arm-- we'll refer to this as the retrieval phase. This is where you have to remember, "Oh, where did I come from?" and then use that to decide where you're going to go. And what we found was pretty surprising. You might think, well, if you shut off hippocampal output, turn off the hippocampus, you're going to get an impairment, just like in the examples that I gave suggesting hippocampal-prefrontal interactions. Most of those come from lesion studies. You damage the hippocampus, the animal can't find its way around. So if you were to optogenetically lesion, or turn off, the hippocampus, you might imagine, OK, animals won't be able to find their way around. So you could ask, "Which phase, which behavioral phase, is most effective in disrupting performance?" But what we actually found was, when you selectively inhibit activity at different phases, you actually get an enhancement of performance. They get better. And it depended-- there wasn't just one phase, a good phase and a bad phase; both the peak and the trough were effective in enhancing performance, but only when applied at certain behavioral phases. In fact, there was this double dissociation. And that is, trough stimulation, when applied here-- so when you stimulate at the trough, in the retrieval segment, animals got better. When you stimulate at the peak, in the encoding segment, animals got better. So it wasn't that the peak is good or the trough is good per se; the peak is associated with retrieval and the trough with encoding. So what it says is, the peak and the trough had two different functions. And if you actually think about what I was showing before, these sequences that are going from just behind to just in front of the animal-- it says, oh yeah, these different phases, peak and trough, you can think of as like past and future. And so if I'm sitting over here in the encoding segment, what I'm really trying to do is just keep track of where I am right now. Now, I may simultaneously also be thinking about, oh, where am I going to go? But for this task, it's not really helpful. It's not really useful. At this point, I need to focus on where I am right now. So you can think of the simultaneous actions of these two channels-- where am I, where am I going to go-- and I can enhance performance by shutting off or inhibiting the non-relevant one. So when I shut off the retrieval channel here in the encoding segment, I get better. It's like focusing attention. It's like, pay attention to what you're doing now. Stop thinking about stuff. Similarly, when I'm here in the retrieval segment, I'm also encoding. I'm trying to keep track of where I am as well as think about where I'm going. The thing is, keeping track of where I am here is not relevant for the task. It might be broadly relevant for the animal, but it's not relevant for the task. So when I turn off that encoding channel, I'm able to enhance the retrieval information that goes out. You might interpret this as saying, "Oh, so this is how we can improve memory-- by selectively shutting off the hippocampus." Well, this is not a strategy for general cognitive enhancement. The hippocampus is working much better when all these phases are in operation. And that's because the hippocampus is not designed to solve this task. The hippocampus is designed to solve the broader task.
It's trying to figure out, how is this task relevant to all the other things that I have to do? In other words, you're trying to integrate this information into all the other information you have. And that requires connecting encoding and retrieval. You really need to have all of those pieces of information available. But it does point out that one can actually dissociate the function of information at these two different phases. The phase actually matters. And it matters at the level of high-level behavior, decision making. It's not just an idiosyncrasy or artifact of excitability as a function of phase. This is really how information is being used. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_33_Lorenzo_Rosasco_Machine_Learning_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORENZO ROSASCO: If you remember, we did the local methods and bias-variance. Then we passed to global regularization methods-- least squares, linear least squares, kernel least squares, computations, modeling. And that's where we were at. And then we moved on and started to think about more intractable models, and we were starting to think about the problem of variable selection, OK? And the way we posed it is that you are going to consider a linear model, and I use the weights associated with each variable as the strength of the corresponding variable, which you can view as a measurement. And your game is not only to build good predictions from the measurements, but also to tell which measurements are interesting, OK? And so here, the term "relevant variable" is going to be related to the predictivity-- the contribution to the predictivity of the corresponding function, OK? So that's how we measure relevance of a variable. So we looked at the funny name. And then we kind of agreed that it seems there is a default approach, which is basically based on trying all possible subsets, OK? So this is also called best subset selection. Variable selection is also sometimes called best subset selection. And this gives you the feeling that what you should do is try all possible subsets and check the one which is best with respect to your data, which, again, would be like a trade-off between how well you fit the data and how many variables you have, OK? And what I told you last was that you could actually see that trying all possible subsets is related to a form of regularization that looks very similar to the one we saw a minute ago; see the formulation sketched below. The main difference: here I put fw, but fw is just the usual linear function. The only real difference is that here, rather than the square norm, we put this functional that is called the 0 norm, which, as a functional, given a vector, returns the number of entries in the vector which are different from 0, OK? It turns out that if you were to minimize this, you would be solving the best subset selection problem. The issue here-- another manifestation of the complexity of the problem-- is the fact that this functional is non-convex, so there is no known polynomial-time algorithm to actually find a solution. It comes to my mind that somebody made a comment about this during the break. Notice that here I'm passing a bit quickly over a refinement of the question of best subset selection, which is related to: is there a unique subset which is good? Is there more than one? And if there's more than one, which one should I pick, OK? In practice, these questions arise immediately, because if you have two measurements that are very correlated-- or even more, if they're perfectly correlated; you might build, out of two measurements, a third measurement which is just a linear combination of the first two-- so at that point, what would you want to do? Do you want to keep the minimum number of variables, or the biggest possible number of variables?
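In symbols, the best-subset / L0-regularized problem being described can be written as follows. The notation (X for the n-by-d data matrix, Y for the outputs) follows the rest of the lecture; the exact normalization is an assumption on my part.

```latex
\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\,\| Y - X w \|^2 \;+\; \lambda\, \| w \|_0,
\qquad
\| w \|_0 \;=\; \#\{\, j : w_j \neq 0 \,\}.
```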
And you have to decide, because all these variables are, to some extent, completely dependent, OK? So for now, we just keep to the case where we don't really worry about this, OK? We just say, among the good ones, we want to pick one. A harder question would be: pick all of them, or pick one of them. And if you want to pick one of them, you have to tell me which one you want, according to which criterion, OK? So the problem we're concerned with now is: OK, now that we know that we might want to do this, how can we do it in an approximate way that will be good enough-- and what does it mean, good enough? So the simplest way-- again, we can try to think about it together-- is a kind of greedy version of the brute force approach. So the brute force approach was: start from all single variables, then all couples, then all triplets, and so on, OK? And this doesn't work computationally. So just to wake up, how could you twist this approach to make it approximate, but computationally feasible? Let's keep the same spirit. So let's start from few, and then let's try to add more. So the general idea is: I pick one. And once I pick one, I pick another one, keeping the one I already picked. And then I pick another one, and then another one, and then another one. This, of course, will not be the exhaustive search from before. It's actually doable. There are a bunch of different ways you can do it. And you can hope that under some conditions you might be able to prove that it's not too far away from the brute force approach. And this is kind of what we would want to do, OK? So we will have a notion of residual. At the first iteration, the residual will be just the output. So just think of the first iteration. You get the output vector, and you want to explain it. You want to predict it well, OK? So what you do is that you first check the one variable that gives you the best prediction of this guy, and then you compute the prediction. Then what you want to do at the next round is discount what you have already explained. So, basically, you take the actual output minus your prediction, and you get the residual. And then you try to explain that. That's what's left to explain, OK? So now you check for the variable that best explains this remaining bit. Then you add this variable to the ones you already have, and you have a new notion of residual: what you explained in the first round plus what you added in explanation in the second round. And then there's still something left, and you keep on going. If you let this thing go for enough time, you will have the least squares solution. At the end of the day, you will have explained everything. And at each round, notice, you might or might not decide to put variables back in, OK? So you might have that at each step you add just one variable, or you might take multiple steps but end up with fewer variables than the number of steps. No matter what, the number of steps will be related to the number of variables that are active in your model, OK? Does it make sense? This is the wordy version, but now we go into the details, OK? But this is it, roughly speaking. So first round, you try to explain something, then you see what's left to explain. And you keep the variables that will explain the rest, and then you iterate. I'm not sure I used the word, but it's important. The key word here is "sparsity," OK?
The fact that I'm assuming my model to depend on just a few vectors-- sorry, a few entries. So it's a vector with many zero entries. Sparsity is the key word for this property, which is a property of the problem. And so I build algorithms that will try to find sparse solutions explaining my data, and this is one way. So let's look at this list. You define the notion of residual as the thing that you want to try to explain. The first round, it will be just the output. The second round, it will be what's left to explain after your prediction. You have a coefficient vector, OK, because we're building a linear function. And then you have an index set, which is the set of variables which are important at that stage. So these are the three objects that you have to initialize. At the first round, the coefficient vector is going to be 0, the index set is going to be empty, and the residual is just going to be the output vector. Then you find the best single variable, and then you update the index set-- you add that variable to the index set. To include such a variable, you compute the coefficient vector. And then you update the residual, and then you start again, OK? If you want, here I show you the first example-- just to give you an idea. Suppose that this is-- so first of all, notice this, OK? Oh, it's so boring. Forget about anything else that's written here. Just look at this matrix. The output vector is of the same length as a column of the matrix, right? So each column of the matrix will be related to one variable. So what you're going to do is see which of these columns best explains my output vector, and then you're going to define the residual and keep on going, OK? So in this case, for example, you can ask, which of the two directions, X1 and X2, best explains the vector Y, OK? So this is the simple case. I basically have these directions: one variable is this one, another variable is this one. And then I have that vector. I want to know which direction I should pick to best explain my Y. Which one do you think I should pick? AUDIENCE: X1. LORENZO ROSASCO: I should pick X1? OK. This projection here will be the weight I have to put on X1 to get a good prediction. And then what's the residual? Well, I take Yn, I subtract that prediction, and this is what I have left. So these are the simple terms, OK? So that's what we want to do. We said it with hands, we said it with words. Here is, more or less, the pseudocode, OK? It's a bit boring to read. You can see that it's four lines of code anyway. Now that we've said it 15 times, it probably won't be that hard to read, because what you see is that you have a notion of residual, you have the coefficient vector, and you have the index set. This is empty. This is all 0's. And at the first round, the residual is just the output. Then you start. And the free parameter here is T, the number of iterations, OK? It's going to be lambda, so to say. What you do is-- OK, just notation: j runs over the variables, and capital Xj will be the column of the data matrix that corresponds to the j-th variable, OK? And then what you do in this line, here I expand it, is to find the coefficient-- sorry, find the error that corresponds to the best variable, OK?
If you look, it turns out that finding the column best correlated with the output, or the residual, is equivalent to finding the column that best explains it in the sense of least squares. These two things are the same, so here I write the equivalence. Pick the one that you prefer, OK? Either you say, I find the column that is best correlated with the residual, or you find the column that best explains the residual in the sense of least squares. These two things are equivalent. Pick the one that you like. And that's the content of this line. And then you select the index of that column. So you solve this problem for each column-- it's an easy problem, a one-dimensional problem-- and then you pick the one column that you like the most, which is the one that gives you the best correlation, a.k.a. the least square error. Then you add this k to the index set. And then, in this case, it's very simple. I'm not going to recompute anything. So-- you remember the coefficient vector, where it was all 0's, OK? At the first round, I compute one number, the solution for, say, the first coordinate. And then I put a number in that entry, OK? So this is the orthonormal basis vector, OK? It has all 0's, but 1 in position k, and here I put this number. This is just a typo. And then what you do is that you sum them up, OK? So you have all 0's, just one number here at the first iteration, then the other one. And then you add this one there, and you keep on going, OK? This is the simplest possible version; a runnable sketch of this basic version appears below. And once you have this, now you have this vector. This is a long vector. You multiply this-- sorry, this should be Xn. Maybe we should take note of the typos, because I'm never going to remember all of them. And then you just discount what you have explained so far, you define the new residual, and you go back. This method-- so "greedy approaches" is one name. As often happens in machine learning and statistics and other fields, things get reinvented constantly, a bit because people just come to them from a different perspective, a bit because people just decide studying and reading is not a priority sometimes. And so this one algorithm, often called greedy, is one example of greedy approaches. It's sometimes called matching pursuit. It's very much related to so-called forward stagewise regression-- that's how it's called in statistics. And, well, it has a bunch of other names. Now, this version is just the basic version-- it's the simplest version. This step typically remains. These two steps can be changed slightly, OK? For example, can you think of another way of doing this? Let me just give you a hint. In this case, what you do is that you select a variable, you compute the coefficient, then you select another variable, and you compute the coefficient for the second variable, but you keep the coefficient you already computed for the first variable. It never knew that you took another one, because you hadn't taken it yet. So from this comment, do you see how you could change this method to somewhat fix this aspect? Do you see what I'm saying?
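To make the pseudocode concrete, here is a minimal Python sketch of the basic matching pursuit loop just described. It is a reconstruction from the lecture's description, not course-supplied code; the unnormalized columns and the in-place coefficient update are the simple variant discussed here.

```python
import numpy as np

def matching_pursuit(X, y, T):
    """Basic (non-orthogonal) matching pursuit.

    X : (n, d) data matrix, one column per variable.
    y : (n,) output vector.
    T : number of greedy iterations (plays the role of lambda).
    """
    n, d = X.shape
    w = np.zeros(d)                 # coefficient vector, starts at 0
    r = y.astype(float).copy()      # residual, starts as the output
    selected = []                   # index set, starts empty
    for _ in range(T):
        scores = X.T @ r                      # correlation with the residual
        k = int(np.argmax(np.abs(scores)))    # best single variable
        a = scores[k] / (X[:, k] @ X[:, k])   # 1-D least squares coefficient
        w[k] += a                             # old coefficients are never revised
        r -= a * X[:, k]                      # discount what is now explained
        selected.append(k)
    return w, selected
```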
To repeat the hint: I would like to change this one line where I compute the coefficient, and perhaps even this line where I compute the residual, to account for the fact that this method never updates the weights it computed before-- you only add a new one. And this seems potentially not a good idea, because when you have two variables, it's better to compute the solution with both of them. So what could you do? AUDIENCE: [INAUDIBLE] LORENZO ROSASCO: Right. So what you could do is essentially what is called orthogonal matching pursuit. You would take this index set, and now you would solve a least squares problem with all the variables that are in the index set up to that point. You recompute everything. And now you have to solve not a one-dimensional problem, but an n-by-k problem, where k-- I don't know, k is a bad name-- is the size of the index set, which could be T or less than T, OK? And then at that point, you also want to redefine this, because you're not discounting what you already explained anymore; each time you're recomputing everything. So you just want to do Yn minus the prediction, OK? So this algorithm is the one that actually has better properties. It works better. You pay a price, because each time you have to recompute the least squares solution, and when you have more than one variable inside, the problems become bigger and bigger. So if you stop after a few iterations, it's great. But if you run many iterations, each time you have to solve a linear system, so the complexity is much higher. The basic one, as you can imagine, is super fast. So that's it. So it turns out that this method, as I told you, called matching pursuit-- or, if not matching pursuit, forward stagewise regression-- is one way to approximate the L0 solution. And one can prove exactly in which sense it approximates it, OK? So I think this is the one that we might give you this afternoon, right? AUDIENCE: Orthogonal. LORENZO ROSASCO: Oh, yeah, the orthogonal version, the nicer version-- a sketch of that variant also appears below. The other way of doing this basically says: look, here what you're doing is just counting the number of entries different from 0. What if you now replace this with something that does a bit more-- that not only counts, but actually sums up the weights? So, if you want, in one case you just check: if a weight is different from 0, you count it as 1; otherwise, you count it as 0. Here you actually take the absolute value. So instead of summing up binary values, you sum up real numbers, OK? This is what is called the L1 norm. So each weight doesn't count for its sign, but it actually counts for its absolute value. So it turns out that this one term-- the absolute value looks like this, right, and now you're just summing them up-- is actually convex. So you're summing up two convex terms, and the overall functional is convex. And if you want, you can think of this a bit as a relaxation of the 0 norm. We say relaxation in the sense of relaxing a strict requirement. I talked about relaxation before when I said, instead of binary values, take real values, and optimize over the reals instead of the binary values. Here it's kind of the same thing. Instead of restricting yourself to this functional, which is binary-valued, now you allow yourself to relax and get real numbers.
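Here is the corresponding sketch of the orthogonal variant, again a reconstruction from the description above rather than course-supplied code. The only changes from basic matching pursuit are the full least squares re-fit on the current index set and the residual being recomputed against that full prediction.

```python
import numpy as np

def orthogonal_matching_pursuit(X, y, T):
    """OMP: re-fit all selected coefficients at every iteration."""
    n, d = X.shape
    selected = []
    r = y.astype(float).copy()
    for _ in range(T):
        k = int(np.argmax(np.abs(X.T @ r)))   # same selection rule as before
        if k not in selected:
            selected.append(k)
        # Least squares on *all* variables selected so far.
        w_s, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        r = y - X[:, selected] @ w_s          # residual vs. the full re-fit
    w = np.zeros(d)
    w[selected] = w_s
    return w, selected
```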
Back to the L1 relaxation: what you gain is that the corresponding optimization problem is convex, and you can try to solve it. Still, we cannot do what we did before-- we cannot just take derivatives and set them equal to 0, because this term is not smooth. The absolute value looks like this, which means that here, around the kink, it is not differentiable. But we can still use convex analysis to get the solution, and actually the solution doesn't look too complicated. Getting there requires a bit of convex analysis, but there are techniques, and the ones that are trendy these days are called forward-backward splitting, or proximal methods. I'm not even going to show them to you in detail. But to tell you in one word what they do: they do gradient descent on the first term, and then at each step of the gradient they threshold. So they take a gradient step, get a vector, look inside the vector; if an entry is smaller than a threshold that depends on lambda, set it equal to 0; otherwise, let it go, OK? I didn't put it on the slides, I don't know why, because it's really a one-line algorithm-- a sketch is below. It's a bit harder to derive, but it's very simple to check. So let's talk one second about this picture, and then let me tell you about what I'm not telling you. So hiding behind everything I said so far, there is a linear system, right? There is a linear system that is n by p, or d, or whatever you want to call the number of variables. And our game so far has always been: look, we have a linear system that may not be invertible-- or even if it is, it might have a bad condition number-- and I want to find a way to stabilize the problem. In the first round, we basically replaced the inverse with an approximate inverse. That's the classical way of doing it. Here, we're making another assumption. We're basically saying: look, this vector does look very long, so this problem seems ill-posed. But in fact, if only a few entries were different from 0, and if you were able to tell me which ones they are, you could go in, delete all the other entries, delete all the corresponding columns, and then you would have a matrix that, instead of looking short and wide, looks skinny and tall, OK? And that would probably be easier to solve. It would be the case of a linear system that we know how to solve. So what we described so far is a way to find solutions of linear systems where the number of equations is smaller than the number of unknowns-- which, by definition, cannot be solved-- under the extra assumption that, in fact, there are fewer unknowns than what it looks like. It's just that I'm not telling you which ones they are, OK? You see, if I could tell you, you would just get back to a very easy problem, where the number of unknowns is much smaller, OK? So this is a mathematical fact, OK? And these questions were open, because-- well, now they're not, because people have been talking about this stuff constantly for 10 years. But one question is, how much does this assumption buy you? For example, could you prove that in certain situations, even if you don't know the entries, you could actually solve this problem exactly? If I give them to you, you can do it, right? But is there a way to try to guess them so that you can do almost as well, or with high probability as well, as if I had told you them in advance?
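Before the answer, here is a minimal sketch of that thresholded gradient iteration (ISTA, the simplest forward-backward splitting scheme). The step size choice and the 1/(2n) scaling of the data term are conventional assumptions on my part; the soft-thresholding line is the "one-line algorithm" alluded to above.

```python
import numpy as np

def ista(X, y, lam, n_iter=1000):
    """Minimize (1/2n)||Xw - y||^2 + lam * ||w||_1 by proximal gradient descent."""
    n, d = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n        # gradient step on the smooth term
        z = w - grad / L
        # Soft threshold: shrink toward 0, zero out anything below lam / L.
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w
```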
And it turned out that the answer is yes, OK? And the answer is basically that if the number of entries that are different from 0 is small enough and the columns corresponding to those variables are not too correlated, are not too collinear, so they're distinguishable enough that when you perturb the problem a little bit nothing changes, then you can solve the problem exactly, OK? So this, on the one hand, is exactly the kind of theory that tells you why using greedy methods and convex relaxation will give you a good approximation to L0, because that's basically what this story tells you. People have been using this-- and so this is interesting for us-- people have been using this observation in a slightly different context, which is the context where-- you see, for us, Y and X, we don't choose. We get. And whatever they are, they are. And if it's correlated-- if the columns are nice, nice. But if they're not nice, sorry, you have to live with it, OK? But there are settings where you can think of the following. Suppose that you have a signal, and you want to be able to reconstruct it. So the classical Shannon sampling theorem results basically tell you that, I don't know, if you have something which is band-limited, you have to sample at twice the maximum frequency. But this is kind of worst case because it's assuming that all the bands, all the frequencies, are full. Suppose that now we play-- it's an analogy, OK? But I tell you, oh, look, it's true, that is the maximum frequency of this signal. But there's only one other frequency, this one. Do you really need to sample that much, or can you do much less? And so it turns out that basically the story here, answering this question, is that, yes, you can do much less. Ideally, what you would like to say is, well, instead of being twice the maximum frequency, if I have just four frequencies different from 0, I'd have to do eight samples, OK? That would be ideal, but you would have to know which ones they are. You don't, so you pay a price, but it's just logarithmic. So you basically have a new sampling theorem that tells you that you don't need to sample that much. You can't sample quite that little either. Say the maximum frequency is d and the number of non-zero frequencies is s. With the classical theorem, you would have to say 2d. Ideally, we would like to say 2s. Actually, what you can say is something like 2s log d. So you have a log d price that you pay because you didn't know where they are. But still, it's much less than being linear in the dimension. So essentially, the field of compressed sensing has been built around this observation, and the focus is slightly different. Instead of saying I want to do a statistical estimation where I can just build this, what you say is, I have a signal. And now I view this as a sensing matrix that I design with the property that I know it will allow me to do this estimation well. And so you basically assume that you can choose those vectors in certain ways, and then you can prove that you can reconstruct with much fewer samples, OK? And this has been used, for example-- I never remember for, as you call in MEG-- in what? No, MRI, MRI. Two things I didn't tell you about, but that are worth mentioning: suppose that what I tell you is that actually it's not individual entries that are 0, but groups of entries that are 0, for example, because each entry is a biological process. So I have genes, but genes are actually involved in biological processes.
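As a quick numerical aside on the sampling rates just mentioned, before the group-sparsity thread continues below: the actual theorems carry constants that this back-of-the-envelope comparison ignores, so the numbers are purely illustrative.

import numpy as np

d, s = 1024, 4                        # max frequency, non-zero frequencies
classical = 2 * d                     # Shannon: twice the maximum frequency
oracle = 2 * s                        # if someone told you which frequencies
compressed = int(2 * s * np.log(d))   # unknown support: pay a log d factor
print(classical, oracle, compressed)  # 2048, 8, 55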
So there is a group of genes that is doing something. I have a group of genes that are doing something, and what I want to select is not individual genes, but groups. Can you twist this stuff in such a way that you select groups? Yes. What if the groups are actually overlapping? How do you want to deal with the overlaps? Do you want to keep the overlap? Do you want to cut the overlap? What if you have a tree structure, OK? What do you do with this? So first of all, who gives you this information, OK? And then if you have the information, how are you going to use it? See, this is the whole field of structured sparsity. It's the whole industry of building penalties other than L1 that would allow you to incorporate this kind of prior information. And if you want, just as in kernel methods the kernel was the place where you could incorporate prior information, in this field you can do that by designing a suitable regularizer. And then a lot of the reasoning is the same; it just translates to these new regularizers. The last bit is that, with a bit of a twist, some of the ideas that I showed you now that are basically related to vectors and sparsity translate to more general contexts, in particular that of matrices that have low rank, OK? The classical example is-- suppose that I give you-- it's matrix completion, OK? I give you a matrix, but I actually delete most of the entries of the matrix. And I tell you, OK, estimate the original matrix. Well, how can I do that, right? Well, it turns out that if the matrix itself had very low rank, so that many of the columns and rows you saw were actually related to each other, and the way you chose the entries to delete or select was not malicious, then you might actually be able to fill in the missing entries, OK? And the theory behind this is very similar to the theory that allows you to fill in the right entries of the vector, OK? Last bit-- PCA in 15 minutes. So what we've seen so far was a very hard problem of variable selection. It is still a supervised problem, where I give you labels, OK? The last bit I want to show you is PCA, which is the case where I don't give you labels. And what you try to answer is actually-- perhaps it's a simpler question. Because you don't want to select one of the directions, but you would like to know if there are directions that matter. So you allow yourself, for example, to combine the different directions in your data, OK? So this question is interesting for many, many reasons. One is data visualization, for example. You have stuff that you cannot look at because you have, for example, digits in very high dimensions. You would like to look at them. How do you do it? Well, you'd like to find directions. The first direction to project everything, the second direction, a third direction, because then you can plot and look at them, OK? And this is one visualization of these images here. And I don't remember the color code now. It's written here. You have different colors, and what you see is that this actually did a good job. Because what you expect is that if you do a nice visualization, what you would like to have is that similar numbers or same numbers are in the same regions, and perhaps similar numbers are close, OK? So this is one reason why you want to do this.
One reason why you might want to do this is also because you might want to reduce the dimensionality of your data just to compress them or because you might hope that certain dimensions don't matter or are simply noise. And so you just want to get rid of that because this could be good for statistical reasons. OK, so the game is going to be the following. X is the data space, which is going to be RD. And we want to define a map M that sends vectors of length D into vectors of length k. So k is going to be my reduced dimensionality. And what we're going to do is that we're going to build a basic method to do this, which is PCA, and we're going to give a purely geometric view of PCA, OK? And this is going to be done by taking first the case where k is equal to 1 and then iterating to go up. So in the first case, we're going to ask, if I give you vectors which are D dimensional, how can I project them in one dimension with respect to some criterion of optimality, OK? And here what we ask is, we want to project the data in the one dimension that would give me the best possible error. So I think I had it before. Do I have it-- no, no. This was done for another reason, but it's useful now. If you have this vector and you want to project in this direction, and this is a unit vector, what do you do? I want to know how to write this vector here, the projection. What you do is that you take the inner product between Yn and X. You get the number, and that number is the length you want to assign to X1, OK? So suppose that w is the direction. And I have a vector x, and I want to give the projection, OK? What do I do? I take the inner product of x and w, and this is the length I have to assign to the vector w, which is unit norm, OK? So this is the best approximation of xi in the direction of w. Does it make sense? I fix a w, and I want to know how well I can describe x. I project x in that direction, and then I take the difference between x and the projection, OK? And then what I do is that I sum over all points. And then I check, among all possible directions, the one that gives me the best error. So suppose that is your data set. Which direction do you think is going to give me the best error? AUDIENCE: Keep going. LORENZO ROSASCO: Well, if you go in this direction, you can explain most of the stuff, OK? You can reconstruct it best. So this is going to be the solution. So the question here is really, how do you solve this problem? You could try to minimize with respect to w. But in fact, it's not clear what kind of computation you have. And if we massage this a little bit, it turns out that it is actually exactly an eigenvalue problem. So that's what we want to do next. So conceptually, what we want to do is what I said here and nothing more. I want to find the single individual direction that allows me to reconstruct best, on average, all the training set points. And now what we want to do is just to check what kind of computation this entails and learn a bit more about this, OK? So this notation is just to say that the vector is norm 1 so that I don't have to fumble with the size of the vector. OK, so let's do a couple of computations. This is ideal after lunch. So you just take this square and develop it, OK? And remember that w is unit norm. So when you do w transpose w, you get 1. And then if you-- and if you don't forget to put your square and if you just develop this, you'll see that this is an equality, OK? There is a square missing here. So you have xi square.
Then you would have the product of xi and this, which will be w transpose xi square. And then you would also have this square, but this square is w transpose xi and then w transpose w, which is 1. And so what you see is that this would create-- instead of three terms we have two because two cancel out-- not cancel out. They balance each other. OK. So then I'd argue that if instead of minimizing this, because this is equal to this, instead of minimizing this, you can maximize this. Why? Well, because this is just a constant. It doesn't depend on w at all, so I can drop it from my functional. The solution, the minimum, the minimum will be different, but the minimizer, the w that solves the problem, will be the same, OK? And then here is minimizing something with a minus, which is the same as maximizing the same thing without the minus, OK? I don't ask, so far, so good, because I'm scared. So what you see now is that basically if the data were centered, basically this would just be a variance. If the data are centered, so there is a minus 0 here, maybe you can interpret this as measuring the variance in one direction. And so you have another interpretation of PCA, which is the one where instead of picking the single direction with the best possible reconstruction, you're picking the direction where the variance of the data is bigger, OK? And these two points of view are completely equivalent. Essentially, whenever you have a square norm, thinking about maximizing the variance or minimizing the reconstruction are two complementary dual ideas, OK? So that's what you will be doing here. One more bit. What about computation? So this is-- so we can think about reconstruction. You can think about variance, if you like. What about this computation? What kind of computation is this, OK? If we massage it a little bit, we see that is just an eigenvalue problem. So this is how you do it. This actually look-- so it's annoying, but it's very simple. So I wrote all the passages. This is a square, so it's something times itself. This whole thing is symmetric, so you can swap the order of this multiplication. So you get w transpose xi, xi transpose w. But then this is just the sum that was going to involve these terms. So I can let the sum enter, and this is what you get. So you get w transpose 1/n xi xi transpose w. So this is just a number. w transpose xi is just a number. But the moment you look at something that looks like xi, xi transpose, what is that? Well, just look at dimensionality, OK? 1 times d times d times 1 gives you a number, which is 1 by 1. Now you're doing the other way around. So what is this? AUDIENCE: It's a matrix. LORENZO ROSASCO: It's a-- AUDIENCE: Matrix. LORENZO ROSASCO: It's a matrix. And it's a matrix which is d by d, and it's of rank 1, OK? And what you do now is that you sum them all up, and what you have is that this quantity here becomes what is called the quadratic form. It is a matrix C, which just looks like this. And it's squeezed in between two vectors, w transpose and w. So now what you want to do is that you can rewrite this just this way as maximizing the w-- sorry, finding the unit norm vector w that maximizes this quadratic form. And at this point, you can still ask me who cares, because it's just keeping on rewriting the same problem. But it turns out that essentially using Lagrange theorem, it is relatively simple to do, you can check that-- oh, so boring-- that the solution of this problem is the maximum eigenvector of this matrix, OK? 
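Numerically, that conclusion collapses to a couple of lines of linear algebra: build C from the rank-one terms and take its top eigenvector. A minimal sketch, assuming the rows of X are the data points; the synthetic data at the end are purely illustrative.

import numpy as np

def first_direction(X):
    # C = (1/n) * sum_i x_i x_i^T, assembled as X^T X / n.
    n, d = X.shape
    C = X.T @ X / n
    evals, evecs = np.linalg.eigh(C)   # eigh: C is symmetric, d x d
    return evecs[:, -1]                # eigenvector of the largest eigenvalue

# Reconstruction error of projecting onto w: mean_i ||x_i - (w . x_i) w||^2.
X = np.random.randn(200, 5) * np.array([3.0, 1, 1, 1, 1])  # stretch one axis
w = first_direction(X)
proj = (X @ w)[:, None] * w            # best approximation of each x_i along w
err = np.mean(np.sum((X - proj) ** 2, axis=1))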
So this you can leave as an exercise. Essentially, you take the Lagrangian of this and use a little bit of duality, and you show that the [INAUDIBLE] of this problem is just the maximum eigen-- so the eigenvalues-- ugh, the eigenvector corresponding to the maximum eigenvalue of the matrix C. So finding this direction is just solving an eigenvalue problem. OK. I think I'll just do the last few of those slides-- kind of cute. It's pretty simple, OK? So this line-by-line part after lunch is a bit there because I'm nice. But really, the only part which is a bit more complicated is this one here. The rest is really just very simple algebra. So what about k equal 2? I'm running out of time. But it turns out that what you want to do is basically if you say that you want to look for a second-- so you look at the first direction. You solve it, and you know that it's the first eigenvector. And then let's say that you add the constraint that the second direction you find has to be orthogonal to the first direction. You might not want to do this, OK? But if you do, if you say you add the orthogonality constraint, then what you can check is that you can repeat-- sorry, I didn't do it. It's in my notes, the ones that I have on the website, and the computation is kind of cute. And what you see is that the solution of this problem, which looks exactly like the one before, only with this additional constraint, is exactly the eigenvector corresponding to the second largest eigenvalue, OK? And so you can keep on going. And so now this gives you a way to go from k equal to 1 to k bigger than 1, and you can keep on going, OK? So if you're looking for the directions that maximize the variance or the reconstruction, they turn out to be the biggest eigen-- the vectors-- ugh, the eigenvectors corresponding to the biggest eigenvalues of this matrix C, which you can call the second moment or covariance matrix of the data. OK, so this is more or less the end. This is the basic, basic, basic version of this. You can mix this with pretty much all the other stuff we said today. So one is, how about trying to use kernels to do a nonlinear extension of this? So here we just looked at the linear reconstruction. How about nonlinear reconstruction? So what you would do is that you would first map the data in some way and then try to find some kind of nonlinear dimensionality reduction. You see that what I'm doing here is that I'm just using this linear-- it's just a linear dimensionality reduction, just a linear operator. But what about something nonlinear? What if my data lie on some kind of structure that looked like that-- our beloved machine learning Swiss roll. Well, if you do PCA, well, you're just going to find a plane that cuts that thing somewhere, OK? But if you try to embed the data in some nonlinear way, you could try to resolve this. And this has been much of the research done in the direction of manifold learning. Here there are just a few keywords-- kernel PCA is the easiest version, Laplacian eigenmaps, diffusion maps, and so on, OK? I'll only touch quickly upon random projections. There is a whole literature about those. The idea is, again, that by multiplying the data by random vectors, you can keep the information in the data and might be able to reconstruct them as well as to preserve distances. Also, you can combine ideas from sparsity with ideas from PCA.
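Before the sparse-PCA example that follows, here is a minimal sketch of the k-bigger-than-one recipe just described; centering is included so that C is the covariance rather than the bare second moment, and the function name and shapes are illustrative.

import numpy as np

def pca_embed(X, k):
    # Map rows of X from R^d to R^k using the top-k eigenvectors of C.
    # eigh returns an orthonormal basis, so the k directions are
    # automatically mutually orthogonal, as in the constrained problem above.
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / X.shape[0]
    evals, evecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = evecs[:, ::-1][:, :k]           # top-k directions, largest first
    return Xc @ W                       # the k-dimensional coordinates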
For example, you can say, what if I want to know not only-- I want to find something like an eigenvector, but I would like most of the entries of the eigenvector to be 0. So can I add here a constraint which basically says, among all the unit vectors, find the one whose entries are most-- so I want to add an L0 norm or L1 norm. So how can you do that, OK? And this leads to sparse PCA and other structured matrix estimation problems, OK? So this is, again, something I'm not going to tell you about, but that's kind of the beginning. And this, more or less, brings us to the desert island, and I'm done. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_0_Tomaso_Poggio_Introduction_to_Brains_Minds_and_Machines.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TOMASO POGGIO: This problem of intelligence, it's one of those problems that mankind has been busy with for the last 2,000 years or so. But 50 years ago or so, that was the start of artificial intelligence. It was a conference in Dartmouth, '56 or so, with people like John McCarthy and Marvin Minsky, who coined the term artificial intelligence. And at that time, progress was made. Progress has been made, especially in the last 20 years. I'll go through it. But they relied, really, only on computer science and common sense. And in the meantime, there are all these other disciplines which have made a lot of progress, and that are very likely to play a key role in the search for answers to the problem of intelligence. So it was obvious that we needed different expertises. Not all in computer science, but in other ones. And so, these were the people that we put together from different labs, from neuroscience, from computer science, from cognitive science, and from a number of institutions in the US. Especially MIT and Harvard. Let me tell you a bit more about the background here. This idea of merging brain research and computer science in the quest to understand intelligence. Part of the reason for this was progress and convergence we saw between different disciplines. And one of them was progress in AI. And this started, really, with Deep Blue, I guess it was called at the time. The IBM machine that managed to beat Kasparov at chess for the world championship. And then, of course, there was Watson beating champions in Jeopardy. And things like drones able to land on aircraft carriers. So that's the most difficult thing for the pilot to do. And in the meantime, things had continued to go pretty fast. This was the cover of Nature, probably eight months ago or so. DeepMind, which is one of our industrial partners in the center, has developed an artificial intelligence called DeepQ I think, that learned to play better than humans, 49 classical Atari games. By itself. And this was two or three months ago, a cover of a Nature supplement, on artificial intelligence and machine learning. This is showing a system by Mobileye, this is an old video, that gives vision to cars. There is a camera looking outside, and it is able to brake and accelerate when needed. [AUDIO OUT] There have been, there are, and there will be a lot of significant advances in AI. I think it's a golden age for intelligent applications. You know, if people want to make a lot of money with useful things, that's the time. But this is kind of engineering. Interesting one, but engineering. And we are still very far from understanding how people can answer questions about images. This is one of the main focuses in the center, really. How does your brain answer simple questions about this image? About what is there? And what is this? Who is this person? What is she doing? What is she thinking? Please tell me a story about this, what's going on? [INAUDIBLE] And we would like to have a system that does that. But also, to know how our brain does it. So that's the science part.
It's not enough to pass the Turing Test. In this case, to have a system that does it. We want to have a system that does it in the same way as our brain does it. And we want to compare your model, our system, with measurements on the brain of people, or monkeys, also during the same task. So that's what we call Turing-plus-plus questions. And part of the rationale about it is, this is kind of a more philosophical discussion. I personally think that it's very difficult to have a definition of intelligence, in general. There are many different forms of intelligence. What we can ask is questions about, what is human intelligence? Because we can study that. Right. You know, is, I don't know, the ENIAC computer of the '50s more or less intelligent than a person? You know, it can do things a person cannot do. And so on. There are certain things ants or bees do that are pretty amazing. Is this intelligence? Yeah, in a certain sense it is. So I think, in terms of a well-defined question, the real question is about human intelligence. And so that's what, from the scientific part, we are focused on. And we would like to be able to answer how people understand images. We start with vision. We are not limited, eventually, to vision. But in the first five years of the center, that's the main focus. And answer the question about images. And we want to understand how the answers are produced by our brain at the computational, psychophysical, and neural level. It's ambitious. And I think there are probably, in terms of having all these different levels-- levels of really understanding, from the what and where, the neuroscience, to the behavior. We are not yet at the point in which we can answer all those kinds of questions at all these different levels. But some, we are. One example is, who is there? It's essentially face recognition. And this is an interesting problem. Because we know from work, originally in the monkeys, and then with fMRI in humans. Shown here are parts of cortex which are involved in face recognition and face perception. And then, it's possible to identify analog regions in the monkey. And record from the different patches in the monkeys, each one probably around 100,000 neurons, maybe 200,000 or so. And look at their properties when the monkey is looking at a face. And make models of what's going on. And, of course, we want these models to respect the neural data, ideally the MRI data. And do the job of recognizing faces as well as humans do. So we are getting there. I'm not saying we have the answers, but we have at least models that can be tested at all these different levels. So that's kind of the ideal situation, from the point of view of what we want to do in the center. Now, as I said, not all problems are mature at this level. There are certain ones, like telling a story. We don't know exactly. We cannot record yet from neurons in the monkey, when the monkey is telling a story. Because the monkey has not been able to tell its story, right. And so there are other questions that are not as advanced as this one. But other types of studies can be done on them, and should be done. And this is what we'll hear about. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_11_Nancy_Kanwisher_Human_Cognitive_Neuroscience.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. NANCY KANWISHER: So I'm going to talk today about a couple of things. I had a hell of a time constructing a nice clean narrative arc to everything I wanted to say. And so I finally just decided, the hell with it. I'm just going to be honest. There's several different pieces. They don't make a narrative arc. That's life. I want to address how what I see as a sort of macroscopic view of the organization of the human brain gives us a kind of picture of what I'm going to call the architecture of human intelligence. We're trying to understand intelligence in this class. And so I think the overall organization of the human brain-- in understanding which we've made a lot of progress in the last 20 years-- gives us a kind of macro picture of what the pieces of the system are. So I'll talk about that. And then I'll also-- if I talk fast enough-- do a kind of whirlwind introduction to the basic methods of human cognitive neuroscience using face recognition as an example to illustrate what each of the methods can and cannot do. So that's the agenda. It's going to be pretty basic. So if you've heard me speak before, you've probably heard a lot of this. Anyway, the key question we're trying to address in this course is, how does the brain produce intelligent behavior? And how may we be able to replicate that intelligence in machines? So there's, of course, a million different ways to go at that question. And you can go at it from a kind of computational angle, a coding perspective, from a fine-grained neural circuit perspective. But I'm going to do something that's kind of in between, because those are the things we can approach in human brains. And it's really human intelligence we want to understand. It's the sum of human intelligences. A lot of it is things that we share with animals, but some of it is not. And so I think it's important to be able to approach this not just from the perspectives of animal research, magnificent as those methods are, but to also see what we can learn about human brains. OK. So I'll talk a bit about the overall functional architecture of the human brain. What are the basic pieces of the system? And then I'll get into some different methods and what they tell us about face perception. OK. So at the most general level, we can ask whether human intelligence-- as people have been asking for centuries, actually-- whether human intelligence is the product of a bunch of very special purpose components, each optimized to solve a specific problem, kind of like this device here, where you have a saw for cutting wood, scissors for cutting paper. Saws don't work that well on paper, and scissors don't work that well on wood. Or whether human intelligence is a product of some more generic, all-purpose computational power that makes us generically smart without optimizing us for any particular task. And just to foreshadow the answer, as in all questions in psychology, the answer is both. But we'll do that in some detail. Before we get into that, who cares?
And I'd say, first of all, this kind of macro level question about functional components of the human mind and brain matters for a bunch of reasons. First of all, I just think it's one of the most basic questions we can ask about ourselves-- about who we are-- is to ask what the basic pieces are of our minds. Second, more pragmatically, this kind of divide and conquer research strategy has been effective in lots of different fields that are trying to understand a complex system. What do you do with this incredibly complex system, where you just can't even figure out how to get started? Well, one sensible way to get started is first figure out what its pieces are and then maybe try to figure out how each of the pieces work. And then maybe some day, maybe not in my lifetime, figure out how they all work together in some coordinated fashion. And third, somewhat more subtly, of course, we want to know not just what the pieces are, but what the computations that are performed in each of those pieces and what the representations extracted in each piece are. And I think even just a functional characterization of the scope of a particular brain region already gives us some important clues about the kinds of computations that go on there. So if we find that there's a part of the brain that's primarily involved in face recognition, not in reading visually presented words, recognizing scenes, or recognizing objects, that already gives us some clues about the kinds of computations that would be appropriate for that scope of task. So if you tried to write the code to do that, you'd be writing very different code if it only had to do face recognition versus if it also had to be able to recognize words and scenes and objects presented visually. OK. So that's my list of the main reasons. And of course, there are heaps of different ways to investigate this question, and I'll mention some of those in the second half. But I want to start with Spearman, who published a paper in 1904 in the American Journal of Psychology. This article was sandwiched between a discussion of the soul and an article on the psychology of the English sparrow. And in this article, Spearman did the following low tech but fascinating thing. He tested a whole bunch of kids in two different schools on a wide variety of different tasks. And this included scholastic achievement type things. He got exam grades from each student in a bunch of different classes. And he measured a whole bunch of other kinds of psychological abilities, including some very psychophysical perceptual discrimination abilities. How well could people discriminate the loudness of two different tones, the brightness of two different flashes of light, the weight of two different pieces of stuff? And what he found-- well, before I tell you what he found, what would you expect with this? Should we expect a correlation between your ability to discriminate two different loudnesses and, say, your math score in grade five on a math exam? Spearman's main result is that most pairs of tasks were correlated with each other. That is, if you were good at one, you're good at the others-- even tasks that seemingly had very little to do with each other. And this is the basis of the whole idea of g, which is the general factor, which is what led to the whole idea of IQ and IQ testing. And in America, we're very uptight about the idea of IQ. Brits don't seem to have a problem with this idea. They're very enthusiastic about the idea and always have been. 
But aside from all the social uses and misuses of IQ tests, the point is there's actually a deep discovery about psychology that Spearman made from the fact that all of these tasks were correlated with each other. He didn't know what it was, kind of like Gregor Mendel inferring genes without knowing anything about molecular biology. Spearman just inferred there's something general about the human intellect such that there are these strong correlations across tasks. OK, so that's g. But less well known about Spearman's work is that he also talked about the specific factor, s. And s was the fact that although the broad result of his experiments was that most pairs of tasks were correlated, there were some tasks that weren't so strongly correlated with others, and that you could factor those out and discover some mental abilities that weren't just broadly shared across subjects. And I think this kind of foreshadows everything that we see with functional MRI. There's a lot of specific s's, and there's also some g. And you can see those in different brain regions, as I will detail next. Another method was invented by Franz Joseph Gall. And he argued that there are distinct mental faculties that live in different parts of the brain, which I think is more or less right, as I'll argue. But Gall lived in the 1700s, and he didn't have an MRI machine. So he did the best he could, which wasn't so hot. He invented the infamous method of phrenology. He felt the bumps on the skull and tried to relate those to specific abilities of different individuals, and from this, inferred 27 mental faculties. My favorites are amativeness, filial piety, and veneration. And so there's a kernel of the right idea, but kind of the wrong method. And another method that was a very early one was the method of studying the loss of specific mental abilities after brain damage. And so Flourens, who's often credited as being the first experimental neuroscientist, went around making lesions in pigeons and rabbits and then tested them on various things. And he didn't really find that it made much difference to their mental abilities what parts of the brain he took out. Maybe that's because he wasn't such a hot experimental-- he didn't have great experimental methods. In any case, he argued that all sensory and volitional faculties exist in the cerebral hemispheres and must be regarded as occupying concurrently the same seat in those structures. In other words, everything is on top of everything else in the brain. So that was a sort of dominant view for a while. People thought Gall was kind of a crackpot, even though he wrote very popular books and went around Europe giving popular lectures that huge numbers of people attended. The respectable intellectual society didn't take him seriously. In fact, the whole idea of localization of function wasn't taken seriously until Paul Broca, a member of the French Academy, stood up in front of the Society of Anthropology in Paris in 1861 and announced that the left frontal lobe is the seat of speech. And this was based on his patient Tan, whose brain is shown here. Tan was named Tan because that was all he could say after damage to his left inferior frontal lobe. And Broca pointed out that Tan had lots of other mental faculties preserved, and it was simply speech that was disrupted. And from this, he was one of the first respectable people to argue for localization of function. OK. So this research program goes on.
And by the end of the 20th century, there's pretty much agreement that basic sensory and motor functions do exhibit localization of function in the brain. There are different regions for basic visual processing, auditory processing, and so forth. And that was no longer controversial. But the whole question of whether higher level mental functions were localized in distinct parts of the brain was controversial then and remains controversial now. And so the method I'll focus on is functional MRI, because I think it's played a huge role in addressing this question at this macroscopic level. And I think you guys know what an MRI machine is. In case anybody has been on Mars for a while, the important part is that it's a very indirect measure of neural activity by way of a long causal chain. Neurons fire, you incur metabolic cost, and blood flow changes to that region. The blood flow increase more than compensates for oxygen use, producing a local decrease rather than the expected increase in deoxyhemoglobin relative to oxyhemoglobin. Those two are magnetically different. That's what the MRI machine detects. It's very indirect, so it's remarkable it works as well as it does. And it's currently the best noninvasive method we have in humans in terms of spatial resolution, not temporal resolution. OK. So many of you are already diving into the details of some of the data we collected. But in case you're on other projects and are coming from other fields, the basic format of the data in a typical functional MRI study is you have tens of thousands of three dimensional pixels or voxels that you scan. And typically, you sample the whole set once every two seconds or so. You can push it and do it every second or less under special circumstances. You can have more voxels by sampling at higher resolution, but that's a ballpark of the format of the kind of movie you can get of brain activity. OK. So a few things about the method and its limitations, because they're really important in terms of what you can learn from functional MRI and what you can't. So first of all, this is a timeline. My x-axis, even though it's invisible, is time in seconds. And so if you imagine looking at V1 and presenting a brief, say, tenth of a second high contrast flash of a checkerboard, what we know from neurophysiology is that neurons fire within 100 milliseconds of a visual onset. The information gets right up there really fast. The BOLD, which stands for Blood Oxygenation Level Dependent, or functional MRI response, is way lagged behind this. So the neurons are firing way over here in this graph, essentially at time zero-- a tenth of a second. But the MRI response is six seconds later, OK? So it's really slow. And that has a bunch of implications about what we can and cannot learn from it. So first of all, because it's so slow, we can't resolve the steps in a computation for fast systems like vision and hearing and language understanding-- systems for which we have dedicated machinery that's highly efficient, where you can recognize the gist of a scene within a quarter of a second of when it flashes on a screen in front of you. And similarly, you understand the meaning of a sentence so rapidly that you've already parsed much of the sentence well before the sentence is over. So these are extremely efficient rapid mental processes. That means the component steps in those mental processes happen over a few tens of milliseconds. And we're way off in temporal resolution with functional MRI.
All of those things are squashed together on top of each other. That's a drag. That's just life. We can't see those individual component steps with functional MRI. The second thing is that the spatial resolution is the best that we have in humans noninvasively right now, but it's absolutely awful compared to what you can do in animals. So I missed Jim DiCarlo's talk yesterday, but those methods are spectacular. You can record from individual neurons, record their precise activity with beautiful time information. In contrast, functional MRI is like the dark ages. We have, typically, hundreds of thousands of neurons in each voxel. So the real miracle of functional MRI is that we ever see anything at all rather than just garbage, because you're summing over so many neurons at once. And it's just a lucky fact of the organization of the human brain that you have clusterings of neurons with similar response properties and similar functions at such a macro grain that you can see some stuff with functional MRI, although you miss a lot as well. The third important limit of functional MRI that comes out of just a consideration of what the method measures is that you can only really see differences between conditions with functional MRI. The magnitude of the MRI response in a voxel at a time point is meaningless. It might be 563, and that's all it means. Nothing, right? It means nothing. It's just the intensity of the MRI signal. The only way to make it mean something is to compare it to something else-- usually two different tasks or two different stimuli. And so you can go far with that, but it's important to realize you can't translate it into any kind of absolute measure of neural activity. It's only a relative measure of strength of neural activity between two or more different conditions. OK. And the final deep limitation of functional MRI is that we use this convenient phrase "neural activity." It's very convenient, because it's extremely vague. And fittingly so, because we don't know exactly what kind of neural activity is driving the BOLD response. It could be spikes or action potentials. It could be synaptic activity that doesn't lead to spikes. It could be tonic inhibition. It could be all kinds of different things. Anything that's metabolically expensive is likely to increase the blood flow response. In practice, when people have looked at it, it's very nicely correlated with firing rate-- with some bumps and caveats, so you can never be totally sure. But it's a pretty good proxy for firing rate. You just need to remember in the back of your mind that it could be other stuff too. The final, very important caveat is that functional MRI-- like most other methods where you're just recording neural activity in a variety of different ways-- you're just watching. You're not intervening. And that means you're not measuring the causal role of the things you measure. And that's very important, because it could be that everything you measure is just completely epiphenomenal and has absolutely nothing to do with behavior. So in practice, it's unlikely that you have all this systematic stuff for no reason, but you need to keep in mind that functional MRI affords no window at all into the causal role of different regions. For that, you need to complement it with other methods. So despite all these limitations, I think functional MRI has had a huge impact on the field.
And admittedly, I'm biased, but I think it's one of these things where as it happens, we get so used to a result the minute it gets published. It was like, oh, yeah, right. One of these, one of those, so what? But I think it's important to step back, so I made a bunch of pictures to show you why I think this is important. OK. Here is Penfield's functional map of the human brain, published in 1957, a year before I was born. And he has six-- count them, six-- functional regions labeled in there. You probably can't see them. But it's the basic sensory and motor regions, visual cortex, auditory cortex, motor cortex, speech up here in Broca's area, and then my favorite is this word that says interpretive. Nice. OK. Anyway, this was based on electrical recording and stimulation in patients with epilepsy who were undergoing brain surgery. Actually a very powerful method, but that's where it got him. He published this near the end of his career. And that's nice, but it's pretty rudimentary. OK, now, cut to 1990, immediately before the advent of functional MRI. And this is really crude-- the black outlines are the basic sensory and motor regions. And I've added a couple of big colored blobs for regions that had been identified by studying patients with brain damage. So even from Broca and Wernicke, it was known that approximately these regions were involved in language, because people with damage there lost their language abilities. You get whacked in your parietal lobe, you have weird attentional problems, like neglecting the left half of space and stuff like that. If you have damage somewhere to the back end of the right hemisphere, you might lose face recognition ability. These things were known by around 1990, not much else. That's basically the functional map of the brain in 1990. That probably seems like ancient history to a lot of you, but not to me. OK. Here we are today. There's a lot of stuff we've learned, right? There are a lot of particular parts of the human brain whose function has been characterized quite precisely. Not in the sense that we know the precise circuits in there or that we can very precisely characterize the representations or computations, but in the sense that we know that a region may be very selectively involved, for example, in thinking about what other people are thinking. A totally remarkable result that Rebecca Saxe, who discovered it, will tell you about when she's here next week. So that was completely unknown even 15 years ago, let alone back in 1990. And likewise, most of these other regions were either known in the blurriest sense or not with this precision. So I think even though this is very limited, and it's kind of step zero in trying to understand the human brain, I think it's important progress. And I think to push a little farther, I'd like to see this as an admittedly very blurry picture-- but still a picture-- of the architecture of human intelligence. What are the basic pieces? What is it we have in here to work with when we think? We have these basic pieces-- a bunch more that haven't been discovered yet, and a lot more that we need to know about each of these and how they interact and all of that, but a reasonable beginning. So that's my story here for fun. This is me with a bunch of functional regions identified in my brain. And so the argument I'm making here is that the human mind and brain contains a set of highly specialized components, each solving a different specific problem, and that each of these regions is present in essentially every normal person.
It's just part of the basic architecture of the human mind and brain. Now, this view is pretty simple. But nonetheless, it's often confused with a whole bunch of other things that people think are the same thing and that aren't, so it's starting to drive me insane. So I'm going to take five minutes and go through the things this does not mean. And I hope this doesn't insult your intelligence, but it's amazing how in the current literature in the field people conflate these things. So I'm talking about functional specificity, which is the question of whether this particular region right here is engaged pretty selectively in just that particular mental process and not lots of other mental processes. That's what I mean by functional specificity. That's a different idea than anatomical specificity. Anatomical specificity would say it is only this region that's involved, and nothing else is involved. That's a different question. How specific is this region versus are there other regions that do something similar? Also an interesting question, but a different one. I'm going to go through this fast. So if any of it doesn't make sense, just raise your hand and I'll explain it more. Yet another idea is the necessity of a brain region for a particular function. That's actually what we really want to know with the functional specificity question-- is not just does it only turn on when you do x, but do you absolutely need it for x? And so that's actually a central question that's closely connected. It's really part of functional specificity. It's the causal question. It's different from the question of sufficiency. Is a given brain region sufficient for a mental process? Well, I think that's just kind of a wrong headed question, because nothing's ever sufficient. It's just kind of a confused idea. What would that mean? That would mean we excise my face area, we put it in a dish, keep all the neurons alive. Let's pretend we can do that. I'm sure Ed Boyden could figure out how to do that in a weekend. And so we have this thing alive in a dish, can it do face recognition? Well, of course not. You got to get the information in there in the right format. And if the information doesn't get out and inform the rest of the brain, you don't have a face percept, right? So you need things to be connected up, and you need lots of other brain regions to be involved. So let's distinguish whether this brain region is functionally specific for a process from whether it's sufficient for the whole process. Of course it's not sufficient. All right. I know you guys would never say anything so dumb. OK. A question of connectivity-- so people often say, oh, well, this region is part of a network, period. And my reaction is, duh. Of course it's part of a network. Everything's part of a network. In no way does that engage with the question of whether that region is functionally specific. A functionally specific region of course is part of a network. It talks to other brain regions. Those other brain regions may play an important role in its processing, sure. At the very least, they're necessary for getting the information in and out and using it. OK? OK. All right. The final thing that people confuse with functional specificity is innateness. This is a very different concept. Just because we have some particular part of the brain for which we have really strong evidence that it's very specifically involved in mental process x, that's cool. That's important.
That's completely orthogonal to how it got wired up and whether that whole circuit is innately specified in the genome-- or whether that circuit is instructed by experience over development, or as in the usual case, very complicated combinations of those two. So just to remind you that functional specificity is a different question from innateness. And one way you can see that very clearly is to consider the case of the visual word form area, about which I'll show you some data in a moment. The visual word form area responds selectively to words and letter strings in an orthography you know, not an orthography you don't know. It's very anatomically stereotyped. Mine is approximately right there, and so is yours in your brain. And it responds to orthographies you know. If you can read Arabic and Hebrew, yours also responds when you look at words in Arabic and Hebrew. If you can't, it doesn't, or it responds a whole lot less. So that's a function of your individual experience, not your ancestor's experience. It has strong functional specificity, and yet, its functional specificity is not innate. So this idea that I'm staking out here has become kind of unpopular. It's very trendy to say, of course we know the brain doesn't have specialized components. So for example, here's from a textbook. Scott Huettel-- unlike the phrenologists who believe this very stupid idea that very complex traits are associated with discrete brain regions, modern researchers recognize that a single brain region may participate in more than one function. Well, he built in the hedge word "may," so we can't really have a fight. But he's trying to stake out this different view. Lisa Feldman Barrett-- I haven't met her, but she's driving me insane, most recently by proclaiming all kinds of things in The New York Times just a few weeks ago. Quote, "in general, the workings of the brain are not one to one, whereby a given region has a distinct psychological purpose." Well, she's got hedge words "in general." We all have hedge words. But basically, what she's doing is reasoning from the fact that her data suggest that specific emotions don't inhabit specific brain regions to the idea that the whole brain has no localization of function. Well, that's idiotic. It's just idiotic, right? So I hope that people will stop these fast and loose arguments. But here's my favorite-- this old coot Uttal. I know this is going to be on the web, and here I am carrying on as if we are-- anyway, whatever. This guy cracks me up. He's been publishing. Every year, he publishes another book going after functional MRI. Any studies using brain images that report single areas of activation exclusively associated with a particular cognitive process should be a priori considered to be artifacts of the arbitrary threshold set by the investigators and seriously questioned. You go. So anyway, that's fun. Anyway, my point is just that we should engage with the data, right? This isn't like an ideology, where we can just proclaim our opinions. There are data that speak to it. So let me show you some of mine. OK. So what would be evidence of functional specificity? There are lots of ways of doing it. The way I like to do it is something called a functional region of interest method. The problem is that although there are very systematic regularities in the functional organization of the brain, each of these regions that I'm talking about is in approximately the same location in each normal subject. Their actual location varies a bit from subject to subject.
So if you do the standard thing of aligning brains and averaging across them, you get a lot of mush, and yet there isn't much mush in each subject individually. And so to deal with that problem-- and to deal with a bunch of other problems-- we use something called a functional region of interest method. And that means if you want to study a given region, you find it in that subject individually. And then once you've found it with a simple contrast-- you want to find a face region, you find a region that responds more when people look at faces than when they look at objects. Now you found it in that subject. It's these 85 voxels right there in that subject. Now we run a new experiment to test more interesting questions about it, and we measure the response in those voxels. OK? That also has the advantage that the data you plot and look at is independent of the way you found those voxels-- a very important problem in a lot of functional neuroimaging, where people have non-independent statistical problems with their data analysis. If you have a functional region of interest that's localized independently of the data you look at in it, you get out of that problem. It's also a huge benefit, because one of the central problems with functional brain imaging, which I think has led to the fact that a large percent of the published neuroimaging findings are probably noise, is that there are just too many degrees of freedom. You have tens of thousands of voxels. You have loads of different places to look and ways to analyze your data. One of the things I love dearly about the functional region of interest method is that you tie your hands in a really good way, right? So you specify in advance exactly where you're going to look, and you specify exactly how you're going to quantify the response. And so you have no degrees of freedom, and that gives you a huge statistical advantage. And it means you're less likely to be inadvertently publishing papers on noise. OK. So that's the functional region of interest method. We've done loads of these experiments. Here's just from a current experiment in my lab being conducted by Zeynep Saygin. She's actually looking at connectivity of different brain regions using a different method I probably won't have time to talk about. It's very cool. But in the process, she's run a whole bunch of functional localizers. And so we can look in her data at the response of the fusiform face area to a whole bunch of different conditions. So these are a bunch of auditory language conditions, so, OK, not too surprising. It doesn't respond very much to those. They're presented auditorily, but these are all visual stimuli here. The two yellow bars are faces. This is line drawings of faces. This is color video clips of faces-- strong responses to both. And all of these other conditions-- line drawings of objects, movies of objects, movies of scenes, scrambled objects, words, scrambled words, bodies-- all produce much lower responses. OK? So I would say this is pretty strong selectivity. It's been tested against lots of alternatives, only a tiny percent of which are shown here. As I mentioned before, it's present in more or less the same place and pretty much every normal subject. I think it's just a basic piece of mental architecture. Now, this is a very simple univariate measure. We're just measuring the very crude thing of the overall magnitude of MRI response in that region to these conditions. 
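In code form, the functional region of interest method just described is essentially a two-step recipe. This sketch treats BOLD data as plain (timepoints x voxels) arrays; the array shapes, the voxel count of 85, and the function name are illustrative, not a real analysis pipeline.

import numpy as np

def froi_responses(loc_faces, loc_objects, test_runs, n_voxels=85):
    # Step 1 (localizer): find the voxels with the largest
    # faces-minus-objects contrast in an independent run.
    contrast = loc_faces.mean(axis=0) - loc_objects.mean(axis=0)
    roi = np.argsort(contrast)[-n_voxels:]
    # Step 2 (test): quantify each new condition ONLY inside those
    # voxels, so the plotted data are independent of how the ROI was
    # selected -- no non-independence problem, no free parameters.
    return {cond: data[:, roi].mean() for cond, data in test_runs.items()}

The hand-tying the transcript describes is visible here: both where to look (roi) and how to quantify (a simple mean) are fixed before the test data are ever examined.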
There are legitimate counter-arguments to the simple-minded view I'm putting forth, and we should consider them. I think the most important one comes from pattern analysis methods, which I will talk about if I get there. And importantly, these data don't tell us about the causal role of that region. We'll return to those. However, the point is, before we blithely say it's not fashionable to talk about functional specificity, we need counterarguments to data like this. They're pretty strong. And that's just one example; let me show you just a few others from Zeynep's paper. OK, so this is what I just showed you, but in the same experiment, we can look at other brain regions. OK. So this is a bottom surface of the brain there, so this is the occipital pole, front of the head, bottom of the temporal lobe. That face area is the region in yellow in this subject. This purple region is that visual word form area that I mentioned, and here is its response magnitude across a whole bunch of subjects, localizing and then independently testing. The purple bars are when subjects are looking at visually presented words. And again, all these other conditions-- faces, objects, bodies, scenes, listening to words, all of those things-- much lower response. In the same experiment, we can also look at a set of regions that respond to speech. I mentioned those very briefly in my introduction a few days ago. These are regions a number of people have found. In this case, they're immediately below or lateral to primary auditory cortex in humans, interestingly situated right between primary auditory cortex and language sensitive regions. Right between is the set of regions that respond to the sounds of speech-- not to the content of language, but the sounds of speech. And so this is when people are saying stuff like, "ba da ga ba da ga." So they're just lying in the scanner, saying, "ba da ga ba da ga." And here's when they're tapping their fingers in a systematic order. Here's when they're listening to sentences. Importantly, this is when they're listening to jabberwocky gobbledygook that's meaningless. So no meaning, but phonemes-- same response. That's what tells us that this region is involved in processing the sounds of speech, not the content of language, and gives a low response to everything else. So other things-- moving outside of perceptual regions, you might say, OK, fine. Perception is an inherently modular process. There are different kinds of perceptual problems; that makes sense. But high level cognition-- we wouldn't really have functional specificity for that. But oh, yes, we do. Here are some language regions. There's a bunch of them in the temporal and frontal lobe that have been known since Wernicke and Broca. But now, with functional MRI, we can identify them in individual subjects and go back and repeatedly query them and say, are they involved in all of these other mental processes? So this is now the response in a language region-- so identified, here's the response when you're listening to sentences. This is when you're listening to jabberwocky nonsense strings. Here's when you're saying "ba da ga ba da ga." It's not just speech sounds. Here's when you're listening to synthetically decomposed speech sounds that are acoustically very similar to the jabberwocky speech. It's just not interested in those things. It seems to be interested in something more like the meaning of a sentence. And just to show you some other data we have on this, this is data from Ev Fedorenko, who has tested this region.
Now, this is sort of roughly Broca's area, and these are the main mental functions that people have argued overlap in the brain with language. Namely-- sorry, this is probably hard to see here, but arithmetic, so we have difficult and easy mental arithmetic. Intact and scrambled music in pink. A bunch of working memory tasks-- spatial working memory and verbal working memory-- and a bunch of cognitive control tasks-- just kind of an attention demanding task where you have to switch between tasks and stuff like that. And here is the response profile in that region. Reading sentences, reading non-word strings. All of those other tasks, both the difficult and the easy version-- no response at all. That's extreme functional specificity, right? It's not that we've tested everything; there's more to be done. But the first pass querying of whether those language regions engage in all of these other things that people thought might overlap with language? The answer is no, they don't. And I think that's really deep and interesting, because it bears on this basic question that we all start asking ourselves when we're young: what is the relationship between language and thought? I know Liz disagrees with me somewhat on this. That's because she's very articulate, and she doesn't feel the difference between an idea and its articulation. I'm less articulate. It's very obvious to me they're different things. No, it's not the only reason. She has data, too, and it'd be fun to discuss. But I think there's a vast gulf between the two in that many different aspects of cognition can proceed just fine without language regions. And actually, the stronger evidence for that comes not from these functional MRI data, striking as I think they are, but from patient data. So Rosemary Varley in England has been testing patients with global aphasia. This is this very tragic, horrible thing that happens in patients who have massive left hemisphere strokes that pretty much take out essentially all of their language abilities. Those people, she has shown, are intact in their navigation abilities, their arithmetic abilities, their ability to solve logic problems, their ability to think about what other people are thinking, their ability to appreciate music, and so on and so forth. So I think there's really a very big difference between a major part of the system that you need to understand the meaning of a sentence and all of those other aspects of thought. This is just showing you what I mean by functional specificity-- what the basic first order evidence is. And these are just the regions that we happen to have in this study so I could make a new slide. But for lots of other perceptual and cognitive functions, people have found quite specific brain regions for perceiving bodies and scenes and, of course, motion. The area MT has been studied for a long time-- regions that are quite specifically involved in processing shape. We've been studying color processing regions recently. They're not as selective for color as some of these other regions, but they're very anatomically consistent. And things I mentioned before in my brief introduction-- regions that are specifically involved in processing pitch information and music information, and as you'll hear next week from Rebecca Saxe, theory of mind or thinking about other people's thoughts. And so there's quite a litany of mental functions that have brain regions that are quite specifically engaged in that mental function.
And each of these-- to varying degrees, but to some appreciable degree-- have corroborating evidence from patients who have that specific deficit. So that shows that each of these is likely to be not only activated during, but causally involved in, its mental function. And as I mentioned, there are actually good counter-arguments to some of the claims I've been making that are worth discussing. I think the pattern analysis data is the strongest. Oh, and I do need to take a few more minutes. Just like five or something? OK. So all of that's to say, here's roughly where we are now. There are counter-arguments, but if you want to engage in loose talk about, oh, there's no localization of function in the brain, you've got to engage with this data first and give me a serious counter-argument. OK. Finally, I want to say that it's not that the whole brain is like this, right? There are big gray patches where we haven't figured out what they're doing, but there are also substantial patches that have already been shown to be, in some sense, the opposite of this. Regions that are engaged in almost any difficult task you do at all. And I think this is a very interesting part of the whole story of the architecture of intelligence, so I'm going to take five minutes and tell you about it. This work is primarily the work of John Duncan in England. And he's been pointing out for about 15 years that there are regions in the parietal and frontal lobe shown here that are engaged in pretty much any difficult task you do. Any time you increase the difficulty of a task-- whether it's perceptual or high level cognitive-- those regions turn on differentially. And so that's why he calls them multiple demand. They respond to multiple different kinds of demand. Duncan argues that these regions are related to fluid intelligence. So remember Spearman, who I started with, who talks about the general factor, g. Well, Duncan thinks that basically, this is the seat of g-- these regions here-- to oversimplify his argument. There's multiple sources of evidence for that. And one is, well, they're strongly activated when you do classic g-loading tasks. That's not that surprising. They're activated in all different kinds of tasks. More interestingly, he did a large patient study, where they found 80 or so neuropsych patients in their patient database. And they identified the locus of the brain damage in each of those patients. And what they did was they measured post-injury IQ. They estimated from a variety of sources pre-injury IQ. And they asked, how much does your IQ go down after brain damage as a function of one, the volume of tissue you lost in the brain damage, and two, the locus of tissue? And basically, what he finds is if you lose tissue in these regions, your IQ goes down. If you lose tissue elsewhere, you may become paralyzed or aphasic or prosopagnosic. Your IQ does not go down. In fact, he made a kind of ghoulish calculation that you lose 6 and 1/2 IQ points for every 10 cubic centimeters of this region of cortex, and almost nothing for the rest of the brain. So this is kind of crude. It's very imperfect what you can get from a patient study, but I think it's intriguing. And so his suggestion is that in addition to these highly specialized cortical regions that we use for these particular important tasks, we also have this kind of general purpose machinery that makes us generically smart. And I'm going to skip around. We've tested this more seriously. He did group analyses, which I don't like.
We did it in collaboration with him with individual subject analyses, the most precise measurements we could make, and boy, is he right. I mean, even to the voxel you can find that these regions are engaged in seven or eight very, very different kinds of cognitive demand-- all activate the same voxel differentially. So the basic story I'm putting forth here-- without the second half of my talk, I'm sorry about that-- is that at a macro scale, the architecture of human intelligence is that we have these special purpose bits for a smallish number of important mental functions, not all of them innate-- maybe some of them. In addition, we have some general purpose machinery. There's loads more that we don't know from the precise computations that go on in these things, to their connectivity, to the actual precise representations that you can see with the neural code if you could measure it, which we can't in humans, to the timing of these complex interactions, which of them are uniquely human, which of them are also present in monkeys. And I don't have time to go find the slide, but one of the things we've been doing recently is looking in the ventral visual pathway at the organization of face, place, and color selectivity. And what we see is that-- we is me and Bevil Conway and Rosa Lafer-Sousa. Bevil and Rosa had previously shown that on the lateral surface in the monkey, you have three bands of selectivity. So it goes face selectivity, color selectivity, place selectivity, and three bands on the side of the temporal lobe in monkeys. We find this in humans. You have exactly the same organization in the same order, but it's rolled around on the ventral surface of the brain in the same order-- face, color, place-- on the bottom of the brain. So we think that whole broad region is homologous between monkeys and humans. It just rolled around on the bottom. Maybe it got pushed over [AUDIO OUT] something. And that's not exactly a novel argument. Actually, Winrich wrote a paper suggesting this a while back, and I think we're starting to see those homologies. And the reason that's important is that it means that all these questions we desperately want to answer about the causal role, connectivity, population codes, [AUDIO OUT] interactions between regions, development-- all of that that we can't answer very well in humans, Winrich can answer in monkeys. And after a break, he will tell you about all of that. [APPLAUSE] |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_33_Alia_Martin_Developing_an_Understanding_of_Communication.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ALIA MARTIN: I'm going to be talking today about how infants and kids start to develop an understanding of communication and why this is an important topic for understanding human intelligence. So what is communication? So really, basically, communication is a transfer or exchange of information. And importantly, in human communication, this isn't just any kind of information, but it's specifically the kind of information that's in our heads and in our minds. So for example, there's information in my head right now that I'm going to be transferring to you over the course of this talk. And the reason we need communication to do this and that we use it all the time is obviously that everyone has minds of their own and that we don't have access to the mental states of others. And we can't automatically transfer our own mental states to theirs without some means of making those mental states observable in the form of communicating with them. So people are both cognitive beings-- we have to sort of navigate the thoughts in our heads and try to figure out the thoughts that are in others' heads-- but we're also social beings. We really benefit from gaining access to the thoughts in the minds of others and giving them access to our own thoughts. So for example, for cooperation, competition, learning, and teaching, and all the kinds of social interactions we do with each other, we have to be able to share the contents of our mental states. And it's this sort of joint, both cognitive and social nature of communication that I think makes it especially important for understanding the development of human intelligence. It sort of crosscuts a lot of these core knowledge domains that Liz talked about in her talk. So human communication requires reasoning about others' cognitive states, so understanding what others' beliefs, and desires, and intentions are. It also requires reasoning about social interaction, so understanding that typically these kinds of thoughts, and beliefs, and intentions are not shared among different people. Everyone has their own that are unobservable and housed in their own heads. But in the context of a particular kind of social interaction, these people can intentionally share their thoughts with each other. And communication often involves reasoning about a third thing, which is language or communicative signals outside of language as well. And so I'm going to come back to this point at the end, but all these types of reasoning are going to come up in the studies about infant communication that I describe in the talk. And so Liz talked about how infants start out with these coherent, separate, principled core knowledge systems that contain some limited representations for reasoning about the world, and which kind of come together around the end of the first year.
So if we want to figure out how to build a model of how social cognitive development works, we're going to need to understand how an infant starts to figure out, using the core knowledge that they have early in life, the components of communication, and also how these components come together so that infants can build a more complex causal model of communication like adults have. So this is just to illustrate the points I'm making about the communicative situation. So in any given communicative situation, we can't just think about the words that are being said or what we're hearing. We need to think about it in this broader context of having a communicator or maybe multiple communicators, an addressee or an audience, and also the things that the communicator is saying in the broader context that allow us to figure out what's going on in this social interaction. So it really requires understanding that the communicator's mental states are being transferred in this causal way to the mind of the addressee. So I'm going to structure the talk in terms of some important insights from the philosophy of human language and communication that I think have really guided the way researchers in the last, say, 50 years have been thinking about the development of human communication as well. So I'm going to broadly illustrate the three points that I'm taking from each, and then I'll expand on them in each section. So the first insight that I think is really important for understanding how we think about communication comes from John Austin, who brought up an important point. So before Austin, people tended to think about language or study language a lot in terms of language itself, and the semantics, and syntax, and just the language signal. But Austin pointed out that language is actually not just about the content, but it's also an action. It's actually something that we use, and we do things with it, in the same way that we do things and accomplish things in the world using other kinds of actions that we engage in. So Austin sort of brought up this important distinction that language is not just about the content. So to illustrate with an example, in this communicative interaction, there's language here. This person is saying, is there any salt? And Austin analyzed this in terms of not only what the words meant, but in terms of three layers of intentionality. So in this simple interaction, he pointed out that there's a locutionary act, or the meaning of the sentence, which is that there's a question being asked about the presence of salt. But that there's actually something else going on here as well, which is that there's an illocutionary act, which is what he considers to be the intention underlying the action, which is to request the salt. So if you actually just read this sentence, it's kind of under-determined. It's not necessarily obvious that this person's asking the other person at this table to pass the salt. But if you take it within the broader context of the interaction, you can understand that what's really going on here is not just a sentence being produced, but rather one individual requesting something from another, who is then supposed to understand the request, and, in the third layer of analysis-- the perlocutionary act-- cause the addressee to provide the salt. So the idea is that there are these multiple things going on in the context of a communicative interaction involving language that go beyond the words that are actually being spoken here.
And so if you're an infant, just like in philosophy of language, the focus in infant cognitive development-- also for a long time, and still is, we still need to understand these things-- was on understanding how infants learn words, or how infants figure out the meaning of language, and how it relates to objects in the environment, and later, to the abstract concepts that language indicates. But importantly, understanding communication for an infant is going to be more than about just figuring out the meaning of these words. And so following the philosophical tradition of people like Austin, who introduced the study of pragmatics to language, developmental psychologists as well started thinking about the importance of recognizing communication as this broader action in its context for understanding how infants come to be good communicators themselves. So if communication involves this whole context of action and interaction between people, if you're a baby who's in the business of understanding the social world, the agent world, and the world of language, you're going to be well-served by developing an ability to identify these kinds of communicative actions or situations and their components, and to begin to understand how they work. So how do infants start to figure out when communication is going on in the world around them, rather than just identifying words and associating them with objects? So actually, there's a lot of evidence that infants are identifying key features of communicative situations really early in life. So from the time they're born, newborn human infants prefer listening to speech over other sounds. So the typical method that's used to measure the preference of an infant who's only one to four days old is to have them suck on a pacifier that's connected to a machine that detects how strongly infants are sucking. And then the sucking is used as a measure of infants' arousal upon hearing a particular sound or being exposed to a particular stimulus. And so what researchers found is that if you have infants suck on this pacifier while listening to speech sounds and nonspeech sounds in alternation, nonspeech sounds being sine wave-produced sounds that are very similar to speech in their features. If you listen to it, it has the prosodic contours of speech and sounds a lot like it, but it's not actually speech. You can't actually parse any words from it. Infants showed a preference for listening to speech over nonspeech, or more arousal when they listened to speech. So they maintained their arousal for speech over the course of the experiment, but for nonspeech, it declined. And so extremely early in life, infants seem to have this bias for paying attention to the primary communicative signal of our species, that is, human speech. Quite early in life as well, infants recognize some of the important features of the social context surrounding speech, so getting more into the domain of the important features for communication. So by six months, infants seem to recognize that speech is human-produced, or at least associate speech with other humans, and also human-directed. So in one study, infants saw pictures of human faces or faces of monkeys, and they heard different sounds. So they either heard-- oh, it seems like the sound is not on, but that's OK. So they either-- oops. So they either heard a speech sound in a language they'd never heard before repeating itself, or they heard a monkey call.
And infants in one trial either saw-- so in the first trial, say, they'd see the human face, and then they could listen repeatedly to a speech sound until they looked away for two seconds. And then they would see a monkey face, and then they would hear either a speech sound or a monkey sound and be able to look at this image while listening to that sound until they looked away for two seconds. And so they got all possible combinations. Sometimes they saw a human face and listened to speech. Sometimes they saw a human face and listened to the monkey sounds. And sometimes they saw the monkey face and listened to either human speech or the monkey sound. And the question was whether infants actually could match speech sounds to humans and recognize that they should be produced by the human rather than the monkey. And that's what they found. So you can see that when infants heard speech, they were much more likely to look at the-- yeah, they were looking at the human when they heard speech, and then they were looking at the monkey when they heard the monkey calls. Similarly, infants seem to understand by six months that speech is directed at other humans. So when they saw a person talking behind a barrier versus acting, so swiping with her hand behind a barrier, and then the barrier was removed to reveal either a person or an object, infants looked longer, after the speaking familiarization, when they saw that there had been an object behind the barrier rather than a person, suggesting that they expected speech to be directed toward another person. And in contrast, when they saw the person swiping, they looked much longer when they saw a person behind the barrier because typically, we don't swipe at people. We tend to speak to them. So this suggests that infants understand some of the features of the context in which human communication is produced. Importantly, infants do still seem to be developing this ability over the first year of life because it's not until 10 months that they expect mutual gaze between speakers, which tends to be an important part of understanding the social context of communication. So in this study, infants saw two people facing each other. This is just one of the experiments from the study. But you can see that they saw the two people looking at each other and speaking to each other, or the two people looking apart and speaking to each other. Infants at nine months didn't differentiate between these at all. So at nine months, they didn't necessarily think that the people were going to look at each other when they were speaking. But at 10 months, you can see here that infants looked longer for the averted gaze than for the mutual gaze. So infants seem to be able to recognize when a communicative context is going on early in life. But we don't necessarily know from these studies whether they really understand that communication is happening or whether they're just detecting some important features of communicative interactions that might help them to glom onto communication so that they can eventually develop an ability to figure out what's going on within the communicative interaction themselves. So, so far in the studies that I've mentioned, there's nothing cognitive here. There's nothing really about infants having to understand the intentions or the mental states of the speaker going into the mind of the addressee, like I was talking about before.
So a question I've explored in my research is whether infants recognize that when these features are in place, when you have a communicative signal, which is something they recognize, when you have a communicator and addressee who are socially engaged with each other, do infants recognize that communication can actually lead to the transfer of information? So do infants understand that speech-- because this is something that infants seem to recognize as a signal that's important in communicative contexts-- can transfer information between individuals? So we can use speech to communicate with each other about what we're interested in. So if I'm observing this interaction between a communicator and an addressee, I can infer that if the communicator says, the cup, and there's only one cup present, the addressee's probably going to be able to figure out what it is that the communicator wants. And I can figure this out even from a third party perspective. And I can also understand that other kinds of sounds, like perhaps a positive emotional vocalization, are not going to be as effective in communicating to the addressee what the communicator is interested in. So speech and also other communicative signals as well that can specify this sort of information, perhaps like pointing or particular kinds of gestures, can transfer information from one individual to another. So speech is going to be more effective for communicating than other kinds of noncommunicative actions. And importantly, for the purposes of this study, you don't know that communicative information transfer is going on just because you know what the word cup means. Even in a situation where you're listening to a foreign language and have no idea what the words mean, you're going to understand that some kind of communicative information transfer can happen. So we've all had the experience of being in a foreign country, presumably, where you're listening to people speak to each other. You can't necessarily understand what they're saying, but you know that information transfer is happening. So the interesting thing about communicative actions like speech is that when we witness them, we can know that people are communicatively sharing information, even when we don't know what information is being communicated ourselves. And this insight suggests that maybe, if you're an infant, one really good way to jump into the communicative interactions around you and start to understand what's going on is to be able to identify when others are communicating. And hearing the sounds of speech exchanged between two people in the context of a social interaction might be a really good way to do this. So an ability to figure out when communication's going on like this might provide infants with this really powerful way of being able to track the information flow between two other people's minds and also to figure out, based on their responses to each other, what the actual content of the words might mean. So we asked whether infants recognize that speech is communicative at 12 months of age. And so in the procedure of this study, infants were privy to a third party interaction between a communicator and an addressee. And the infant is always just observing the interaction. This is so that we can isolate whether infants think that certain communicative vocalizations can transfer information from the communicator to an addressee, even when the infant themselves has all the information.
So the third party nature of this procedure is important here for seeing whether infants really are thinking about the fact that thoughts are typically isolated in particular people's minds, but with communication, they can be shared. So infants in these studies were given a violation of expectation paradigm. In this kind of experiment, as many of you probably know, infants are shown a story through a series of scenes. And then in the last scene or series of scenes, they're shown different kinds of endings. And we're interested in whether infants are surprised or look longer at some endings more than they do at others. So infants in these studies were shown a live display where they had an actor in front of them on a stage. First, they were familiarized with this actor showing a preference for an object by picking it up and playing with it repeatedly in three separate scenes. And so, as in the studies by Amanda Woodward that Liz already mentioned, infants, by 12 months and as early as three months, will attribute a goal to this person for reaching for that particular target object. So after being familiarized to this, infants saw an addressee, who was a new person, present in a totally different part of the stage. They had never seen this person before, and they'd never seen the two people together before. This person reached for both objects in turn. So first, she grabbed this one, then this one, and then this one, and then this one again to show that she didn't have a preference between the objects and could reach both of them. Then in the test scene, so this is the ending that we showed infants, they saw the two people together for the first time, but now the communicator couldn't reach the objects because only her face had access to the stage. But the addressee could still reach them just fine. At this point, the crucial manipulation was the vocalization uttered by the communicator. So she either produced a speech sound-- the nonsense word koba, which infants had not heard before-- a coughing sound, so something that would typically be physiological and not intentional or communicative in any way, or an emotional vocalization, so a sound that perhaps could convey information. If I say, ooh, you might think I'm interested in something, I'm feeling positive, but you won't necessarily know what it is. So this one's sort of in the middle of the other two. And then, infants saw the addressee either provide the target object that the communicator had reached for before or the nontarget object. And the question is, do infants understand speech as a communicative signal-- do they have some expectation that even though they've never heard this particular speech sound used before, speech can transfer information such that the addressee will now be able to select the correct object? Whereas nonspeech vocalizations like coughing and emotional vocalizations can't. So if so, infants in the speech condition should look longer at nontarget than target responses, finding these unexpected. And infants in the other two conditions shouldn't differentiate between the two. And this is what we found. So in the speech condition, infants are looking longer at nontarget than target, suggesting that they understand that speech can transfer information about the communicator's goal. But when the communicator coughs or produces an emotional vocalization, infants don't show these same expectations.
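As a side note on how results like these are quantified, here is a minimal sketch of the looking-time comparison in Python. The numbers below are hypothetical, invented purely for illustration and not taken from the study; the point is just that the violation-of-expectation logic reduces to a within-infant comparison of looking at the two endings.

```python
import numpy as np
from scipy import stats

# Hypothetical looking times (seconds) for eight infants in the speech
# condition; each infant contributes a measurement for both endings.
target = np.array([8.1, 6.4, 9.0, 7.2, 5.8, 10.1, 6.9, 7.7])
nontarget = np.array([12.3, 9.8, 13.5, 11.0, 8.9, 14.2, 10.4, 12.0])

# Longer looking at the nontarget ending is read as a violated
# expectation, tested here with a paired comparison across infants.
t, p = stats.ttest_rel(nontarget, target)
print(f"mean difference = {(nontarget - target).mean():.1f} s, "
      f"t = {t:.2f}, p = {p:.4f}")
```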
So this is some initial evidence that by 12 months, infants not only recognize the context in which communication occurs and attend to speech as a special signal for communication, but they also understand that speech is something that's able to transfer information between two people. So we also ran a few control conditions to rule out alternative explanations for this. So one important question is, are infants really reasoning about the addressee's access to information in this situation? So for example, in the case where the-- if infants are really reasoning about the addressee's access to information, then in a case where the communicator makes a positive emotional vocalization, and the addressee has previous information about what she's interested in, so for example, if she just saw that the communicator was trying to feed her child or wanted a drink of water, she might know now that a positive emotional vocalization is indicating something like the cup. So we set up a scenario like this as well. In this case, it's exactly the same as the previous study, except that the addressee had visual access to the communicator's preference during the familiarization phase. So now she knows what it is that the communicator likes. And this is important for making sure the infants recognize that something like visual access can provide the addressee with information about the communicator's interest. So now infants saw the same addressee familiarization. And then in the test, they saw the communicator produce the vocalization ooh once again. So now even though ooh was not treated as a communicative vocalization in the previous studies, if infants are reasoning about the kinds of information that the addressee has access to, they should now expect that the addressee will provide the target object because she knows from a previous scene what the communicator was interested in. So this is important also for ruling out the possibility that maybe just hearing these noncommunicative vocalizations surprises or confuses infants or makes them unable to reason about the scenario anymore. Here, they're getting a noncommunicative vocalization, but the addressee has access to information. And here, as in the speech condition, infants looked longer at the nontarget outcome. So this is just some evidence that infants are really thinking about the idea that relevant sources of information, such as a communicative vocalization, but also prior visual access, can give the addressee information about the communicator's goal. So they are reasoning about information access. Another important question is, are they really thinking about the source of information, or do they have-- so what's really the mechanism behind what infants are doing here? Are they thinking something like, when I hear speech, everything will go well, or the speaker is going to get what she wants? Or are they really thinking about communication in this causal way, where the information has to come from the speaker and be delivered to the listener in order for it to work? So to test for this, infants saw another communicator familiarization. Here, the communicator is alone, so they've never seen the two people together before. She reaches for the target object. Then they see the addressee again. And then in the test scene, in this case, the addressee is the one who produces the speech. So speech is present in the scene as it was before, but it's not being produced by the communicator or the person who showed a preference.
So if infants just think that speech leads to people obtaining their goals or magical outcomes happening, then they should expect the addressee to provide the target here as well. But if they understand that they don't know anything about the addressee-- in which case, this is not really informative about either object-- then they should look equally to both outcomes. And in fact, this is what they do at 12 months. So infants here don't expect information to be transferred from one person to another, unless the first person who communicated is the one who actually had the information to provide. So by 12 months, infants seem to recognize that speech is communicative, and they seem to have some of the parts of a causal model of communication. They're not just thinking about speech as something that can produce successful outcomes, but as something that can be used to have information move from the mind of one individual to another. So a 12-month-old seemed to recognize that speech is communicative. But we really wanted to get at the idea that understanding that speech is communicative might be something that drives and guides word learning and language acquisition, rather than something that comes as a result of it. So what a 12-month-old could be doing in this study is hearing the word koba, associating it with the object that the communicator had reached for, and then thinking, OK, that one's the koba, so that's what the addressee should reach for. But in these studies, we were interested in whether infants at an even younger age, who would be very, very unlikely to be associating the word with the object or learning a label for the object over the course of the study, would also recognize that speech can transfer information from one person to another person. And so for this reason, we tested 6-month-olds, because 6-month-olds understand only some very, very common words in their environment and will look to the right object when they hear the label for it. But these are very, very limited words, and there's no evidence for learning a word in a single trial, which is what they would have to do in this study. They would have to hear koba, and then think back to what the communicator had reached for, and really learn that word over the course of the study, which we have no evidence that a 6-month-old can do. So the goal of looking at this age group was to see whether infants might have a more abstract understanding of the idea that when speech is produced, information can be transferred from one person to another, even when they themselves don't know what that information is-- or what the link between the word and the object is. So a 6-month-old saw the same scenes. We gave them the speech versus cough contrast. And they look the same as the 12-month-olds here. So you can see that in the speech case, they're looking longer for the nontarget outcome. And in the cough case, they're looking longer for the target outcome. And so we haven't done all of the same control conditions as with the 12-month-olds here. So I think there's a lot of room for questions about how a 6-month-old's understanding of communication in this causal way might be more limited than the understanding of 12-month-olds. Would a six-month-old think that if speech was produced by a loudspeaker or by the addressee, that the communicator would still get the right object? So there are a lot of open questions about-- these basic questions about how infants' understanding of communicative information transfer starts out.
So these experiments suggest that by six months, infants seem to understand that speech is communicative, in addition to the other studies that suggest they understand some of the features of communicative interactions. They recognize that it transfers information from one person to another. And I'd like to argue that this might be something that provides a mechanism for language and knowledge acquisition, so sort of guides infants to the kinds of relevant interactions where they might want to learn things about people, and their mental states, and the words that they're using, rather than first learning those words through association and then later coming to this more abstract understanding. So the idea is that infants might start out with this understanding of what communication is and when it's happening, and then can sort of fill in the rest from there, and that this might be one of the earlier building blocks of that. So I'll try to go fairly quickly through the rest. So the second insight I wanted to bring up that I think is especially important is that communication requires this focus on intentions. And this was really the work of Grice that highlighted this, and in particular, a special type of communicative intention. So going back to the example from the beginning: when someone asks, is there any salt, Austin proposed that there are these three levels on which we can think about this communicative action. And Grice was the one who really formalized this by talking about this special kind of intention that we see in the domain of communication, human communication in particular, which is the idea of speaker meaning. And so his idea is that there's this double-layered intention that comes when we're communicating with each other. So we have a communicator who intends the addressee to respond in a particular way. So here, the communicator wants the addressee to provide some salt. The communicator intends for the addressee to recognize that the communicator intends to have that response. So this person not only wants this person to pass the salt, but wants her to recognize that that is what he's asking for, that that is the request that he's making. And importantly, there's this third point, which is that the communicator intends the addressee to respond that way on the basis of the recognition of that intention. And so to make this a little more concrete, in this case, if he asks her for the salt, and she passes it because she heard him and understands that's what he's asking for, that's great. But the argument is that communication wouldn't really be happening if, for example, she was wearing headphones and listening to music, and she hadn't really been listening to him, and she just happened to pick up the salt and pass it to him. So communication, I mean, it would look like a communicative interaction. The right kind of outcome is still happening in response to what he said. But if she has no access to the actual message, if she doesn't produce the response by virtue of understanding what the communicator is trying to say, then communication hasn't really occurred. It's just sort of a lucky accident. So I'll skip this part. So this is this important kind of intention that we see in human communication, which, as I'll briefly mention later, we don't really see in animal communication in the same sort of way.
So how do infants start to get this idea about communication, that it's not just about identifying communicative interactions, but also about understanding these particular kinds of intentions? So in the '90s, there was this shift in developmental psychology to looking at word learning not only in terms of infants' associations of words in the environment, but also, with the work of Dare Baldwin, in terms of thinking about other people's intentions when they're using words. And so she had these really elegant studies-- with this funny image-- where she pointed out that if infants really were learning words through association, then it wouldn't be very efficient for them because they would make a lot of mistakes. So for example, in this kind of situation, here's a dad and his baby. The baby is looking at this lizard. The dad is looking at this rooster. And the dad says, what a cheeky rooster. So if you as the infant are only learning words on the basis of associating what you hear with what you see, you're going to learn that the word rooster refers to this thing rather than to this thing. And you're going to get it wrong. So Baldwin did a whole host of studies in the second year of life where she set up situations where infants were looking at particular objects like here. And then she had their parents or an experimenter look at a different object and label that one to see what infants would do. And what she found is that infants actually would consult their parent or consult the other person for cues to reference, to what they're intending to label. So if, for example, in this situation, the infant, when hearing this, instead of just assuming the word refers to what they're looking at, would actually look up to dad to see what he's looking at and then follow his line of gaze to understand that this is the object that he's actually talking about. AUDIENCE: At this age? At this age? ALIA MARTIN: Yeah, at this age. Yeah. So the idea is that infants in the second year of life, at least, are understanding that understanding what someone's referring to involves consulting cues to their attention and intentions rather than just the infant's own. And I think I'll skip this one. There are other studies showing this, too. So in this one, there's an experimenter who puts these two objects in a box. The objects switch while she's absent. And then she looks in one box and says-- the boxes are closed, and she says, there's a sefo in here. So if the infant understands that they have to pay attention to what the person knows about or what the person thinks rather than their own knowledge to figure out what she's labeling, then now when she says, can you get the sefo, the infant should actually pull the object out of here, assuming that this is the one she's labeling, because that's the one that she was intending to label. And in fact, that's what infants do at 17 months. In the case where the experimenter had a false belief about the location of the object, infants tend to go to the non-referred box more often because they think that that's the one that she's labeling. But in the case where she saw the switch happen, so now she knows where everything is, infants will just go to the box that she's referring to to figure out the location of the object that she's naming. So this is all in the second year of life, not those younger core knowledge ages that Liz was talking about.
So this is evidence that infants consult speaker cues to intentions, but not yet evidence of the special kind of intention that Grice was pointing out. So I'm just going to tell you about a couple-- well, one of my favorite studies, I just think that this is really neat-- where Ellen Markman's lab-- and this was later replicated by Tomasello's lab-- showed that children by 30 months, and by 18 months in the work of Grosse et al., actually do seem to care about not only the effects of communication, but also that the effects are produced by virtue of the message being understood. So you're probably wondering why there's a dirty sock on the screen. So in this study, children were presented with two objects. So they saw, on a table, there were two objects that were out of their reach, a truck and a dirty sock. And the idea was to elicit children to request an object. Now obviously, the children are going to request this object, which is what the researchers had intended in this case. So children tended to point at this object. So they had children producing their own communication, and then the experimenter responded to the communication of the infant in one of four different ways. So she either said, you asked for the truck? I'm going to give you the truck. So she expressed understanding of the request, and she provided the requested object. In a second case, she said, you asked for the truck? I'm going to give you the sock. So here, she expressed understanding, but refused to comply with the request, to give the thing the child had asked for. In a third case, the experimenter said, you asked for the sock? I'm going to give you the truck. So this is kind of a strange pragmatic situation where the experimenter actually gave the child the object they wanted, but she expressed a misunderstanding of the child's request. And this is the crucial case here. So the question is, obviously, in the case where the experimenter provides the thing and expresses understanding, children should be quite happy. And in this situation, they probably shouldn't be very happy. And what the experimenters measured was the number of times that the child repeated the label of the object that they had asked for. And they also measured other behaviors as well. It's this situation that's important for understanding what kids care about in the context of this communicative interaction. So the question is, are infants perfectly happy here to take the truck and not complain because they got what they wanted? Or do they care about the experimenter giving them what they wanted because their communicated message had the proper effect on the addressee? And what they find is that when the correct object is provided, and the experimenter does not express understanding of the request, infants actually showed the most repetitions of the name of the requested object in this case. The researchers concluded from this that infants don't just care about getting what they want or using communication as a sort of instrumental means of getting people to act in certain ways, but rather that they intend to have their messages understood and delivered to the other person. So they care about the impact of their communicative signals on the understanding of the addressee, not just on the response of the addressee, which is sort of like the idea that Grice was talking about with speaker meaning. Similarly, when children are responding to other people's requests, they also care about this. So this is a study that I did in graduate school.
It's sort of the opposite of the previous study, or the inverse. So we had an experimenter requesting from the child specific objects for doing specific tasks. This is 3-year-olds. So she would request something like a cup to pour a cup of water. The child had previous information from playing games with the objects with another experimenter that one of the cups was perfectly fine, and one of the cups was broken and had a big hole in the bottom. And so when the experimenter asked for a cup, the child could either give the cup the experimenter asked for or choose the other one-- sometimes the experimenter requested the perfectly good cup, and sometimes the broken cup. And the question was, do children pay attention to the fit between the task and what the experimenter wanted when they're responding to her request? And what we found is that children were much more likely to give the requested object when the experimenter had requested a functional object than when she'd requested a dysfunctional object. So if I say, I need to pour a cup of water, can you give me that cup, and the cup is broken, children tended to go and get a better cup instead of the one that I had requested. Interestingly, though, even though children did this, it didn't seem to be enough for them to give the experimenter something good that she wanted. This is a graph of the comments that children made about the function of the objects depending on the kind of request that was made. And what you can see is that when a dysfunctional object was requested, when children tended to provide the functional object instead and not respond to the request, they were much more likely to try to explain their behavior to the experimenter and acknowledge what it was the experimenter had originally wanted. So it seems like children in their own behavior, by at least age three, which is a little older than the other studies, acknowledge what the speaker meant to ask for and explain why they're doing something else, even when they're not responding to that thing. And so, just briefly, in the last section, I'll talk about a third insight about communication that I think is really important, which comes from Clark, which is that communication is this joint action of accumulating common ground. And so in an example of an adult study about this, they showed adults pictures like this one of New York City, and they had people play this game in pairs. They had people who knew about New York City who were experts and who lived there, and then they had other people who didn't know anything about New York City. And they gave them a bunch of pictures. They told them that their goal was to-- the person who was the addressee had to sort the objects in the way that the communicator told them to. And so as a communicator, the communicator had to indicate, because they couldn't see the pictures, which picture the addressee should put where. And to do this, the communicator had to refer to things like the picture with the Empire State Building in it. And what they looked at was how people in these interactions accumulated shared knowledge or common ground over time and coordinated so their communication could become more efficient. And what they found is that when they had two New Yorkers who were interacting with each other, they tended to very quickly do the task because they could recognize immediately, that person's a New Yorker, and would just say things like, oh, it's the one with the Empire State Building.
Whereas when they were talking to someone who wasn't a New Yorker, they had to sort of ground the conversation by establishing common reference before getting to this point. And so they might start out by saying things like, oh, move the one with the building with the pointy top. And then eventually, they would coordinate on what the actual labels for these things were. And so Clark's idea is that communication is really efficient, and we're able to do it in the way we are because we're thinking about the common ground we have with other people. And we're able to figure out what kind of common ground we have with others fairly quickly. So if I'm interacting with one of you, I might assume, OK, we both know a lot about cognitive science already. So I can sort of start at a different level than I might start with, say, a child or someone who didn't know anything about this area. And so this is an important piece of human communication. It seems like-- and Liz talked a little about this, too-- it seems like infants are starting to show some signs of understanding the importance of common ground or shared knowledge in communication from a really early age. But importantly, this seems to come in around the same time that they recognize the importance of speakers facing each other in conversation and perhaps putting together some of their core knowledge domains, which is around nine months to a year. A lot of work in this domain has been done by Tomasello, who showed that around nine months, children were starting to do something different than they'd been doing earlier. So under nine months, children tend to-- they play with objects, they interact with people, but they don't seem to put these two things together. Whereas at nine months, what they start doing is paying attention, not just to objects or to people, but to objects and people at the same time in the context of a joint interaction. So they might do things like look at an object and then look at mom to make sure mom is also looking at it. Or they might point at things, not just because they want the things, as a younger child might do, but only to share attention with a parent or with someone else to point out that they're interested in something, and to make sure that the parent is looking and is interested in it as well. So Tomasello argues that this ability to engage in joint attention, sharing attention with someone else, to an object or external referent in the world is the foundation of linguistic communication and also cooperation in other very important human activities. And it's certainly going to be important for an understanding of common ground, which is important for communication. So there's also evidence that around 12 months, infants start to use prior shared experience to interpret communication. I'm going to skip this, I think. But basically, the idea-- well, I'll just go through it quickly-- the idea is that infants will use the activities and objects that they've shared with people previously to figure out what the person means in a new case. So if the infant interacts with one communicator with one toy and with another communicator with a different toy separately, they will figure out the referent of a communicator's ambiguous request by thinking about what information they've shared in the past. So it seems like they're starting to track what kinds of knowledge are shared and what aren't in order to effectively communicate with others. OK.
So there's a lot of other important questions about common ground, but I'm going to skip those now. I just want to come back to the question of why infants' understanding and children's understanding of communication is important for understanding human intelligence. So I think that one reason is that when you think about the insights these philosophers had and how they seem to be realized in children and infants early in development-- when you look at these abilities that humans have, even very young humans, and you compare them to what nonhuman animals are doing, things look really different. And so, just briefly, in animal communication, we see some of the same kinds of features that we see in human communication. So for one thing, animal communication clearly has a social function in the same way that human communication does. So it's socially rich in a number of ways. I'm not going to go over specific species, but just to gloss over it, most animal communication is sensitive to the presence of an audience. So it matters that someone's there to hear your communication. For example, species that produce alarm calls to warn others in their group of predators will rarely produce these calls if there are no other members of their species present. In many cases, the sensitivity to the audience depends on the identity of the audience. So for example, in some species such as ground squirrels, they call much more in the presence of their kin than when their kin are not around or when there are other individuals there. And in some cases, it seems like there might actually even be a sensitivity to the knowledge state of the audience. So this is looking a little more like the kind of sophisticated communication we see in humans. So a wild chimpanzee, for example, will start to alarm call more if other chimpanzees come over who hadn't heard the original alarm call or who hadn't seen the predator. But if everyone around has already seen it, they'll reduce their alarm calling. So it seems like they tailor it to how much information others around them have. However, despite these really interesting ways in which animal communication is social and complex, it's also limited in a number of ways that human communication is not. So the eliciting stimuli for these communicative signals tend to be fairly limited, as do the signals themselves. So there tends to be, say in vervet monkeys, one cry for a hawk and one cry for a snake, and they can't create new signals for new kinds of predators and situations. As in humans, the receivers-- or in humans, the addressees-- acquire information from the signals of others, but there's no evidence that this information tells the receivers in animal species anything about the mental states of the communicator. And additionally, the communicator's signals can often cause a response in receivers that's beneficial to the communicator. For example, they get a bump to indirect fitness if their kin run away and survive predators. So there can be benefits of communication, but there's no evidence that the communicator has any intention of changing the receiver's mental state.
So there's really no evidence of this sort of speaker meaning or this special kind of communicative intention we see in humans, which is that a communicator doesn't just intend for the addressee to respond in a particular way, but intends for the audience to respond in a particular way by virtue of having understood the intention of that communication. And so this is just a, I think, particularly well-worded quote from Seyfarth and Cheney, who say basically that listeners can acquire information from signallers, but the signallers themselves don't, in the human sense, intend to provide that information. So the reason, really, for drawing this contrast between the human and animal cases is that I think if we want to build a model of human communication, we need to differentiate it from other kinds of communicative models we could have. And we also need it to develop the types of abilities that human infants have, but taking into account the resources that infants start out with and the developments that we see in the first few years of life. So it's not going to be enough to have agents that can influence each other's responses or who can understand language, because it seems like the recognition of these more abstract features of communication and its causal effects on mental states is actually present fairly early on as well. And I think just relating this back to Liz's theory that infants might have these different systems for agents, understanding agents and their actions on objects and then for social beings and their interactions with each other. It seems like communication and this type of communicative intention that we see combines these two kinds of things, where you have an intention to produce an effect on someone else, but by virtue of them understanding the mental states that you have toward the world as well as toward them. And maybe it's the combining of these different domains that helps infants put together their possibly human-unique, but maybe not, communication skills. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Tutorial_4_Ethan_Meyers_Understanding_Neural_Content_via_Population_Decoding.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ETHAN MEYERS: What I'm talking about today is neural population decoding, which is very similar to what Rebecca was talking about, except I'm now talking more at the single neuron level, and I'll also talk a bit about some MEG at the end. But kind of to tie it to what was previously discussed, Rebecca talked a lot about, at the end, the big caveat: you don't know if something is not there in the fMRI signal, because things could be masked when you're averaging over a large region, as you do when you're recording from those BOLD signals. And when you're doing decoding on single neurons, that is not really an issue, because you're actually going down and recording those individual neurons. And so while in general in hypothesis testing you can never really say something doesn't exist, here you can feel fairly confident that it probably doesn't, unless you-- I mean, you could do a Bayesian analysis. Anyway, all right. So kind of the very basic motivation behind what I do is, you know, I'm interested in all the questions the CBMM is interested in: how can we algorithmically solve problems and perform behaviors. And so, you know, basically as motivation, as a theoretician, we might have some great idea about how the brain works. And so what we do is we come up with an experiment and we run it. And we record a bunch of neural data. And then at the end of it, what we're left with is just a bunch of data. It's not really an answer to our question. So for example, if you recorded spikes, you might end up with something called a raster, where you have trials and time. And you just end up with little indications of what times a neuron spiked. Or if you did an MEG experiment, you might end up with a bunch of waveforms that are kind of noisy. And so this is a good first step, but obviously what you need to do is take this and turn it into some sort of answer to your question. Because if you can't turn it into an answer to your question, there is no point in doing that experiment to begin with. So basically, what I'm looking for is clear answers to questions. In particular I'm interested in two things. One is neural content. And that is what information is in a particular region of the brain, and at what time. And the other thing I'm interested in is neural coding, or what features of the neural activity contain that information. And so the idea is, basically, if we can make recordings from a number of different brain regions and tell what content was in different parts, then we could, basically, trace the information flow through the brain and try to unravel the algorithms that enable us to perform particular tasks. And then if we can do that, we can do other things that the CBMM likes to do, such as build helpful robots that will either bring us drinks or create peace. So the outline for the talk today is I'm going to talk about what neural population decoding is. I'm going to show you how you can use it to get at neural content, so what information is in brain regions. 
Then I'm going to show how you can use it to answer questions about neural coding, or how neurons contain information. And then I'm going to show you a little bit how you can use it to analyze your own data. So very briefly, a toolbox I created that makes it easy to do these analyses. All right, so the basic idea behind neural decoding is that what you want to do is you want to take neural activity and try to predict something about the stimulus itself or about, let's say, an animal's behavior. So it's a function that goes from neural activity to a stimulus. And decoding approaches have been used for maybe about 30 years. So Rebecca was saying MVPA goes back to 2001. Well, this goes back much further. So in 1986, Georgopoulos did some studies with monkeys showing that he could decode where a monkey was moving its arm based on neural activity. And there were other studies in '93 by Matt Wilson and McNaughton. Matt gave a talk here, I think, as well. And what he tried to do is decode where a rat is in a maze. So again, recording from the hippocampus, trying to tell where that rat is. And there's also been a large amount of computational work, such as work by Salinas and Larry Abbott, kind of comparing different decoding methods. But despite all of this work, it's still not widely used. So Rebecca was saying that MVPA has really taken off. Well, I'm still waiting for population decoding in neural activity to take off. And so part of me being up here today is to say you really should do this. It's really good. And just a few other names for decoding: it's MVPA, multivariate pattern analysis. This is the terminology that people in the fMRI community use and what Rebecca was using. It's also called readout. So if you've heard those terms, it kind of refers to the same thing. All right, so let me show you what decoding looks like in terms of an experiment with, let's say, a monkey. So here we'd have an experiment where we're showing the monkey different images on a screen. And so for example, we could show it a picture of a kiwi. And then we'd be making some neural recordings from this monkey, so we'd get out a pattern of neural activity. And what we do in decoding is we feed that pattern of neural activity into a machine learning algorithm, which we call pattern classifiers. Again, you've all heard a lot about that. And so what this algorithm does is it learns to make an association between this particular stimulus and this particular pattern of neural activity. And so then we repeat that process with another image, get another pattern of neural activity out. Feed that into the classifier. And again, it learns that association. And so we do that for every single stimulus in our stimulus set. And for multiple repetitions of each stimulus. So you know, once this association is learned, what we do is we use the classifier or test the classifier. Here we show another image. We get another pattern of neural activity out. We feed that into the classifier. But this time, instead of the classifier learning the association, it makes a prediction. And here it predicted the kiwi, so we'd say it's correct. And then we can repeat that with a car, get another pattern of activity out. Feed it to the classifier, get another prediction. And this time the prediction was incorrect. It predicted a face, but it was actually a car. And so what we do is we just note how often our predictions are correct. 
And we can plot that as a function of time and kind of see the evolution of information as it flows through a brain region. All right, so in reality, what we usually do is actually we run the full experiment. So we actually have collected all the data beforehand. And then what we do is we split it up into different splits. So here we had, you know, this experiment, let's say, was faces and cars or something. So we have different splits that have two repetitions of the activity of different neurons to two faces and two cars, and there's three different splits. And so what we do is we take two of the splits and train the classifier, and then have the remaining split and test it. And we do that for all permutations of leaving out a different test split. So you all heard about cross-validation before? OK. One thing to note about neural populations is when you're doing decoding, you don't actually need to record all the neurons simultaneously. So I think this might be one reason why a lot of people haven't jumped on the technique, because they feel like you need to do these massive recordings. But you can actually do something called pseudo populations, where you build up a virtual population that you pretend was recorded simultaneously but really wasn't. So if on the first day you recorded one neuron, and the second day you recorded a second neuron, et cetera, what you can do is you can just randomly select, let's say, one trial when a kiwi was shown from the first day, another trial from the second day, et cetera. You randomly pick them. And then you can just build up this virtual population. And you can do that for a few examples of kiwis, a few examples of cars. And then you just train and test your classifier like normal. But this kind of broadens the applicability. And then you can ask questions about what is being lost by doing this process versus if you had actually done the simultaneous recordings. And we'll discuss that a little bit more later. So I'll give you an example of one classifier. Again, I'm sure you've seen much more sophisticated and interesting methods, but I'll show you a very basic one that I have used a bit in the past. It's called the maximum correlation coefficient classifier. It's, again, very similar to what Rebecca was talking about. But all you do is-- let's say this is our training set. So we have four vectors for each image, each thing we want to classify. And all we're going to do is we're going to take the average across those trials to reduce these four vectors into a single vector for each stimulus. OK, so if we did that we'd get one kind of prototype of each of the stimuli. And then to test the classifier, all we're going to do is we're going to take a test point and we're going to do the correlation between this test point and each of the kind of prototype vectors. Whichever one has the highest correlation, we're going to say that's the prediction. Hopefully pretty simple. The reason we often use fairly simple classifiers, such as the maximum correlation coefficient classifier, is because-- or at least one motivation is because it can be translated into what information is directly available to a downstream population that is reading out the information in the population you have recordings from. So you could actually view what the classifier learns as synaptic weights to a neuron. You could view the pattern of activity you're trying to classify as the pre-synaptic activity. 
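To make that concrete, here is a minimal Python sketch of a maximum correlation coefficient classifier along the lines just described. This is my own illustration, not code from the talk or the toolbox; the function names and the toy Poisson data are invented for the example.

```python
import numpy as np

def train_max_corr(X_train, y_train):
    # Average the training vectors for each stimulus into one prototype.
    # X_train: (n_trials, n_neurons) firing-rate vectors; y_train: label per trial.
    classes = np.unique(y_train)
    prototypes = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes, prototypes

def predict_max_corr(classes, prototypes, X_test):
    # Predict the class whose prototype has the highest correlation
    # with each test vector.
    preds = []
    for x in X_test:
        corrs = [np.corrcoef(x, p)[0, 1] for p in prototypes]
        preds.append(classes[np.argmax(corrs)])
    return np.array(preds)

# Toy usage: 2 stimuli, 10 neurons, 6 training and 4 test trials.
rng = np.random.default_rng(0)
X_train = rng.poisson(5.0, size=(6, 10)).astype(float)
y_train = np.array([0, 0, 0, 1, 1, 1])
classes, prototypes = train_max_corr(X_train, y_train)
X_test = rng.poisson(5.0, size=(4, 10)).astype(float)
print(predict_max_corr(classes, prototypes, X_test))
```

Viewed the way the talk suggests, each prototype plays the role of a weight vector onto a downstream neuron, and the correlation is a normalized dot product with the pre-synaptic activity.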
And then by doing that dot product multiplication between the weights and the pre-synaptic activity, perhaps passed through some non-linearity, you can kind of output a prediction about whether there is evidence for a particular stimulus being present. All right, so let's go into talking about neural content, or what information is in a brain region, and how we can use decoding to get at that. So as motivation, I'm going to be talking about a very simple experiment. Basically, this experiment involves a monkey fixating on a point for-- well, through the duration of the trial. But first, there's a blank screen. And then after 500 milliseconds, up is going to come a stimulus. And for this experiment, there are going to be 7 different possible stimuli that are shown here. And what we're going to try to decode is which of these stimuli was present on one particular trial. And we're going to do that as a function of time. And the data I'm going to use comes from the inferior temporal cortex. We're going to look at 132 neuron pseudo populations. This was data recorded by Ying Jang in Bob Desimone's lab. It's actually part of a more complicated experiment, but I've just reduced it here to the simplest kind of bare bones nature. So what we're going to do is we're going to basically train the classifier on one time point with the average firing rate in some bin. I think in this case it's 100 milliseconds. And then we're going to test at that time point. And then I'm going to slide over by a small amount and repeat that process. So each time we are repeating training and testing the classifier. Again, 100 millisecond bins sampled every 10 milliseconds, or sliding every 10 milliseconds. And this will give us a flow of information over time. So during the baseline period we should not be able to decode what's about to be seen, unless the monkey is psychic, in which case either there is something wrong with your experiment, most likely, or you should go to Wall Street with your monkey. But you know, you shouldn't get anything here. And then we should see some sort of increase here if there is information. And this is kind of what it looks like from the results. So this is zero. After here, we should see information. This is chance, or 1 over 7. And so if we try this decoding experiment, what we find is during the baseline, our monkey is not psychic. But when we put on a stimulus, we can tell what it is pretty well, like almost perfectly. Pretty simple. All right, we can also do some statistics to tell you when the decoding results are above chance, doing some sort of permutation test where we shuffle the labels and try to do the decoding on shuffled labels, where we should get chance decoding performance. And then we can see where our real result is relative to chance, and get p values and things like that. It's pretty simple. How does this stack up against other methods that people commonly use? So here's our decoding result. Here's another method. Here I'm applying an ANOVA to each neuron individually and counting the number of neurons that are deemed to be selective. And so what you see is that there are basically no neurons in the baseline period. And then we have a huge number. OK, so it looks pretty much identical. We can compute mutual information on each neuron and then average that together over a whole bunch of neurons. Again, looks pretty similar. Or we can compute a selectivity index. Take the best stimulus, subtract from the worst stimulus, divide by the sum. Again, looks similar. 
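Here is a rough sketch of that sliding-bin decoding loop, reusing the classifier functions from the sketch above. Again this is my own illustration: the split construction is simplified (a real analysis would stratify the splits so every class appears in each one), and the bin width and step are whatever was used when binning the data.

```python
import numpy as np

def sliding_window_decoding(X, y, n_splits=3, seed=0):
    # X: (n_trials, n_neurons, n_bins) binned firing rates; y: label per trial.
    # At each time bin, run leave-one-split-out cross-validation and return
    # the mean decoding accuracy, i.e. accuracy as a function of time.
    n_trials, _, n_bins = X.shape
    rng = np.random.default_rng(seed)
    splits = np.array_split(rng.permutation(n_trials), n_splits)
    accuracy = np.zeros(n_bins)
    for t in range(n_bins):
        accs = []
        for i in range(n_splits):
            test_idx = splits[i]
            train_idx = np.concatenate([s for j, s in enumerate(splits) if j != i])
            classes, prototypes = train_max_corr(X[train_idx, :, t], y[train_idx])
            preds = predict_max_corr(classes, prototypes, X[test_idx, :, t])
            accs.append(np.mean(preds == y[test_idx]))
        accuracy[t] = np.mean(accs)
    return accuracy
```

The permutation test he mentions is this same loop run many times with the entries of y shuffled; the real accuracy is then compared against that null distribution to get a p value.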
So there's two takeaway messages here. First of all, why do decoding if all the other methods work just as well? And I'll show you in a bit, they don't always. And then the other takeaway message, though, is as a reassurance, it is giving you the same thing, right? So you know we're not completely crazy. It's a sensible thing to do in the most basic case. One other thing decoding can give you that these other methods can't is something called a confusion matrix. So a confusion matrix-- Rebecca kind of talked a little bit about related concepts-- basically what you have is you have the true classes here. So this is what was actually shown on each trial. And this is what your classifier predicted. So the diagonal elements mean correct predictions. There actually was a car shown and you predicted a car. But you can look at the off-diagonal elements and you can see what was commonly made as a mistake. And this can tell you, oh, these two stimuli are represented in a similar way in a brain region, where the mistakes are happening. So another kind of methods issue is, what is the effect of using different classifiers? If the method is highly dependent on the classifier you use, then that's not a good thing, because you're not learning anything about the data; you're really learning something about the method you used to extract it. But in general, for at least simple decoding questions, it's pretty robust to the choice of classifier you would use. So here is the maximum correlation coefficient classifier I told you about. Here's a support vector machine. You can see almost everything looks similar. And when there's something not working as well, it's generally a slight downward shift. So you get the same kind of estimation of how much information is in a brain region flowing as a function of time. But maybe your absolute accuracy is just a little bit lower if you're not using the optimal method. But really, it seems like we're assessing what is in the data and not so much about the algorithm. So that was decoding basic information in terms of content. But I think one of the most powerful things decoding can do is it can decode what I call abstract or invariant information, where you can get an assessment of whether that's present. So what does that mean? Well, basically you can think of something like the word hello. It has many different pronunciations in different languages. But if you speak these different languages, you can kind of translate that word into some sort of meaning that it's a greeting. And you know how to respond appropriately. So that's kind of a form of abstraction. It's going from very different sounds into some sort of abstract representation where I know how to respond appropriately by saying hello back in that language. Or another example of this kind of abstraction or invariance is invariance to the pose of a head. So for example, here is a bunch of pictures of Hillary Clinton. You can see her head is at very different angles. But we can still tell it's Hillary Clinton. So we have some sort of representation of Hillary that's abstracted from the exact pose of her head, and also abstracted from the color of her pantsuit. It's very highly abstract, right? So that's pretty powerful, to know how the brain is dropping information in order to build up these representations that are useful for behavior. And I think if we were, again, going to build an intelligent robotic system, we'd want to build it to have representations that become more abstract so it can perform correctly. 
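Before moving on to invariance, here is a small sketch of the confusion matrix computation described above (my own illustration; the class labels are arbitrary).

```python
import numpy as np

def confusion_matrix(y_true, y_pred, classes):
    # Rows: true class; columns: predicted class. Diagonal entries count
    # correct predictions; off-diagonal entries show which stimuli the
    # classifier confuses, i.e. which are represented similarly.
    index = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[index[t], index[p]] += 1
    return cm

print(confusion_matrix(["car", "car", "face"], ["car", "face", "face"],
                       classes=["car", "face"]))
```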
So let's show you the example of how we can assess abstract representations in neural data. What I'm going to look at is position invariance. So this is similar to a study that was done in 2005 by Hung and Kreiman in Science. And what I'm going to do here is I'm going to train the classifier with data at an upper location. So in this experiment, the stimuli were shown at three different locations. So on any given trial, one stimulus was shown at one location. And these three locations were used, so the 7 objects were all shown at the upper location, or at the middle, or at the lower. And here I'm training the classifier using just the trials when the stimuli were shown in the upper location. And then what we can do is we can then test the classifier on those trials where the stimuli were just shown at the lower location. And we can see, if we train at the upper location, does it generalize to the lower location. And if it does, it means there is a representation that's invariant to position. Does that make sense to everyone? So let's take a look at the results for training at the upper and testing at the lower. They're down here. So here again, I'm training at the upper location. And this is the results from testing at the lower. Here is chance. And you can see we're well above chance in the decoding. So it's generalizing from the upper location to the lower. We can also train at the upper and test at the same upper, or at the middle location. And what we find is this pattern of results. So we're getting best results when we train and test at exactly the same position. But we can see it does generalize to other positions as well. And so we can do the full permutations of things. So here we trained at the upper; we could also train at the middle, or train at the lower location. And here if we train at the middle, we get the best decoding performance when we decode at that same middle. But again, it's generalizing to the upper and lower locations, and the same for training at lower. Get the best performance testing lower, but it again generalizes. So if you want to just conclude this one mini study here, you know, information in IT is position invariant, but not 100%. So we can use this technique. I'll show you a few other examples of how it can be used in slightly more powerful ways, maybe, or to answer slightly more interesting questions. So one other question we might want to ask, which we actually did ask in this paper that just came out, was about the question of pose invariant identity information, so that same question about can a brain region respond to Hillary Clinton regardless of where she's looking. And so this is data recorded by Winrich Freiwald and Doris Tsao. Winrich probably already talked about this experiment. But what they did was they had the face system here, where they found these little patches through fMRI that respond more to faces than other stimuli. They went in and they recorded from these patches. And in this study that we're going to look at, they used these stimuli that had 25 different individuals shown at eight different head orientations. So this is Doris at eight different head orientations, but there were 24 other people who also were shown. And so what I'm going to try to do is decode between the 25 different people and see, can it generalize if I train at one orientation and test at a different one. 
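The train-at-one-condition, test-at-another logic is simple to write down. Here is a sketch using the classifier functions from earlier; it is my own illustration, and note that when the train and test conditions are the same it tests on the training trials, so the same-condition cells of the matrix would need held-out trials in a real analysis.

```python
import numpy as np

def train_test_generalization(X, y, condition, train_cond, test_cond):
    # Train on trials from one condition (e.g. stimuli at the upper location)
    # and test on trials from another (e.g. the lower location).
    # Above-chance accuracy implies a representation that is invariant
    # to the condition being varied.
    train_idx = condition == train_cond
    test_idx = condition == test_cond
    classes, prototypes = train_max_corr(X[train_idx], y[train_idx])
    preds = predict_max_corr(classes, prototypes, X[test_idx])
    return np.mean(preds == y[test_idx])

# Full permutations, e.g. a 3x3 accuracy matrix over locations:
# locations = np.array(["upper", "middle", "lower"])
# acc = [[train_test_generalization(X, y, cond, a, b)
#         for b in locations] for a in locations]
```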
And the three brain regions we're going to use are the most posterior region-- so in this case, the eyes are out here, this is like V1, and this is the ventral pathway-- so the most posterior region, where we can combine ML and MF. We compare that to AL and to AM. I'm going to see how much pose invariance there is. So again, like I said, let's start by training on the left profile, and then we can test on the left profile in different trials. Or we can test on a different set of images where the individuals were looking straight forward. So here are the results from the most posterior region, ML/MF. What we see is if we train on the left profile and test on the left profile here, we're getting results that are above chance, as indicated by the lighter blue trace. But if we train on the left profile and test on the straight trials, we're getting results that are at chance. So this patch here is not showing very much pose invariance. So let's take a look at the rest of the results. So this is ML/MF. If we look at AL, what we see is, again, there's a big advantage for training and testing at that same orientation. But now we're seeing generalization to the other orientations. You're also seeing this "U" pattern where you're actually generalizing better from one profile to the opposite profile, which was reported in some of their earlier papers. But yeah, here you're seeing, statistically, that it is above chance. Now it's not huge, but it's above what you'd expect by chance. And if we look at AM as well, we're seeing a higher degree of invariance, again, a slight advantage to the exact pose, but still pretty good. Again, this "U" a little bit, but yeah, we're generalizing to the back of the head. So what would that tell you, the fact that it's generalizing to the back of the head? It tells you it's probably representing something about hair. What I'm going to do next, rather than just training at the left profile, I'm going to take the results of training at each of the profiles and either testing at the same or testing at a different profile. And then I'm going to plot it as a function of time. So here are the results of training and testing at the same pose. So the non-invariant case. This is ML/MF. And this is AL and AM. So this is going from posterior to anterior. And what you see is there is kind of an increase in this pose-specific information. Here the increase is fairly small. But there is just generally more information as you're going down the pathway. But the big increase is really in this pose invariant information. When you train at one pose and test at another, that's these red traces here. And here you can see it's really accelerating a lot. It's really that these areas downstream are maybe pooling over the different poses to create a pose-invariant representation. So to carry on with this general concept of testing invariant representations or abstract representations, let me just give you one more example of that. Here was one of my earlier studies. This study was looking at categorization. It was a study done in Earl Miller's lab. David Freedman collected the data. And what they did was they trained a monkey to group a bunch of images together and call them cats. And then to group a number of images together and call them dogs. It wasn't clear that the images necessarily were more similar to each other within a category versus out of the category. But through this training, the monkeys could quite well group the images together in a delayed match to sample task. 
And so what I wanted to know was, is there information that is kind of about the animal's category that is abstracted away from the low-level visual features? OK, so in this learning process, did they build neural representations that are more similar to each other? So what I did here was I trained the classifier on two of the prototype images. And then I tested it on a left out prototype. And so if it's making correct predictions here, then it is generalizing to something that would only be available in the data due to the monkey's training. Modulo any low-level confounds. And so here is decoding of this abstract or invariant information from the two areas. And what you see, indeed, there seems to be this kind of grouping effect, where the category is represented both in IT and PFC in this abstract way. So the same method can be used to assess learning. So just to summarize the neural content part, decoding offers a way to clearly see what information is there and how it is flowing through a brain region as a function of time. We can assess basic information, and often it yields similar results to other methods. But we can also do things like assess abstract or invariant information, which is not really possible with other methods, as far as I can see how to use those other methods. So for neural coding, my motivation is the game poker. This one study I did. Basically, when I moved to Boston I learned how to play Texas Hold'em. It's a card game where, you know-- it's a variant of poker, I'm sure most of you know. I didn't know the rules before, but I learned the rules. And I could play the game pretty successfully in terms of at least applying those rules correctly, not necessarily in terms of winning money. But I knew what to do. And prior to that, I had known other games like Go Fish, or War, or whatever. And me learning how to play poker did not disrupt my ability to play Go Fish. I was still bad at that as well. So somehow this information that allowed me to play this game had to be added into my brain, if we believe brains cause behavior. And so in this study, we're kind of getting at that question: what changed about a brain to allow it to perform a new task? And so to do this in an experiment with monkeys, basically, they used a paradigm that had two different phases to it. The first phase, what they did, was they had a monkey just do a passive fixation task. So what the monkey did was, there would be a fixation dot that came up. Up would come a stimulus. There would be a delay. There would be a second stimulus. And there would be a second delay. And then there would be a reward. And the reward was given just for the monkey maintaining fixation. The monkey did not need to pay attention to what the stimuli were at all. And on some trials the stimuli were the same. On other trials, they were different. But the monkey did not need to care about that. So the monkey does this passive task. They record over 750 neurons from the prefrontal cortex. And then what they did was they trained the monkey to do a delayed match to sample task. And the delayed match to sample task ran very similarly. So it had a fixation. There was a first stimulus. There was a delay, a second stimulus, a second delay. So up to this point, the sequence of stimuli was exactly the same. But now after the second delay, up came a choice target, a choice image, and the monkey needed to make a saccade to the green stimulus if these two stimuli were matches. 
And needed to make a saccade to the blue stimulus if they were different. And so what we wanted to know was, when the monkey is performing this task, it needs to remember the stimuli and whether they were matched or not-- is there a change in the monkey's brain? And so the way we're going to get at this is, not surprisingly, doing a decoding approach. And what we do is we're going to use the same thing where we train the classifier at one point in time, test, and move on. And what we should find is that we're going to try to decode whether two stimuli matched or did not match. And so at the time when the second stimulus was shown, we should have some sort of information about whether it was a match or non-match, if any information is present. And we can see, was that information there before, when the monkey was just passively fixating, or does that information come on only after training. So here is a schematic of the results for decoding. It's a binary task, whether a trial was a match or a non-match. So chance is 50% if you were guessing. This light gray shaded region is the time when the first stimulus came on. This second region is the time the second stimulus came on. And here is where we're kind of going to ignore-- this was either the monkey was making a choice or got a juice reward. We just ignore that. So let's make this interactive. How many people thought there was-- or think there might be information about whether the two stimuli match or do not match prior to the monkey doing the task, so just in the passive fixation task? Two, three, four, five-- how many people think there was not? OK, I'd say it's about a 50/50 split. OK, so let's look at the passive fixation task. And what we find is that there really wasn't any information. So there's no blue bar down here. So as far as the decoding could tell, I cannot tell whether the two stimuli match or not match in the passive fixation. What about in the active delayed match to sample task-- how many people think there was? It would be a pretty boring talk if there wasn't. What area? We're talking about dorsolateral-- actually, both dorsolateral and ventrolateral prefrontal cortex. Yeah, indeed there was information there. In fact, we could decode nearly perfectly from that brain region. So way up here at the time when the second stimulus was shown. So clearly performing the task, or learning how to perform the task, influenced what information was present in the prefrontal cortex. I'm pretty convinced that this information is present and real. Now the question is, and why I'm using this as an example of coding, how did this information get added into the population? We believe it's there for real and probably contributing to behavior; it's a pretty big effect. All right, so here are just some single neuron results. What I've plotted here is a measure of how much of the variability of a neuron is predicted by whether a trial is match or non-match. And I've plotted each dot as a neuron. I've plotted each neuron at the time where it had this maximum value of being able to predict whether a trial is match or non-match. And so this is the passive case. And so this is kind of a null distribution, because we didn't see any information present about match or non-match in the passive case. When the monkey was performing the delayed match to sample task, what you see is that there's kind of a small number of neurons that become selective after the second stimulus is shown. So it seems like a few neurons are carrying a bunch of the information. 
Let's see if we can quantify this just maybe a little better using decoding. So what we're going to do is we're going to take the training set and we're going to do an ANOVA to find, let's say, the eight neurons that carry the most information out of the whole population. So of the 750 neurons, let's just find the eight that had the smallest p value in an ANOVA. And so we can find those neurons. And we can keep them. And we can delete all the other neurons. And then, now we've found those neurons, we'll also go to the test set and we'll delete all the other neurons there too. And now we'll try doing the whole decoding procedure on the smaller population. And by selecting the neurons on the training set only, we're not really biasing our results when we start doing the classification. So here are the results using all 750 neurons that I showed you before. And here are the results using just the eight best neurons. And what you can see is that the eight best neurons are doing almost as well as using all 750 neurons. Now I should say, there might be a different eight best at each point in time, because I'm shifting that bin around. But still, at any one point in time there are eight neurons that are really, really good. So clearly there is kind of this compact or small subset of neurons that carries the whole information of the population. Once you've done that, you might want to know the flip of that: how many redundant neurons are there that also carry that information? So here are the results, again, showing all 750 neurons as a comparison. And what I'm going to do now is I'm going to take those eight best neurons, find them in the training set, throw them out. I'm going to also throw out another 120 of the best neurons just to get rid of a lot of stuff. So I'm going to throw out the best 128. And then we'll look at the remaining neurons and see, is there redundant information in those neurons. It's still like 600 neurons or more. And so here are the results from that. What you see is that there is also redundant information in this kind of weaker tail. It's not quite as good as the eight best, or not as high decoding accuracy, but there is redundant information too. Just to summarize this part, what we see here is that there are a few neurons that really became highly, highly selective due to this process. So we see that there's a lot of information in this small, compact set. Here are the results from a related experiment. This was in a task where the monkey had to remember the spatial location of a stimulus rather than what an image was, like a square or circle. But anyway, small detail. Here's this big effect of match information and non-match information being decoded. So these are the decoding results that I showed you before. Here's an analysis where an ROC analysis was done on this data. So for each neuron, they calculated how well an individual neuron separates the match and the non-match trials. And again, pre and post training. And what you see is here, they did not see this big split that I saw with the decoding. And this was published. So the question is, why did they not see it. And the reason is because there were only a few neurons that were really highly selective. That was enough to drive the decoding, but it wasn't enough, if you averaged over all the neurons, to see this effect. 
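Here is a sketch of that select-on-the-training-set-only step, again my own illustration (scipy's f_oneway stands in for whatever ANOVA was actually used in the study).

```python
import numpy as np
from scipy.stats import f_oneway

def select_top_k_neurons(X_train, y_train, k):
    # Rank neurons by a one-way ANOVA computed on the training data only,
    # and return the indices of the k most selective ones. Because the
    # test set played no part in the selection, evaluating on it stays unbiased.
    classes = np.unique(y_train)
    pvals = np.array([
        f_oneway(*[X_train[y_train == c, n] for c in classes]).pvalue
        for n in range(X_train.shape[1])
    ])
    return np.argsort(pvals)[:k]

# best = select_top_k_neurons(X_train, y_train, k=8)
# ...then decode using X_train[:, best] and X_test[:, best].
# For the redundancy analysis, keep the weaker tail instead,
# e.g. the indices np.argsort(pvals)[128:].
```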
So essentially, there's kind of like two populations here. There's a huge population of neurons that did not pick up the match information, or picked it up only very weakly. And then there's a small set of neurons that are very selective. And so if you take an average of the nonselective population, it's just here. Let's say this is the pre-training population. If you take an average of post-training over all the neurons, the average would shift slightly to the right. But it might not be very detectable from the pre-training amount of information. But if you have weights on just the highly selective neurons, you see a huge effect. So it's really important that you don't average over all your neurons, but you treat the neurons as individuals, or maybe classes, because they're doing different things. So the next coding question I wanted to ask was, is information contained in what I call a dynamic population code? OK, so let me explain what that means. If we showed a stimulus, such as a kiwi, which I like showing, we saw that there might be a unique pattern for that kiwi. And that pattern is what enables me to discriminate between all the other stimuli and do the classification. But it might turn out that there's not just one pattern for that kiwi, but there's actually a sequence of patterns. So if we plotted the patterns in time, they would actually change. So it's a sequence of patterns that represents one thing. And this kind of thing has been shown a little bit. And actually now it's been shown a lot. But when I first did this in 2008, the one study I knew of that kind of showed this was this paper by Ofer Mazor and Gilles Laurent, where they did kind of a PCA analysis. And this is in, I think, the locust olfactory system. And they showed that there were these kind of trajectories in space where a particular odor was represented by maybe different neurons. And again, I had a paper in 2008 where I examined this. And there's a review paper by King and Dehaene in 2014 about this. And there are a lot of people looking at this now. So how can we get at this kind of thing in decoding? What you can do is you can train the classifier at one point in time, and test it at a point in time like we were doing before. But you can also test at other points in time. And so what happens is, if you train at a point in time that should have the information, and things are contained in a static code where there's just one pattern, then when you test at other points in time, you should do well, because you've captured that one informative pattern. However, if it's a changing pattern of neural activity, then when you train at one point in time, you won't do well at other points in time. Does that make sense? So here are the results-- if that will go away. Let me just orient you here. So this is the same experiment, you know, time of the first stimulus, time of the second stimulus, chance. This black trace is what we saw before, that I was always plotting in red. This is the standard decoding when I trained and tested at each point in time. This blue trace is where I trained here and tested at all other points in time. So if it's the case that there's one pattern coding the information, what you're going to find is that as soon as that information becomes present, it will fill out this whole curve. Conversely, if it's changing, what you might see is just localized information at one spot. So let's take a look at the movie, if that moves out of the way. OK, here is the moment of truth. Information is rising. And what you see in this second delay period is clearly we see this little peak moving along. 
So it's not that there's just one pattern that contains information at all points in time. But in fact, it's a sequence of patterns that each contain that information. So here are the results just plotted in a different format. This is what we call a temporal cross training plot, because I train at one point and test at a different point in time. So this is the time I'm testing the classifier. This is the time I'm training the classifier. This is the passive fixation stage, so there was no information in the population. And this is just how I often plot it. What you see is there's this big diagonal band. Here you see it's like widening a bit, so it might be hitting some sort of stationary point there. But you can see that clearly there are these dynamics happening. And we can go and we can look at individual neurons. So these are actually the three most selective neurons. They're not randomly chosen. Red is the firing rate to the non-match trials. Blue is the firing rate to the match trials. This neuron has a pretty wide window of selectivity. This other neuron here has a really small window. There's just this little blip where it's more selective, or has a higher firing rate to non-match compared to match. And it's these neurons that have these little kind of blips that are giving rise to those dynamics. 
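The temporal cross training plot is just the sliding-bin analysis with the train and test times decoupled. Here is a sketch, reusing the classifier functions from above (my own illustration, with held-out test trials assumed to be prepared already):

```python
import numpy as np

def temporal_cross_training(X_train, y_train, X_test, y_test):
    # Train at every time bin and test at every time bin, giving
    # acc[t_train, t_test]. A narrow bright diagonal with poor off-diagonal
    # generalization indicates a dynamic population code; a broad square
    # block indicates a static code.
    n_bins = X_train.shape[2]
    acc = np.zeros((n_bins, n_bins))
    for t1 in range(n_bins):
        classes, prototypes = train_max_corr(X_train[:, :, t1], y_train)
        for t2 in range(n_bins):
            preds = predict_max_corr(classes, prototypes, X_test[:, :, t2])
            acc[t1, t2] = np.mean(preds == y_test)
    return acc
```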
Here's something else we can ask about with this paradigm of asking coding questions. What we're going to do here is we're going to try a bunch of different classifiers. And here, you know, these are some questions that kind of came up. But can we tweak the classifier to understand a little bit more about the population code? So here is a fairly simple example. I compared three different classifiers. And the question I wanted to get at was, is information coded in the total activity of a population? Or is it coded more so in the relative activity of different neurons? So you know, in particular, in the face patches, we see that the firing rate of all neurons increases to faces. But if the firing rate increases to all faces, you've lost dynamic range and you can't really tell what's happening for individual faces. So what I wanted to know was, how much information is coded by this overall shift versus patterns. So what I did here was I used a Poisson Naive Bayes classifier, which takes into account both the overall magnitude and also the patterns. I used a classifier called minimum angle that took only the patterns into account. And I used a classifier called the total population activity that only took the average activity of the whole population into account. This classifier's pretty dumb, but in a certain sense, it's what fMRI is doing, just averaging all your neurons together. So it's a little bit of a proxy. There's a paper, also, by Elias Issa and Jim DiCarlo where they show that fMRI is actually fairly-- or somewhat strongly correlated with the average activity of a whole population. So let's see how these classifiers compare to each other to see where the information is being coded in the activity. Again, I'm going to use this study from Doris and Winrich where we're going to be looking at the pose-specific face information, just as an example. So this is decoding those 25 individuals, trained and tested at that exact same head pose. And so what we see is that when we used the Poisson Naive Bayes classifier that took the pattern and also the total activity into account, and when we used the classifier that took just the pattern into account, the minimum angle, we're getting similar results. So the overall activity was not really adding much. But if you just use the overall activity by itself, it was pretty poor. So this is, again, touching on something that Rebecca said: when you start averaging, you can lose a lot. And so you might be blind to a lot of what's going on if you're just using voxels. There are reasons to do invasive recordings. All right, and I think this might be my last point in terms of neural coding. But this is the question of the independent neuron code. So is there more information if you take into account the joint activity of all neurons simultaneously-- so if you had simultaneous recordings and took that into account-- versus the pseudo populations I'm doing, where you are treating each neuron as if it were statistically independent? And so this is a very, very simple analysis. Here I just did the decoding in an experiment where we had simultaneous recordings and compared it to using pseudo populations on that same data, using very simple classifiers. And so here are the results. What I found was that in this one case there was a little bit of extra information in the simultaneous recordings as compared to the pseudo populations. But you know, it wouldn't really change many of your conclusions about what's happening. It's like, you know, maybe a 5% increase or something. And this has been seen in a lot of the literature. Then there is the question of temporal precision, or what is sometimes called temporal coding. In some of the experiments I was using a 100 millisecond bin, sometimes I was using 500. What happens when you change the bin size? This is pretty clear, again, from a lot of studies that I've done: when you increase the bin size, generally the decoding accuracy goes up. What you lose is temporal precision, because now you're blurring over a much bigger area. So in terms of understanding what's going on, you have to find the right trade-off between having a very clear result with a larger bin versus caring about the time information and using a smaller bin. And I haven't seen that I need like one millisecond resolution, or a very complicated classifier that's taking every single spike time into account, to help me. But again, I haven't explored this as fully as I could. So it would be interesting for someone to use a method [INAUDIBLE] that people really love to claim that things are coded in patterns in time. You know, if you want to, go for it. Show me it. I've got some data available. Build a classifier that does that and we can compare it. But I haven't seen it yet. So a summary of the neural coding. Decoding allows you to examine many questions, such as: is there a compact code, so are there just a few neurons that have all the information? Is there a dynamic code, so is the pattern of activity that's coding information changing in time? Are neurons independent, or is there more information coded in their joint activity? And what is the temporal precision? And this is, again, not everything; there are many other questions you could ask. Any other questions about the neural coding? 
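Since the bin size question comes up throughout, here is a small sketch of the binning step itself (my own illustration): it turns a millisecond-resolution raster into the sliding-bin firing rates that everything above was computed on.

```python
import numpy as np

def bin_spikes(raster, bin_width, step):
    # raster: (n_trials, n_ms) array of 0s and 1s, one entry per millisecond.
    # Returns (n_trials, n_bins) firing rates in spikes per second.
    # Wider bins usually raise decoding accuracy but blur the timing
    # of when information appears.
    n_trials, n_ms = raster.shape
    starts = np.arange(0, n_ms - bin_width + 1, step)
    rates = np.stack(
        [raster[:, s:s + bin_width].sum(axis=1) / (bin_width / 1000.0)
         for s in starts],
        axis=1,
    )
    return rates

# e.g. 100 ms bins sliding every 10 ms: bin_spikes(raster, 100, 10)
```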
Just a few other things to mention. So you know, I was talking all about, basically, spiking data. But you can also do decoding from MEG data. So there was a great study by Leyla where she tried to decode from MEG signals. Here's just one example from that paper where she was trying to decode which letter of the alphabet, or at least 25 of the 26 letters, was shown to a subject, a human subject in an MEG scanner. You can see it's very nice-- you know, people are not psychic either. And then at the time slightly after the stimulus is shown, you can decode quite well. And things are above chance. And then she went on to examine position invariance in different parts of the brain, and the timing of that. So you can check out that paper as well. And as Rebecca mentioned, this kind of approach has really taken off in fMRI. Here are three different toolboxes you could use if you're doing fMRI. So I wrote a toolbox I will talk about in a minute to do neural decoding, and I recommend it for that. But if you're going to do fMRI decoding, you probably are better off using one of these toolboxes, because they have certain things that are fMRI specific, such as mapping back to voxels, that my toolbox doesn't have. Although you could, in principle, throw fMRI data into my toolbox as well. And then all these studies I've mentioned so far have had a kind of structure where every trial is exactly the same length, as Tyler pointed out. And if you wanted to do something where it wasn't structured that well, such as decoding from a rat running around a maze, where it wasn't always doing things in the same amount of time, there's a toolbox that came out of Emery Brown's lab that should hopefully enable you to do some of those kinds of analyses. All right, let me just briefly talk about some limitations to decoding, just like Rebecca did with the downer at the end. So some limitations are: this is a hypothesis-based method. So we have specific questions in mind that we want to test. And then we can assess whether those questions are answered or not, to a certain degree. So that's kind of a good thing, but it's also a downside. Like if we didn't think about the right question, then we're not going to see it. So there could be a lot happening in our neural activity that we just didn't think to ask about. And so unsupervised learning methods might get at some of that. And you could see how much of the total variability in a population the variable you're interested in is accounting for. Also, I hinted at this throughout the talk: just because information is present doesn't mean it's used. The back of the head stuff might be an example of that, or not, I don't know. But you just have to be careful in interpreting the results-- don't conclude that because the information is there, this is the brain region doing x. A lot of stuff can kind of sneak in. Timing information can also be really interesting. I've been exploring this over the summer. So if you can know the relative timing, when information is in one brain region versus another, it can tell you a lot about the flow of information and the computations that brain regions might be doing. So I think that's another very promising area to explore. Also, decoding kind of focuses on the computational level or algorithmic level, or really neural representations, if you thought about Marr's three levels. It doesn't talk about this kind of implementational mechanistic level. So [INAUDIBLE] it's not one thing it can do. 
Now if you have the flow of information going through an area and you understand that well, and what's being represented, I think you might be able to back out some of these mechanisms or processes of how that can be built up. But in and of itself, decoding doesn't give you that. Also, decoding methods can be computationally intensive. An analysis can take up to an hour. If you do something really complicated, it can take you a week to run something very elaborate. You know, sometimes it can be quick and you can do it in a few minutes, but it's certainly a lot slower than doing something like an activity index, where you're done in two seconds and then you have the wrong answer right away. Let me just spend like five more minutes talking about this toolbox, and then you can all go work on your projects and do what you want to do. So this is a toolbox I made called the neural decoding toolbox. There's a paper about it in Frontiers in Neuroinformatics in 2013. And the whole point of it was to try to make it easy for people to do these analyses, because [INAUDIBLE]. And so basically, here is like six lines of code that, if you ran it, would do one of those analyses for you. And not only is it six lines of code, but it's almost literally these exact same six lines of code. The only thing you'd, like, replace would be your data rather than this data file. And so what you can do, the whole idea behind it is it's a kind of open science idea. You know, I want more transparency, so I'm sharing my code. If you use my code, ultimately, if you could share your data, that would be great, because I think I wouldn't have been able to develop any of this stuff if people hadn't shared data with me. I think we'll make a lot more progress in science if we're open and share. There you go, I'm a hippy. And here's the website for the toolbox, www.readout.info. Let me just talk briefly a little bit more about the toolbox. The way it was designed is around four abstract classes. So these are kind of major pieces or objects that you can kind of swap in and out. They're like components that allow you to do different things. So for example, one of the components is a data source. This creates the training and test sets of data. You can separate that out in different ways; there's just a standard one, but you can swap it out to do that invariance or abstract analysis. Or you can do things like, I guess, change the different binning schemes within that piece of code. So that's one component you can swap in and out. Another one is these preprocessors. What they do is they apply pre-processing to your training data, and then use those parameters that were learned on the training set to apply the same transformation to the test set as well. So for example, when I was selecting the best neurons, I used a preprocessor that found good neurons in the training set, just used those, and then also eliminated the other neurons in the test set. And so there are different, again, components you can swap in and out with that. An obvious component you can swap in and out: classifiers. You could throw in a classifier that takes correlations into account or doesn't. Or do whatever you want here. You know, use some highly nonlinear or somewhat nonlinear thing and see, is the brain doing it that way. And there's this final piece called the cross validator. It basically runs the whole cross validation loop. It pulls data from the data source, creating training and test sets. It applies the feature preprocessors. It trains the classifier and reports the results. 
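The actual toolbox is written in MATLAB; just to make the four-component design concrete, here is a rough Python rendering of the same idea. The class and method names below are my own, not the toolbox's real API.

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    # Creates matched training and test sets. Swapping this component
    # changes how the data are split, e.g. train on one location or pose
    # and test on another for an invariance analysis.
    @abstractmethod
    def get_splits(self): ...

class FeaturePreprocessor(ABC):
    # Fits its parameters on the training set only (e.g. pick the k most
    # selective neurons), then applies the same transformation to the test set.
    @abstractmethod
    def fit(self, X_train, y_train): ...
    @abstractmethod
    def transform(self, X): ...

class Classifier(ABC):
    @abstractmethod
    def train(self, X_train, y_train): ...
    @abstractmethod
    def predict(self, X_test): ...

class CrossValidator:
    # Runs the whole loop: pull splits from the data source, fit and apply
    # the preprocessors, train the classifier, and score its predictions.
    def __init__(self, data_source, preprocessors, classifier):
        self.data_source = data_source
        self.preprocessors = preprocessors
        self.classifier = classifier
```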
Generally, I've only written one of these cross validators, and it's pretty long and does a lot of different things, like gives you different types of results. So not just whether there is information, but mutual information and all these other things. But again, if you wanted to, you could expand on that and do the cross-validation in different ways. If you wanted to get started on your own data, you just have to put your data in a fairly simple format. It's a format I call raster format. It's just in a raster. So you just have trials going this way, time going this way. And if it was spikes, it would just be the ones and zeros that happen on the different trials. If this was MEG data, you'd have your actual continuous MEG values in there. Again, trials and time. Or fMRI or whatever. fMRI might just be one vector if you didn't have any time. And so again, this is just blown up. This was trials. This is time. You can have the little ones where a spike occurred. And then, corresponding to each trial, you need to give the labels about what happened. So you'd have just something called raster labels. It's a structure. And you'd say, OK, on the first trial I showed a flower. Second trial I showed a face. Third trial I showed a couch. And these could be numbers or whatever you wanted. But it's just indicating different things are happening in different trials. And you can also have multiple ones of these. So if I want to decode position, I also have upper, middle, lower. And so you can use the same data and decode different types of things from that data set. And then there's this final information that's kind of optional. It's just raster site info. So for each site you could have just meta information: this is the recording I made on January 14, and it was recorded from IT. So you just define these three things, and then the toolbox is plug and play. So with some experience you should be able to do that. So that's it. I want to thank the Center for Brains, Minds and Machines for funding this work. And all my collaborators who collected the data or who worked with me to analyze it. And there is the URL for the toolbox if you want to download it. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_83_Tony_Prescott_Control_Architecture_in_Mammals_and_Robots.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TONY PRESCOTT: Great pleasure to be in Woods Hole, my first visit here; had a wonderful swim in the sea yesterday. Sheffield Robotics is across both universities in Sheffield. And it was founded in 2011, but we've really been doing robotics since the 1980s. And I joined them in 1989. And we do pretty much every different kind of robotics, but I'm going to talk about biomimetics. I also do what you might call cognitive robotics. And I collaborate with Giorgio Metta on that, but since he's speaking too, I'm going to focus on the more animal-like robots that we've been developing. So this is one of our latest projects. This is a small, autonomous, mobile robot, which is a commercial platform, which will be available in the UK, I think, next January. And there will hopefully be a developer program for people that are interested in helping to develop the intelligence for this robot. So this is at a conference we had in Barcelona last month. And you can see that it's a robot pet. And we've been focusing on giving it some effective communication abilities, responding particularly to touch. You can see that it's orienting to stimuli. It has stereo vision. It has stereo sound. And it can orient to visual stimuli and also to auditory stimuli. Here we're showing it a picture of itself on this magazine cover. And the goal is to demonstrate that we can implement, in a commercial robot that will cost less than $1,000, considerably less, some of the principles of how the brain generates behavior. So this robot is called MiRo. It is based on some high-level principles abstracted from what we know about how mammalian brains control behavior. So it's a relatively complex robot: 13 degrees of freedom, 3 ARM processors corresponding to different levels, if you like, of the neuroaxis, the central nervous system. So I want to start out with some general ideas, and questions, and issues about how we might learn from the biology and the brain how to develop robots, and how we might use robots to help us understand the brain. And a central question I think that robotics can help us answer, and I think that's a core question in neuroscience, is what you might call the problem of behavioral integration. And the neuroscientist Ernest Barrington summarized this quite nicely. He said, "the phenomenon so characteristic of living organisms, so very difficult to analyze, the fact that they behave as wholes rather than as the sum of their constituent parts. Their behavior shows integration, a process unifying the actions of an organism into patterns that involve the whole individual." And this picture of a squirrel over here I think nicely demonstrates this. So of course this squirrel is leaping from one branch to the next. And you can see that every part of his body is coordinated and organized for this action: you can see his eyes looking straight ahead, his whiskers-- and I'll talk a lot more about whiskers-- pointing forward. His arms and his feet are out, ready to catch his fall. Even his tail is angled and positioned to help him fly through the air. 
So it's the coordination of the different parts of the body, and the multiple degrees of freedom of the body, and the sensory systems in space and in time, which, I think, is a critical problem for biological control and also a problem for robots, which we're still struggling to address with our robots. And I want to give you two very general principles for thinking about how brains solve this problem. So many of you will have come across Rodney Brooks, yes, from MIT. And he's famous for, in robotics, the notion of layered control, which he called subsumption. And I think the ideas that he brought into robotics really changed how people thought about robots in the 1980s. But if we go back to the 1880s, John Hughlings Jackson, who was a British neurologist, proposed a similar idea, but with respect to the nervous system. So in the 1880s, people thought about the higher areas of the brain, particularly the cortex, as being about higher thought, and reasoning, and language, and not so much about perception and action. And Hughlings Jackson, I think, was revolutionary in his day in saying that the highest motor centers represent over again in more complex combinations what the middle motor centers represent. In other words, he was saying that the whole of the brain, all the way up, is about coordinating perception with action. And he described it in many senses as a layered system. He talked about how you could take off the top layers of the system and the competences of the lower layers remained intact, which, of course, is very much the idea of Rodney Brooks' subsumption architecture. And some old studies-- transection studies in animals like cats and rats-- demonstrate this nicely. So if you take a cat or a rat, particularly a rat, and you remove, in fact, all of the cerebral cortex, so if you make a slice here that takes away cortex, you get an animal that actually, to all appearances, looks fairly normal. It does motivated behavioral sequences, so it will get hungry. And it will look for food. And it will eat. If there's an appropriate mate nearby, it will look to have a sexual relationship. And it will fail in some challenges such as learning, and perhaps also in dexterous control, but in many ways, it will look normal. If you slice below the other parts of the forebrain, the thalamus and the hypothalamus-- if you remove these areas-- then you remove this capacity for motivated behavior, but you leave intact midbrain systems that can still generate individual actions. And if you remove parts of the midbrain, you leave intact, still, component movements, so for example, animals that can run on a treadmill. So we are, with our MiRo robot, loosely recapitulating this architecture, so we have three processors. And the idea with this robot, it's actually a partwork, so you build it up. You get a magazine every week with a new part for the robot. And you build, essentially, a spinal robot first. And then you add a midbrain processor. Eventually you add a cortical processor, which brings with it some learning capacities, some pattern recognition, some navigation, so that's one principle, layered architecture, which seems to work both for biology, and perhaps in robotics. So second principle, and this goes back to another famous neuroscientist, this time Wilder Penfield, who is known to many people for his discovery of somatotopic maps in the brain: if you stimulate the brain in the sensory area, then you find that people have the experience of tickling on parts of the body.
And adjacent parts of cortex correspond to adjacent parts of the body. And he found a similar homunculus in the motor area: you stimulate, and you get movement in adjacent parts of the body. And he also proposed another idea. And that was sort of a centrencephalic dimension to nervous system organization. And that's to say that, down the midline of the central nervous system, there are a group of structures that don't seem to be specifically involved in specific aspects of perception and action, but seem to be about integration. And amongst them, particularly the basal ganglia, he noted as being important, and parts of the reticular formation. So Michael Frank was here talking to you about basal ganglia, so I'm not going to say much more about this, but this is just to point out, in a slice of the rat brain, these are the elements of the basal ganglia, particularly, the striatum is the input system. In the rat, the substantia nigra and part of the globus pallidus are the output systems. And then you have, in the rat brain, and also in our brains, you have massive convergence onto the input area, the caudate putamen as it is also called, from the cortex and from the brain stem. So you have signals coming in from all over the brain to the striatum, which could be interpreted as a request for action. And then you have inhibition coming out from the output structures of the basal ganglia, here I'm showing it for the substantia nigra, going back to all of those areas of the brain. And this inhibition is tonic. And in order to have a functional reaction, you have to remove the inhibition. So this is a system that can give you some of that behavioral integration that you need, the ability to ensure that you do one thing at a time. You do that quickly. You do that consistently. You dedicate all of your resources to the action that you want to do. Here's a little video of a rat. And I'm showing you some integrated behavior over time in an intact rat. So this is a rat exploring in a large container. And rats generally don't like open spaces, so when you first put the animal into this space, it will tend to stay near the walls. And it prefers this corner, which is dark. And of course, it's hungry too, so there's a dish of food here. And eventually, it gets up the courage to go out, collect a piece of food, and it will take it back into this dark corner to consume it. And one of the first models that we built was a model of basal ganglia operating as this, kind of, action-selection device. And with a simple Khepera robot, this is a robot that just really uses infrared sensors and a gripper arm. And we are using a model of the basal ganglia to control decision making about which actions to do at which time, and to generate sequencing of those actions. So as the need to stay close to walls diminishes, the robot, like the rat, goes and collects these cylinders. And it carries them back into the corners and deposits them. So a model of the central brain structures-- and I'm happy to discuss in more detail how that model operates, and its similarities to the model that Michael described to you, but that's controlling the behavior switching, if you like, in this robot.
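As a rough caricature of that selection-by-disinhibition scheme (an assumption-laden toy, not the published basal ganglia model), the mechanics look something like this:

```python
import numpy as np

# Toy version of selection by disinhibition: every action channel is under
# tonic inhibition, and only the channel with the strongest salience
# "request" has its inhibition removed, so one thing happens at a time.

def select_action(salience, tonic_inhibition=1.0):
    salience = np.asarray(salience, dtype=float)
    gate = np.full_like(salience, tonic_inhibition)  # inhibitory output
    winner = int(np.argmax(salience))
    gate[winner] = 0.0   # disinhibit only the selected channel
    return winner, gate

# e.g. requests from "stay near walls", "collect cylinder", "return to corner"
winner, gate = select_action([0.2, 0.9, 0.4])
print(winner, gate)      # -> 1 [1. 0. 1.]
```

A real model adds dynamics and graded, time-extended selection; the point here is just the one-winner, everything-else-inhibited character that lets the robot dedicate all its resources to one action.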
So I spent some time working on this question of how central systems in the brain, particularly the basal ganglia, are involved in the integration of behavior, but I became frustrated with not understanding what were the signals coming in to the central brain structures and not understanding what effects those brain structures were having on the motor system of the animal. So I thought that what we needed to do was look at complete sensory motor loops. We needed to look at sensing and action, and how those interact. And in our Psychology Department, we have a neuroscience group that works mainly with rats, so it was natural for us to look at the rat. And in the rat, we know that one of the key perception systems is the vibrissal system. So here you see, this is actually a pet rat, wandering around on my windowsill in my house in Sheffield. And the thing to notice is the whiskers here. And the whiskers are moving back and forth pretty much all the time that the rat is exploring. And we understand from nearly 100 years now of research that this system is very important for the rat to understand the environment. In fact, if it's completely dark, the rat would move around in much the same way. And it would be able to understand the world through touch pretty well, even in the absence of vision. So this is the same video, but now slowed down 10 times, just to show you these movements of the whiskers, and how precise they are, because the rat isn't just, in a stereotypical way, banging its whiskers against the floor. It is lightly touching the whiskers in places it will get useful information. And you can see, when he puts his head over the window sill here, the whiskers push forward, as if he knows that he's going to have to reach further forward if he's going to find anything. Here you see him exploring this wooden cup. And you can see light touches by the whiskers. And you can also see that the movement of the whiskers is being modulated by the shape of the surface that he's investigating, so there's some fairly subtle control happening here. And I think it's not too much to say that the way in which the rat controls its whiskers has almost the same richness as the way that we control our fingertips. So I'm interested in how this plays out in terms of a layered architecture story. And of course, many people study this system. The beauty of it is, if you're a neuroscientist, that you can look in the cortex. This is rat cortex here. And a huge area of rat cortex is dedicated to somatosensation, to touch, of which a large area is dedicated to whiskers. In fact, you zoom in, you can find this area called barrel cortex. And with the right kind of staining, you can find groups of cells which preferentially receive signals from individual whiskers, so for example, you can move one whisker here, and you can know exactly where to record in the barrel cortex to get a very strong response from that whisker. And this means that barrel cortex and the whisker system have become one of the preferred preparations in which to study the cortical microcircuit altogether, so people study this system to really understand how cortex operates. Now, if we think about this system as a pathway from the whiskers up to barrel cortex, we're really only capturing one element of what's going on in the vibrissal system. And that's this pathway here from the vibrissae that, via the trigeminal complex, goes by the thalamus up to sensory cortex.
And this is probably where 9 out of 10 papers on this system are published, but actually, this system is only part of a looped architecture, or we might say, a layered architecture. And at each level of this layered architecture, there's a completed loop, so that sensing can affect action, so sensing on the vibrissae can affect the movement and control of the vibrissae. So there's a loop via the brainstem here, so that, directly from the trigeminal complex, signals come back to the facial nucleus, which is where the motor neurons are that move the whiskers. There's a loop via the midbrain here so that sensory signals ascend very quickly to the midbrain superior colliculus. And they come back to affect how the whiskers move. And then, of course, there's the loop via the cortex too, so there's essentially those three loops, at least, that we need to think about. So since 2003, we've been building different whiskered robots, the aim being to instantiate our theories about how whisker control works in this layered architecture and demonstrate it in a robot platform. And often, actually, building a robot platform causes us to ask new questions that might not be obvious to you just by doing biological experiments or even by doing simulation. Before I show you some robots, let me just quickly show a little bit more about the rat and its whiskers. So we began thinking we could just build robots, but we quickly realized that we didn't know enough about how rats use their whiskers to do that. And that's partly because the experiments that had been done haven't been done with the purpose of building a whiskered robot. So when you try and build a whiskered robot, you have to ask questions like, how do the whiskers move? And when you look at a video like this filmed from above, this is with a high speed camera, you think, well, the whiskers are sweeping backward and forward, like this. But in fact, if you put a mirror just tilted down here and you see what happens, then it turns out to be a little bit different, so you see that the whiskers are going up and down as much as they are going backwards and forwards. So the whiskers are actually sweeping like this, and they're making a series of touches on the surface. And if you watch, you can see that the whiskers are sort of playing down on the surface, sort of in a sequence, quite quickly, so that information might be giving you details about the shape of the surfaces in your world. So we mainly look at the long whiskers. This is a rat that's running up an alley. And we put an unexpected object in the alley, which could be this aluminum rectangle, or it could be this plastic step. And what you see is, if the animal encounters something unexpected with its long whiskers, then it turns very quickly and investigates it. So the long whiskers are like the periphery of a sensory system that has a fovea. And the fovea at the center of that system is a set of short whiskers around the mouth, also the lips and the nose, so that you can sniff and smell the surface that you're investigating. So we can zoom in and see that sensory fovea, here you see these short whiskers that are being used to investigate this plastic puck, and the longer whiskers investigating around the outside. So we have recapitulated elements of this layered architecture in our robot. And this is-- these are the loops. And about five years ago, we built a system with a brainstem loop and, really, a midbrain loop.
And this is our robot Scratchbot, which is the first of the whiskered robots that we felt really was capturing whisking in the way that the rat does it. It's running at about 4 hertz, whereas the real rat is whisking from 8 to 12 hertz, but it's scaled up to be about four times rat size. And what it's doing here is, it's using the whiskers to orient to stimuli, so this is Martin Pearson from Bristol Robotics Lab. He's putting an object in the whisker field. And the robot is turning and orienting to the touch with the object. It's putting its short microvibrissae, in fact, against the object and exploring it. Now, to detect a stimulus on the whiskers and then turn is not a fantastically hard task. The main challenge is to work out where the whisker was in its sweep when you made contact with an object, because the whisker is sweeping back and forth, so if you want to know the location of the point of contact, you need to integrate the position of the whisker in its sweep, what you might call a theta signal, and the presence of the contact on the whiskers. And the coincidence of those two is detected in the brain. And we know that there are cells in the barrel cortex that respond to that coincidence. So we have in our robot a model of the superior colliculus, which is the location in the brain which we think is involved in orienting. And in our model of the colliculus, we have a head-centered map, which looks for this coincidence between a cell encoding the position of the whisker in its sweep and a cell encoding a contact and makes a turn to orient and explore that position. And then if we want to actually create behavior, which is integrated over time, so if the robot was just to orient every time it touched something, that wouldn't be very animal-like, particularly, you don't want to orient every time you touch the ground, so you just want to orient when you touch important stimuli. So we put into our model the basal ganglia so that we can decide whether the contact we've just made is something we want to investigate or something that doesn't interest us so much. So we have a system now with a midbrain that does orienting, a basal ganglia that makes decisions about sequencing. And those two things together give us reasonably lifelike behavior in our robot, Scratchbot. And quite a lot of what we have running now on the new robot, MiRo, is this system. It's for orienting and exploring. And we can use it-- we use it here for tactile orienting, but of course, the same system can underlie orienting to sounds, if you can localize those in space, and orienting to visual stimuli too. So it turns out that this isn't a complete solution to the problem of orienting for our whiskered robot. And that's because sometimes our robot would stop as it was moving around and just move and investigate a point in space where nothing was happening. We call that a ghost orient. And the problem is that the whiskers, because they're moving back and forth, they sometimes generate signals in the strain gauges that are detecting bending of the whisker. And sometimes those signals, just as a consequence of the movement and the mass of the whisker, are strong enough to be above threshold to generate an orient, so you get, if you like, these ghost-orienting movements towards stimuli that don't exist. And we know that rats don't make these kinds of ghost orients, so something else must be going on in the brain.
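Stepping back for a moment to the orienting computation just described, the bookkeeping of that theta-contact coincidence is simple to sketch (the whisking frequency, amplitude, and contact time here are invented):

```python
import numpy as np

# The whisker sweeps back and forth (the "theta" signal). A contact is only
# localizable if you know where the whisker was in its sweep at the moment
# the contact signal arrived -- the coincidence the collicular map detects.

dt = 0.001                                  # 1 ms time steps
t = np.arange(0, 1.0, dt)
theta = 40 * np.sin(2 * np.pi * 8 * t)      # whisker angle, 8 Hz whisking

contact = np.zeros_like(t)                  # binary contact signal
contact[310] = 1                            # a touch at t = 0.31 s

idx = np.flatnonzero(contact)
print(theta[idx])                           # angle(s) to orient toward
```

In the robot this coincidence is detected by map cells rather than array indexing, but the information being combined is the same.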
And one part of the brain that might be helping here is a region called the cerebellum-- I'm not sure if you've covered that in the summer school, but the cerebellum is this large structure at the back of the brain. One of its key functions seems to be to make predictions about sensory signals, and particularly, to be able to predict sensory signals that have been caused by your own movement. And there's a lovely experiment that's been done by Blakemore et al, where they put people into a scanner. And they investigated how they responded to tickling stimuli. So of course, if somebody tickles you, that can be quite amusing, but unfortunately, if you try to tickle yourself, it's really uninteresting. It doesn't work as a stimulus. And it's worth thinking about why it is that self-tickling is so unrewarding. And one of the reasons is that it's just not surprising. You know what's going to happen when you tickle yourself, whereas if somebody else is doing it, it's unexpected and surprising. So why is self-tickling unexpected? Why is it not surprising? It must be because the brain expects and anticipates the signal that it's going to get. And what Blakemore et al did was to show that the cerebellum really lights up when you try to tickle yourself, because it's estimating and predicting the sensory signal, and using that to cancel out, if you like, the signal that's coming from your skin. The same thing is happening in electric fish, which generate this broad electric field which they use for catching prey. And they need to be able to tell the difference between a distortion to the electric field caused by a prey animal and a distortion caused by their own movement, by swimming. And they do that by having a very large cerebellum. So we put a model of the cerebellum in our whiskered robot. And the cerebellum predicts the noise you might get due to the movement of the whiskers. And it learns online to accurately predict what noise signals you might get, and to cancel them out, so you get a much better signal-to-noise ratio in the robot. So we've dealt with whisking. And we've dealt with orienting. But as you saw with that rat on the windowsill, the whisker movements are really precise. And they're really controlled. And the rat seems to really care about how it's moving its whiskers and how it's touching. We call this active sensing. And if you look at these high-speed videos, you can see, for instance, this rat when it's exploring this Perspex block. The whiskers aren't moving in a stereotyped, symmetric way. You can see that here, the whiskers on the right-hand side are really reaching round to try and reach the other side of the block. If you watch this rat here, you see that too. You've got asymmetry. And you'll see that, even as the rat comes up to the cylinder here, the whiskers at the front are pushing forward while the ones at the back are hardly moving at all, so there's some ability to control even the whiskers on one side of the head. And when you move your fingers, of course, there's some coupling between your finger movements. You can't move them entirely independently. But each of these whiskers has its own muscle, so there's a degree of independence in how the whiskers can move. And we find that when we record over long intervals. So this was a study-- [ELECTRONIC NOISE] --in which we recorded the whisking muscles using EMG. And that's the sound that you can hear as the rat explored. And we tracked the rat as he was moving around.
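As a quick aside on that predict-and-cancel idea: a simple online least-mean-squares predictor can stand in for the cerebellar model (the gains, learning rate, and threshold below are all invented):

```python
import numpy as np

# Learn online to predict the strain signal caused by your own whisking,
# subtract it, and orient only to the residual "surprise". This is what
# suppresses the ghost orients described above.

lr, w = 0.01, 0.0                                # learning rate, learned gain

for k in range(5000):
    motor = np.sin(2 * np.pi * 8 * k * 0.001)    # whisking command
    self_noise = 0.7 * motor                     # strain from own movement
    external = 1.0 if k == 4200 else 0.0         # one real contact
    sensed = self_noise + external

    predicted = w * motor                        # forward-model prediction
    residual = sensed - predicted
    w += lr * residual * motor                   # LMS update

    if k > 1000 and abs(residual) > 0.5:         # ignore early learning
        print(f"orient at step {k}, residual {residual:.2f}")
```

After the predictor converges, only the genuine contact at step 4200 crosses the orienting threshold.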
And we showed that, whenever he came close to the edge of the box here, the whiskers would become asymmetric. And the whiskers that were furthest away from the wall would push round to try and touch the sides of the box. The whiskers that were close to the wall would barely move at all. So we want to put that kind of control into our robot. So I briefly want to come back to this question of how we decompose control. So in our original robot that was controlled by the basal ganglia, collecting cans, we decomposed behavior into the different elements of behavior-- looking for a can, picking it up, carrying it to the wall, these sorts of things. And if we look in the ethology literature, we find that people have talked about these kinds of decompositions. There's a very famous paper by Baerends about the herring gull. And with the herring gull, there's this famous experiment where the egg rolls out of the nest. And the bird will retrieve the egg with its bill and push it back into the nest. And it will do this same action really reliably and repeatedly. And it can do it with eggs of various sizes. It might even do it for a Coke can. And if you take the egg away during the movement, it will still complete the movement. And ethologists have called this a fixed action pattern, so it may be that behavior is decomposed into action patterns. And that's one of the ways, for instance, in which Rodney Brooks wants to decompose robot behavior. We decompose it into different things we might want the robot to do. And we can do that with our whiskered robots. Here's another one with its behavior decomposed into different kinds of, if you like, orienting behaviors and fixed action patterns. Another way to decompose behavior is to think about where your attention is, so where you put your attention might decide what you're going to do next. And for an animal that doesn't have arms-- and of course most animals, except humans and some primates, don't usually use their forelimbs for much else other than locomotion-- what they are primarily positioning is their head and their face. And their main effector, then, is their mouth. So where you position your attention could determine what you're going to do next. So another way of decomposing control is to solve the attention problem first. And then once you solve that, the problem of what you're going to do is simplified. So in this robot, we're controlling it by deciding where its attention should go. And then the rest of the body kind of follows. When humans use spatial attention, of course, we explore that in the visual modality. And we look at the saccadic eye movements that people make. So in the famous experiment, Albert Yarbus had people look at this picture and tracked where their eyes would look. And of course, we look at the socially-significant elements of the picture, people's faces and so on, not just arbitrary points of light, or corners, and so on. And we can actually calculate a saliency map for space and say what are the important parts of space for exploring and attending to. And we've taken that idea and transferred it into our model for understanding the rat. And we thought about tactile saliency maps, so can we, with a sense of touch, think about areas of the world which are important to explore and understand through touch? And can we use that to control the movement of our robot, or in this case, our simulation?
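One rough way to picture such a tactile saliency map computationally (the grid, decay rate, and weights here are invented, not the published model):

```python
import numpy as np

# A saliency map over a small patch of space: contacts with vertical
# surfaces boost saliency at their location, the map decays over time,
# and attention (and hence the head and whiskers) goes to the peak.

grid = np.zeros((20, 20))
decay = 0.95

def update(contacts_xy, is_vertical):
    global grid
    grid *= decay                            # old saliency fades
    for (x, y), vert in zip(contacts_xy, is_vertical):
        grid[x, y] += 1.0 if vert else 0.1   # walls are the salient things

def attention_target():
    return np.unravel_index(np.argmax(grid), grid.shape)

update([(3, 7), (10, 10)], [True, False])
print(attention_target())                    # -> (3, 7): orient to the wall
```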
So here, we have a form of emergent wall following, which is a consequence of the rat's spatial attention being driven by contact with vertical objects, which we-- we program it so that the vertical surfaces are salient and interesting. And it has this salient zone. And it tries to put its whiskers into the salient zone. And then here is a robot instantiating this. This is Ben Mitchinson, who's programmed many of these robots. And so what we're doing now is following this biologically-inspired orienting system to explore shapes. And in this case, he put his own face in front of the robot. And you can see the robot making light touches against his face and investigating it, looking-- making a series of, if you like, exploratory touches, somewhat like saccades, somewhat like what you might imagine a blind person would do if they were investigating your face to try and recognize you. And Mitra Hartmann from Northwestern has shown that you can take signals off these kinds of whiskers and reconstruct a face, so it should be possible from this to build up from the touches, the sequence of touches, a lot of rich information about the object that's being investigated. How much time? I need to finish. OK, let me just skip through. So we've been doing-- working on the cortex. We have a number of models of that, which I'd like to show you, but I want to just finish, just to make contact with John's talk, is that we've been doing, in our robots, tactile simultaneous localization and mapping. So this is our whiskered robot. And we have various models for this, some of which are more hippocampal-like. This one, I think, was more of an engineered model. But you can see the robot just using touch on these artificial whiskers, building up a map of its environment. These two lines show its dead-reckoning position and its calculated position. And just using touch, we can build up a reasonably accurate map of the world that we're exploring. So Giorgio will talk about the iCub. And I just wanted to mention that, in the work we're doing with Giorgio, we are very much trying to understand human cognition. I wrote a short article for New Scientist on the possibility that robots might one day have selves. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_73_Nancy_Kanwisher_Human_Auditory_Cortex.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. NANCY KANWISHER: Auditory cortex is fun to study, because very few people do it. So you study vision, you have to read hundreds of papers before you get off the ground. You study audition, you can read three papers, then you get to play. It's great. So there's consensus about tonotopy, I mentioned this before. This is an inflated brain, oriented like this, top of the temporal lobe. High, low, high frequencies. OK. That's like retinotopy, but for primary auditory cortex. So this has been known forever. Many people have reported it in animals and humans. Oops, we'll skip the high and low sounds, right. So there are lots of claims about the organization of the rest of auditory cortex outside that. But basically, there's no consensus. You know, nobody knows how it's organized. So what we set out to do was to take a very different approach from everything I've talked about so far. We said, let's kind of try to figure out how high level auditory cortex is organized. Not by coming up with one fancy little hypothesis, and a beautifully designed pair of contrast conditions to test each little hypothesis. What we usually do. Let's just scan people listening to lots of stuff, and use some data driven method to kind of shake the data. And see what falls out. OK. To be really technical about it. OK. So when I say we, this is really Sam Norman-Haignere-- he did all of this. And as this project got more and more mathematically sophisticated, I got more and more taken over by Josh McDermott, who knows about audition. And knows a lot of fancy math, much more than I do. They're fabulous collaborators. OK. So basically, what do we do? The first thing you want to realize, especially when you're using these data driven methods to broadly characterize a region of the brain, is that a major source of bias in the structure you discover is the stimuli you use. So with vision, you have a real problem. You can't just scan people looking at everything, because there's too much. Right? You know, can't keep people in the scanner for 20 hours. And so, you have a problem of how to choose it. And your selection of stimuli shapes what you find, so it's kind of a mess. With audition, it turns out that there's a relatively small number of basic level sounds that people can recognize. So if you guys all write down-- you don't have to do this, but just to illustrate. If you all write down three frequently heard sounds that are easily recognizable, that you encounter regularly in your life. The three sounds that you wrote down are on our list of 165. Because, in fact, there are not that many different sounds. And so we did this on the web. We played people sounds. And obviously it depends on the grain, right. If it's this person's voice, versus that person's voice, there are hundreds of thousands. Right. But at the grain of person speaking, dog barking, toilet flushing, ambulance siren. At that grain, there's only a couple hundred sounds that everyone can pretty much recognize in a two second clip, and that they hear frequently. And that's really lovely, because it means we can scan subjects listening to all of them.
And we don't have this selection bias. So we basically tile the space of recognizable, frequently-heard, natural sounds. And we scan subjects listening to all of it. OK so here are some of our sounds. [VIDEO PLAYBACK] It's supposed to either rain or snow. [END PLAYBACK] This is our list, ordered by frequency. Most common, man speaking. Second most common, toilet flushing. And so forth. [VIDEO PLAYBACK] Hannah is good at compromising. [VAROOM] [END PLAYBACK] So we pop subjects in the scanner, and we scan them while they listen to these sounds. [VIDEO PLAYBACK] [CLACK, CLACK, CLACK] [VAROOM] [END PLAYBACK] Anyway, you get the idea. [VIDEO PLAYBACK] [WATER RUSHING] [GASP] [END PLAYBACK] OK. So we scan them while they listen to these sounds. And then what we get is a 165-dimensional vector describing the response profile for each voxel. OK. So for each voxel in the brain, we say how strong was the response to each of those sounds? And we get something like this. Everybody with me? Sort of? OK. So now what we do is, we take all of those voxels that are in greater suburban auditory cortex, which is just like a whole big region around, including but far beyond primary auditory cortex. Anything in that zone that responds to any of these sounds, is in the net. And we take all of those, and we put them into a huge matrix. OK. So this is now all of the voxels from auditory cortex, in 10 different subjects. OK, 11,000 voxels. And so we've got 11,000 voxels by 165 sounds. OK. So now the cool thing is, what we do is, we throw away the labels on the matrix. And just apply math. And say, what is the dominant structure in here? OK. And what I love about that is, this is a way to say, in a very theory-neutral way, what are the basic dimensions of representation that we have in auditory cortex? Not, can I find evidence for my hypothesis. But, let's look broadly and let the data tell us what the major structure is in there. OK. So basically what we do is, factorise this matrix. And probably half of you would understand this better than me. But just to describe it, basically, we do a-- it's not exactly independent component analysis, but it's a version of that. Actually, multiple versions of this that have slightly different constraints. Because of course, there are many ways to factorise this matrix. It's an unconstrained problem, so you need to bring some constraints. We tried to bring in minimalist ones in several different ways. It turns out, the results really don't depend strongly on this. And so, the cool thing is that the structure that emerges is not based on any hypothesis about functional profiles. Because the labels are not even used in the analysis. And it's not based on any assumption about the anatomy of auditory cortex. Because the locations of these voxels are not known by the analysis. OK. OK so basically, the assumption of this analysis goes like this. Each voxel, as I lamented earlier, is hundreds of thousands of neurons. So the hope here is that there's a relatively small number of kinds of neural populations. And that each one has a distinctive response profile over those 165 sounds. And that voxels have different ratios of the different neural population types. OK. And so, further, we assume that there's this smallish number of sort of canonical response profiles. Such that we can model the response of each voxel as a linear weighted sum of some small number of components. OK. And so, the goal then is to discover what those components are.
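Here is a hedged sketch of that kind of factorization. The actual study used its own ICA-like method with custom constraints; plain FastICA on synthetic data stands in for it here, and everything except the 165 sounds and the 6 components is an invented number:

```python
import numpy as np
from sklearn.decomposition import FastICA

# D is the voxel-by-sound matrix (the real one was ~11,000 x 165; smaller
# here so it runs quickly). Each voxel is modeled as a weighted sum of a
# few canonical response profiles.
rng = np.random.default_rng(0)
n_voxels, n_sounds, k = 2000, 165, 6

true_profiles = rng.exponential(1.0, (k, n_sounds))   # canonical profiles
true_weights = rng.exponential(1.0, (n_voxels, k))    # per-voxel mixtures
D = true_weights @ true_profiles \
    + 0.1 * rng.standard_normal((n_voxels, n_sounds))

ica = FastICA(n_components=k, random_state=0, max_iter=1000)
W = ica.fit_transform(D)    # anatomical weights: one map per component
R = ica.mixing_.T           # response profiles over the 165 sounds

print(W.shape, R.shape)     # (2000, 6) (6, 165)
```

The number of components is not assumed up front; as described next, it was chosen by fitting on half the data and asking how much variance is explained in the left-out half.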
And the idea is that each component is basically the response profile and the anatomical distribution of some neural population. OK. So let me just do that one other way here. So we're going to take this matrix, and we're going to factorise it into some set of N components. And each of those components is going to have a 165-dimensional vector, its response profile. OK. Each component will also have a weight matrix across the relevant voxels there. OK. Telling us how much that component contributes to each voxel. OK. And then we use, sort of, ICA to find these components. OK. The first thing you do, of course, in any of these problems is, OK, how many? And so, actually Sam did a beautiful analysis, the details of which I'll skip. Because they're complicated, because I actually don't remember all of them. But essentially, you can split the data in half. Model one whole half, and measure how much variance is accounted for in left-out data. And what you find is that, variance accounted for goes up 'til six components. And then goes down, because you start overfitting, right. So we know that there are six components in there. Now, that doesn't mean there are only six kinds of neural populations. That's, in part, a statement about what we can resolve with functional MRI. But we know that with this method, we're looking for six components. That's what it finds. And so, to remind you, the cool thing about the components that we're going to get out, which I'll tell you about in a second, is that nothing about this analysis constrained those components. There are no assumptions that went in there, right. So if you think about it, if all we can resolve for the response of each voxel to each sound is, say, high versus low. That's conservative. I think we can resolve, you know, a finer grain of magnitude of response. But even if it's just high or low, there are 2 to the 165 possible response profiles in here. Right. We're searching a massive space. Anything is possible. Right. And similarly, the anatomical weight distributions are completely unconstrained, with respect to whether they're clustered, overlapping, a speckly mess, any of those things. OK. So what did we find? I just said all this. OK, so we're looking for the response profiles and their distribution. OK speckly mess, right. Just said that. OK. So what we get with the response profiles is, four of them are things we already knew about auditory cortex. One is high frequency selectivity, and one is low frequency selectivity. That's tonotopic cortex. That's the one thing we knew really solidly. A third thing we find is a response to pitch, which is different than frequency. I'll skip the details. But we'd actually published a paper the year before, showing that there's a patch of cortex that likes pitch in particular. And that that's not the same as frequency. And it popped out as one of the components. A fourth one is a somewhat controversial claim about spectral temporal modulation, which many people have written about. The idea that this is somehow a useful basis set in auditory representations. And we found what seems to be a response that fits that. OK. So all of those are either totally expected, or kind of in line with a number of prior papers. It's the last two that are the cool ones. OK. And the numbers-- actually, the numbers refer to-- never mind. The numbers are largely arbitrary. The numbers are for dramatic effect, really. Component four. OK. So here's what one of these last two components is.
So now what we have is, this is a magnitude of response of that component. Remember, component is two things. It's got this profile here, and it's got the distribution over the cortex. So this is the profile to the 165 sounds. The colors refer to different categories of sound. We put them on Mechanical Turk and had people stick 1 of 10 different familiar labels on them. And so dark green is English speech and light green is foreign speech, not understood by the subjects. This just pops right out. We didn't twiddle. We didn't fuss. We didn't look for this, it just popped out. And light blue is singing. So this is a response to speech, a really selective response to speech. It's not language, because it doesn't care if it's English or foreign. So this is not something about representing language meaning, it's about representing the sounds of speech that are present here, and to some extent in vocal music. Pretty amazing. Now there have been a number of reports from functional MRI and from intracranial recordings, suggesting cortical regions selective for speech. This wasn't completely unprecedented, although it's certainly the strongest evidence for specificity. You can see that in this profile here. Right. Dark purple is the next thing you get to after the language and the singing. So you get all the way down before you're at dark purple. And dark purple is non-speech human vocalizations, stuff like laughing, and crying, and singing. Right. Which are similar in some ways. Not exactly speech, but it's similar. So that's damn selective. Pretty cool. Yeah? OK. I just said all that. OK the other component is even cooler, and here it is. OK. Here's the code. Non-vocal music and vocal music, or singing. This is a music selective response. This has never been reported before. Many people have looked. We have looked, it hasn't been found. We think that we were able to find this music selective response. In fact, we have evidence that we were able to find this music selective response. In large part, because of the use of this linear weighting model. If you then-- I got to show you where these things are in the brain. OK. Running out of time, so I'm accelerating here. We did a bunch of low level acoustic controls to show that these things really are selective. You can't account for them. They don't get the same response if you scramble them. They really have to do with the structure of speech and music. I'll skip all that. Right. So now we can take those things, those components, and project them back onto the brain. And say, where are they? OK. So first, let's do the reality check. Here's tonotopic cortex mapped in the usual hypothesis-driven way. And now, we're going to put outlines, just as landmarks, on the high and low frequency parts of tonotopic cortex. And so, I mentioned before that one of the components was low frequencies. Here it is. Perfectly aligning with frequency mapping-- this one pops out of the natural sound experiment, the ICA on the natural sounds, and this one is based on hypothesis-driven mapping. So that's a nice reality check. But what about speech cortex? Well, here it is. OK. So the white and black outlines are primary auditory cortex. And you see this band of speech selectivity right below it. Situated strategically between auditory cortex and language cortex, which is right below it, actually. Not shown here, but we know from other studies. So that's pretty cool. Here's where the music stuff is. It's anterior of primary auditory cortex.
And there's a little bit behind it as well. OK. So we think we were able to find the music selectivity, when it wasn't found before with functional MRI. Because this method enables us to discover selective components, even if they overlap within voxels with other components. Because our linear weighting model takes it apart and discovers the underlying latent component, which may be very selective. Even if, in all of the voxels, it's mixed in with something else. So actually, if you go in and you look at the same data. And you say, let's look for voxels that are individually very music selective, you can't really find them. Because they overlap a little bit with the pitch response and with some of the other stuff. So the standard methods can't find the selectivity in the way that we can with this kind of mathematical decomposition, which is really thrilling. I can say one more thing, and I'll take a question. And the final thing is, we have recently had the opportunity to reality check this stuff, by using intracranial recording from patients who have electrodes right on the surface of their brain. And we've done this in three subjects now. And in each subject, we see-- sorry, this is hard to see. These are responses of two different electrodes over time. So the stimulus lasts two seconds. So that's 0 to 2 seconds. This is time. And this is a speech selective electrode responding to native speech, foreign speech, and singing. And here is another electrode that responds to instrumental music in purple, and singing in blue. And so what this shows is, we can validate the selectivity. With intracranial recording, we can see that selectivity in individual electrodes, that you can't see in individual voxels. So that sort of validates having to go through the tunnel of math to infer the latent selective components underneath. Because we can see them in the raw data in the intracranial recording. So this is cool, because nobody even knows why people have music in the first place. And so the very idea that there are, apparently, bits of brain that are selectively engaged in processing music, is radical, and fascinating, and deeply puzzling. So, you know, one of the speculations about why we have music-- Steve Pinker famously wrote in one of his books that music is auditory cheesecake. By which he meant that music is not like some special purpose thing, it just pings a bunch of preexisting mechanisms. Like, you know, fat, and sweet, and all that stuff. Right. And so that idea is that music kind of makes use of mechanisms that exist for other reasons. And I think this argues otherwise. If you have selective brain regions, we don't know that they're innate. They're quite possibly learned. But they sure aren't piggybacking on other mechanisms. Those regions are pretty selective for music, as far as we can tell. |
MIT_RES9003_Brains_Minds_and_Machines_Summer_Course_Summer_2015 | Lecture_81_Russ_Tedrake_MITs_Entry_in_the_DARPA_Robotics_Challenge.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RUSS TEDRAKE: I've been getting to play with this robot for a few years now-- three years of my life basically devoted to that robot. It was one of the most exciting, technically challenging, exhausting, stressful, but ultimately fulfilling things I've ever done. We got to basically take this robot, make it drive a car, get out of the car-- that was tough-- open the door, turn valves, pick up a drill, cut a hole out of the wall. Notice there's no safety harness. It's battery autonomous. It had to walk over some rough terrain, climb some stairs at the end. It had to do this in front of an audience. Basically, we got two tries. And if your robot breaks, it breaks, right? And there was a $2 million prize at the end. We wanted to do it not for the $2 million prize, but for the technical challenge. And myself and a group of students, just like I said, absolutely devoted our lives to this. We spent all of our waking hours on this. We worked incredibly, incredibly hard. So just to give you a little bit of context, DARPA, our national defense funding agency, has gotten excited about the idea of these grand challenges, which get people to work really, really hard. The self-driving cars were the first one. MIT had a very successful team in the Urban Challenge led by John. And then it's unquestionably had transition impact into the world via Google, Uber, Apple, and John will tell you all about it. I think in 2012, DARPA was scratching their heads, saying, people haven't worked hard enough. And what's the new challenge going to be? And right around that time, there was a disaster that maybe helped focus their attention towards disaster response. So ultimately, it was October 2012 that everything started with this kickoff for the DARPA Robotics Challenge. The official challenge was cast in the light of disaster response using the scenario of the nuclear disaster as a backdrop. But I think really their goal was to evaluate and advance the state of the art in mobile manipulation. So if I'm the funding agency, what I think is that you see hardware coming out of industry that is fantastic. So Boston Dynamics was building these walking robots and the like. This one is the one we've been playing with, Atlas, built by Boston Dynamics, which is now Google. AUDIENCE: Alphabet. RUSS TEDRAKE: Alphabet, yeah. And then I think from the research labs, we've been seeing really sophisticated algorithms coming out but on relatively modest hardware. And I think it was time for a mash up, right? So the way they set up the competition was very interesting. It wasn't about making a completely autonomous robot. There was a twist. You could have a human operator, but they wanted to encourage autonomy. So what they did is they had a degraded network link between the human and the robot and some reward for going a little bit faster than the other guy. So the idea would be that if you had to stop and work over the degraded network link and control every joint of your robot, then you're going to be slower than the guy whose robot is making the decisions by itself.
That didn't play out as much as we expected, but that was the setup. That set up a spectrum where people could do full teleoperation, meaning joystick control of each of the joints if they wanted to. And maybe the goal is to have complete autonomy, and you can pick your place on the spectrum. So MIT, possibly to a fault, aimed for the full autonomy side. The idea was, let's just get a few clicks of information from the human. Let the human solve the really, really hard problems that he could solve efficiently-- object recognition. Scene understanding-- we don't have to do that, but a few clicks from the human can communicate that. But let the robot do all of the dynamics and control and planning side of things. So those few clicks have to seed nearly autonomous algorithms for perception, planning, and control. OK. So technically, I don't intend to go into too many details, but I would love to answer questions if you guys ask. And we can talk as much as we want about it. But the overarching theme to our approach, when we're controlling, perceiving, everything, is to formulate everything as an optimization problem. So even the simplest example in robotics is the inverse kinematics problem: if I want to put my hand in some particular place-- if I have a goal in world coordinates-- I have to figure out what the joint coordinates should be to make that happen. So we have joint positions in some vector q, and we just say, I'd like to be as close as possible. I have some comfortable position for my robot. We formulate the problem as an optimization-- say, I'd like to be as close to comfortable as possible in some simple cost function. And then I'm going to start putting in constraints, like my hand is in the desired configuration. But we have very advanced constraints. So especially for the balancing humanoid, we can say, for instance, that the center of mass has to be inside the support polygon. We can say, we're about to manipulate something. So I'd like the thing I'm going to manipulate to be in the cone of visibility of my vision sensors. I'd like my hand to approach. It doesn't matter where it approaches along the table, maybe, but the palm should be orthogonal to the table and should approach like this. And we put in more and more sophisticated collision avoidance type constraints and everything like this, and the optimization framework is general and can accept those types of constraints. And then we can solve them extremely efficiently with highly optimized algorithms. So for instance, that helped us with what I like to call the big robot little car problem. So we have a very big robot. It's a 400 pound, six foot something machine. And they asked us to drive a very little car to the point where the robot physically does not fit behind the steering wheel-- impossible. It just doesn't fit, kinematically. Torso's too big, steering wheel's right there, no chance. So you have to drive from the passenger seat. You have to put your foot over the console. You have to drive like this, and then our only option was to get out of the passenger side. So that was a hard problem kinematically, but we have this rich library of optimizations. We can drag it around. We can explore different kinematic configurations of the robot. But we also use the same language of optimization and constraints, and then we put in the dynamics of the robot as another constraint. And we can start doing efficient dynamic motion planning with the same tools.
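To make that inverse kinematics formulation concrete, here is a hedged sketch with a planar two-link arm standing in for Atlas and scipy standing in for the team's actual solvers (link lengths, postures, and target are made-up numbers):

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 1.0                    # link lengths of a toy 2-link arm
q0 = np.array([0.3, 0.3])            # "comfortable" joint posture
target = np.array([1.2, 0.8])        # desired hand position (reachable)

def hand(q):                         # forward kinematics
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

# Be as close to comfortable as possible, subject to the hand constraint.
res = minimize(
    lambda q: np.sum((q - q0) ** 2),
    q0,
    constraints={"type": "eq", "fun": lambda q: hand(q) - target},
    method="SLSQP",
)
print(res.x, hand(res.x))
```

Support-polygon, visibility, and collision constraints enter the same way: as extra equality or inequality constraints on q.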
So for instance, if we wanted Atlas to suddenly start jumping off cinder blocks or running, we did a lot of work in that regard to make our optimization algorithms efficient enough to scale to very complex motions that could be planned on the fly at interactive rates. So one of the things you might be familiar with-- Honda ASIMO is one of the famous robots that walks around like this, and it's a beautiful machine. They are extremely good at real time planning using limiting assumptions like keeping the center of mass at a constant height and things like this. And one of the questions we asked is, can we take some of the insights that have worked so well on those robots and generalize them to more general dynamic tasks? And one of the big ideas I want to try to communicate quickly is that even though our robot is extremely complicated, there's sort of a low dimensional problem sitting inside the big high dimensional problem. So if I start worrying about every joint angle in my hand while I'm thinking about walking, I'm dead, right? So actually, when you're thinking about walking, even doing gymnastics or something like this, I think the fundamental representation is the dynamics of your center of mass, your angular momentum, some bulk dynamics of your robot, and the contact forces you're exerting on the world, which are also constrained. And in this sort of six dimensional-- 12 dimensional if you have velocities-- space with these relatively limited constraints, you can actually do very efficient planning and then map that in a second pass back to the full model to figure out what my pinky's going to do. So we do that. We spent a lot of time doing that, and we can now plan motions for complicated humanoids that were far beyond our ability a few years ago. This was a major effort for us. My kids and I were watching American Ninja Warrior at the time, so we did all the Ninja Warrior tasks. So there were some algorithmic ideas that were required for that. It was also just a software engineering exercise to build a dynamics engine that provided analytical gradients and exposed all the sparsity in the problem, and to write custom solvers and things like that to make that work. It's not just about humanoids. We spent a day after we got Atlas doing those things to show that we could make a quadruped run around using the same exact algorithms. It took literally less than a day to make all these examples work. There's another level of optimization that's kicking around in here. So the humanoid, in some sense when it's moving around, is a fairly continuous dynamical system. There are punctuations when your foot hits the ground or something like this, so you think of that as sort of a smooth optimization problem. There's also a discrete optimization problem sitting in there, too, even for walking. So if you think about it, the methods I just talked about-- we're really talking about, OK, I move like this. I would prefer to move something like this, but there's a continuum of solutions I could possibly take. For walking, there's also this problem of just saying, am I going to move my right foot first or my left foot first? Am I going to step on cinder block one or cinder block two? There really is a discrete problem, which gives you a combinatorial problem if you have to make long-term decisions.
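As a minimal illustration of that discrete-plus-continuous structure (the team solved it with mixed-integer convex optimization; this one-dimensional toy just enumerates the discrete choices, and the regions and step limit are invented):

```python
import itertools

# Discrete part: which safe region each footstep lands in.
# Continuous part: where in that region the foot goes.
regions = {0: (0.0, 0.4), 1: (0.6, 1.0), 2: (1.3, 1.6)}  # safe x-intervals
start, goal, max_step, n_steps, eps = 0.1, 1.5, 0.45, 4, 1e-9

def place(x_prev, interval):
    lo, hi = interval
    # step as far toward the goal as the region and the step limit allow
    return min(hi, max(lo, min(x_prev + max_step, goal)))

best = None
for assign in itertools.product(regions, repeat=n_steps):  # discrete choices
    x, feasible = start, True
    for r in assign:
        nxt = place(x, regions[r])
        if abs(nxt - x) > max_step + eps:   # region unreachable this step
            feasible = False
            break
        x = nxt
    if feasible and (best is None or abs(goal - x) < best[0]):
        best = (abs(goal - x), assign)

print(best)   # -> (~0.0, (0, 1, 2, 2)): cross the gaps, end at the goal
```

The real planner optimizes the discrete assignment and the continuous foot placements jointly, with balance and reachability constraints, which is exactly what mixed-integer convex solvers are for.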
And one of the things we've tried to do well is be very explicit about modeling the discrete aspects and the continuous aspects of the problem individually and using the right solvers that could think about both of those together. So here's an example of how we do interactive footstep planning with the robot. If it's standing in front of some perceived cinder blocks, for instance, the human can quickly label discrete regions just by moving a mouse around. The regions that come out are actually fit by an algorithm. They look small because they're computed so that, if the center of the foot is inside that region, the whole foot will fit on the block. And they're also thinking about balance constraints and other things like that. But now we have discrete regions to possibly step in. We have a combinatorial problem and the smooth problem of moving my center of mass and the like, and we have very good new solvers to do that. And seeded inside that, I just want to communicate that there's all these little technical nuggets. We had to find a new way to make really fast approximations of big convex regions of free space. So we have optimizations that just figure it out-- the problem of finding the biggest polygon that fits in among all those obstacles is NP-hard. We're not going to solve that. But it turns out finding a pretty good polygon can be done extremely fast now. And the particular way we did it scales to very high dimensions and complicated obstacles to the point where we could do it on raw sensor data, and that was an enabling technology for us. So our robot now, when it's making plans-- so the one on the left is just walking towards the goal. The one on the right, we removed a cinder block. And normally, a robot would kind of get confused and stuck, because it's just thinking about this local plan, local plan, local plan. It wouldn't be able to stop and go completely in the other direction. But now, since we have this higher level combinatorial planning on top, we can do these big, long-term decision-making tasks at interactive rates. Also, the robot was too big to walk through a door, so we had to walk sideways through a door. And that was sort of a standing challenge. The guy who started the program, putting footsteps down by hand, said: whatever I do in footstep planning, I will never lay down footsteps to walk through a door again. That was the challenge. We did a lot of work on the balancing control for the robot, so it's a force controlled robot using hydraulic actuators everywhere. Again, I won't go into the details, but we thought a lot about the dynamics of the robot. How do you cast that as an efficient optimization that we can solve on the fly? And we were solving an optimization at a kilohertz to balance the robot. So you put it all together. And as a basic competency, how well does our robot walk around and balance? Here's one of the examples at a normal speed from the challenge. So the robot just puts its footsteps down ahead. The operator is mostly just watching and giving high level directions. I want to go over here, and the robot's doing its own thing. Now, all the other teams I know about were putting down the footsteps by hand on the obstacles. I don't know if someone else was doing it autonomously. We chose to do it autonomously. We were a little bit faster because of it, but I don't know if it was enabling. But we're very proud of our walking, even though it's still conservative. This is lousy compared to a human. Yeah?
AUDIENCE: So the obstacles are modeled by the robot's vision, or do you actually preset them? RUSS TEDRAKE: So we knew they were going to be cinder blocks. We didn't know their orientations or positions, so we had a cinder block fitting algorithm that would run on the fly and snap things into place with the cameras-- actually, the laser scanner. And then we walk up stairs. Little things-- if you care about walking, the heels are hanging off the back. There are special algorithms in there to balance on partial foot contact and things like that. And that made the difference. We could go up there efficiently, robustly. So I would say, though, that for conservative walking it really works well. We could plan these things on the fly. And we also had this user interface so that if the footstep planner ever did something stupid, the human could just drag a foot around and add a new constraint to the solver. It would continue to solve with the new constraint and adjust its solutions. We could do more dynamic plans. We could have it run, everything like that. We actually never tried this on the robot before the competition, because we were terrified of breaking the robot, and we couldn't accept the downtime. But now that the competition's over, this is exactly what we're trying. But the optimizations are slower and didn't always succeed. So in the real scenario, we were putting some more constraints on and doing much more conservative gaits. The balance control, I'd say, worked extremely well. So the hardest task was this getting-out-of-the-car task. We worked like crazy. We didn't work on it until the end. I thought DARPA was going to scratch it, honestly. But in the last month, it became clear that we had to do it. And then we spent a lot of effort on it. And we put the car in every possible situation. This was on cinder blocks. It's way high. It has to step down almost beyond the reachability of the leg. This thing was just super solid. So Andres and Lucas were the main designers of this algorithm. I'd say it's superhuman in this regard, right? A human would not do that, of course, but standing on one foot while someone's jumping on the car like this-- it really works well. In fact, the hardest part of that for the algorithm was the fact that it's trying to find out where the ground is, and the camera's going like this. So that was the reason it had this long pause before it went down. But there was one time that it didn't work well, and it's hard for me to watch this. But it turns out on the first-- you saw that little kick? This was horrible. I'll tell you exactly what happened, but I think it really exposed the limitation of the state of the art. So what happened in that particular situation was that the robot was almost autonomous in some ways, and we basically tried to have the human have to do almost nothing. And in the end, we got the human's checklist down to about five items, which was probably a mistake, because we screwed up on the checklist. So one of the five items was-- we have one set of programs that are running when the robot's driving the car. And then all the human had to do was turn off the driving controller and turn on the balancing controller. But it was exciting, and it was the first day of the competition. And we turned on the balancing controller and forgot to turn off the driving controller. So the ankle was still trying to drive the car. Even with that, the controller was robust enough. So I really think there's this fundamental thing: if you're close to your nominal plan, things are very robust.
But what happened is the ankle was still driving the car. I think we could have balanced with the ankle doing the wrong thing, except the ankle did the wrong thing just enough that the tailbone hit the seat of the car. That was no longer something we could handle, right? There was no contact sensor in the butt. That meant the dynamics model was very wrong. The state estimator got very confused. The foot came off the ground, and the state estimator had an assumption that the feet should be on the ground. That's how it knew where it was in the world. And basically, the controller was hosed, right? And that was the only time we could have done that badly-- the vibrations and everything. I had emails from people of all walks of life telling me what they thought was wrong with the brain of the robot from shaking like that. But that was a bad thing. So I think, fundamentally, if we're thinking about plans-- and that's what we know how to do at scale for high dimensional systems, single solutions-- then when we're close to the plan, things are good. When we're far from the plan, we're not very good. And a change in the contact situation-- even if it's very close in a Cartesian sense-- a change in the contact situation is a big change to the plan. There are lots of ways to address it. We're doing all of them now. It's all fundamentally about robustness. But ironically, the car was the only time we could have done that badly, right? Every other place, we worked out all these situations where, OK, the robot's walking, and then something bad happens and someone lances you or something. We had recovery. And then even if it tried to take a step-- even if that failed, it would go into a gentle mode where it would protect its hands, because we were afraid of breaking the hands. It would fall very gently to the ground. All that was good. We turned it off exactly once in the competition. We turned it off when we were in the car, because you can't take a step to recover when you're in the car and you're the same size as the car. And we didn't even want to protect our hands, because we once got our hand stuck on the steering wheel. So anyway, that was the only place where we could have shaken ourselves silly and fallen. And what happened? We fell down with our 400 pound robot. We broke the arm-- the right arm. Sadly, all of our practice runs had done all the tasks right-handed, but we got to show off a different form of robustness. So actually, because we had so much autonomy in the system, we flipped a bit and said, let's use the left arm for everything. Which is more than just mapping the joint coordinates over: it meant you had to walk up to the door on the other side, and the implications ripple back quite a bit. After having our arm just completely hosed, we were able to go through and do all the rest of the tasks except for the drill, which required two hands-- you had to pick up the drill and turn it on. We couldn't do that one. So we ended the day in second place with a different display of robustness. AUDIENCE: That's still pretty damn good. RUSS TEDRAKE: We were happy, but not as happy as if we had not fallen. OK. So I think walking around, balancing-- we're pretty good, but there's a limitation. I really do think everybody has that limitation to some extent. The manipulation capabilities of the robot were pretty limited, just because we didn't need to do it for the challenge. The manipulation requirements were minimal. You had to open doors.
Picking up a drill was the most complicated thing. We actually had a lot of really nice robotic hands to play with, but they all broke when you started really running them through these hard tests. So we ended up with these sort of lobster-claw kind of grippers, because they didn't break. They were robust, and they worked well. But it limited what we could do in manipulation. Again, the planning worked very well. We could pick up a board and even plan to make sure that the board didn't intersect with other boards in the world. We have really good planning capabilities, and those worked at interactive rates-- the kinematic plans. But the grasping was open loop, so there's really no feedback. There's current sensing, just to not overheat the hands. But basically, you do a lot of thinking to figure out how to get your hand near the board. And then you kind of close your eyes and go like this, and hope it lands in the hand. And most of the time, it does. Every once in a while, it doesn't. We experimented with every touch sensor we could get our hands on. That wasn't meant to be a pun. And we tried cameras and everything, but they were all just too fragile and difficult to use for the competition. We're doing a lot of work now on optimization for grasping, but I'll skip over that for time. So the other piece was, how does the human come into the perception side of the story? So one of these tasks was moving debris out from in front of a door. This is what it looked like in the original version of the competition-- the trials. The robot would come up and throw these boards out of the way, and you see the human operator over there with her big console of displays. This is what the laser in the robot's head sees. We have a spinning laser. We also have stereo vision. But the laser reconstruction of this gives you a mess of points. If you asked a vision algorithm-- some of you are vision experts, I'm sure, in the room-- to figure out what's going on in that mess of points, it's an extremely hard problem. But we have a human in the loop. So the idea is that one or two clicks from a human can turn that from an intractable problem into a pretty simple problem. Just say, there's a two-by-four here. And then a local search can do RANSAC-type local optimizations (a toy version is sketched after this paragraph) to find the best fit of a two-by-four to that local group of points, and that works well. And so the robot didn't have to think about the messy point clouds when it's doing its planning. It could think about the simplified geometry from the CAD models. And most of the planning was just on the CAD models. So this is what it looks like to drive the robot. You click somewhere saying there's a valve, then the perception algorithm finds the valve. Then the robot starts going. It actually shows you a ghost of what it's about to do. And then if you're happy with it, and if all things are going well, you just watch. But if it looks like it's about to do something stupid, you can come in, stop, interact, change the plans, and let it do its thing. It's kind of fun to watch the robot view of the world, right? So this is what the robot sees. It throws down its footsteps. It's deciding how to walk up to that valve. Again, when the right arm was broken-- this was one of our practice runs. The right arm was broken, and it had a valve. We had to flip the bit, and now it had to walk over to the other side of the valve. And there's a lot of things going on.
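As a toy illustration of that human-seeded fitting idea-- not the actual perception code, which fit full CAD templates like a two-by-four-- here is a RANSAC-style plane fit restricted to the laser points near a clicked seed. The radius, tolerance, iteration count, and synthetic data are all illustrative assumptions.

```python
# Toy human-seeded RANSAC fit (illustrative assumptions throughout).
import numpy as np

def fit_plane_near_click(cloud, click, radius=0.3, iters=200, tol=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # The human click localizes the problem: keep only nearby points.
    local = cloud[np.linalg.norm(cloud - click, axis=1) < radius]
    best_count, best_model = 0, None
    for _ in range(iters):
        # Hypothesize a plane from 3 random local points.
        p0, p1, p2 = local[rng.choice(len(local), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((local - p0) @ n)          # point-to-plane distances
        count = np.count_nonzero(dist < tol)     # inliers supporting this plane
        if count > best_count:
            best_count, best_model = count, (n, p0)
    return best_model, best_count

# Toy scene: a noisy horizontal surface plus uniform clutter.
rng = np.random.default_rng(1)
surface = np.c_[rng.uniform(-1, 1, (500, 2)), 0.02 * rng.standard_normal(500)]
clutter = rng.uniform(-1, 1, (200, 3))
cloud = np.vstack([surface, clutter])
(normal, point), inliers = fit_plane_near_click(cloud, click=np.zeros(3))
print("plane normal:", np.round(normal, 2), "inliers:", inliers)
```

The design point is that the click shrinks an intractable scene-understanding problem to a small local search, so even a simple sample-and-score loop finds the dominant surface quickly.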
A lot of pieces had to work well together to make all this work. One of the questions that I'll answer before you ask it-- if you've written it down, OK, that's fine-- is, why were the robots so slow? Why were they standing still? For a lot of teams out there it was waiting for the human, maybe, but for us it wasn't. It wasn't the planning time. The planning algorithms were super fast. Most of the time, we were waiting for sensor data. That meant there were two things. There was waiting for the laser to spin completely around, and also just being conservative-- wanting to get that laser data while the robot was stopped. And then there was getting the laser data back to the computer that had the fast planning algorithms in the back. So if there was a network blackout, we had to wait a little bit, and that meant we were standing still. But we've actually done a lot of work in the lab to show that we don't have to stand still. This is now the robot walking with its laser blindfolded, using only stereo vision, using one of the capabilities that came out of John's lab and others to do stereo fusion. The laser gives very accurate points, but it gives them slowly, at a low rate. And you have to wait for it to spin around. The camera is very dense, very high rate, but very noisy. And John and others have developed these new algorithms that can do real time filtering of that noisy data, and we demonstrated that they were good enough to walk on. And so we put all the pieces together-- real time footstep planning, real time balancing, real time perception-- and we were able to show we can walk continuously. This will be the future. So we had to do networking. We optimized network systems. We had to build servers. Unit tests, logistics, politics. It was exhausting. I think it was overall an incredibly good experience-- a huge success. I think the robots can move faster with only small changes, mostly on the perception side. The walking was sufficient. We can definitely do better. The manipulation was very basic. I think we need to do better there, but we didn't have to for those tasks. The robustness dominated everything. So I'll just end and take questions, but I'll show this sort of fun, again, robot view. This is the robot's God's-eye view of the world while it's doing all these tasks. You can sort of see what the robot labels with geometry and what it leaves as raw points. And it's just kind of fun to have on in the background, and then I'll take any questions. [APPLAUSE]